From 17ef93ed23c16a4418302e951540f65a5f1a5f7f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Jesu=CC=81s=20Pe=CC=81rez?=
Date: Wed, 14 Jan 2026 04:53:58 +0000
Subject: [PATCH] chore: fix docs after fences fix

---
 docs/README.md | 141 +++++++++++-
 docs/src/PROVISIONING.md | 44 ++--
 docs/src/README.md | 4 +-
 docs/src/ai/README.md | 16 +-
 docs/src/ai/ai-agents.md | 34 +--
 docs/src/ai/ai-assisted-forms.md | 26 +--
 docs/src/ai/architecture.md | 10 +-
 docs/src/ai/config-generation.md | 4 +-
 docs/src/ai/configuration.md | 54 ++---
 docs/src/ai/cost-management.md | 50 ++---
 docs/src/ai/mcp-integration.md | 48 ++---
 docs/src/ai/natural-language-config.md | 32 +--
 docs/src/ai/rag-system.md | 30 +--
 docs/src/ai/security-policies.md | 34 +--
 docs/src/ai/troubleshooting-with-ai.md | 34 +--
 docs/src/api-reference/README.md | 2 +-
 docs/src/api-reference/extensions.md | 26 +--
 .../src/api-reference/integration-examples.md | 26 +--
 docs/src/api-reference/nushell-api.md | 2 +-
 docs/src/api-reference/path-resolution.md | 44 ++--
 docs/src/api-reference/provider-api.md | 20 +-
 docs/src/api-reference/rest-api.md | 118 +++++-----
 docs/src/api-reference/sdks.md | 38 ++--
 docs/src/api-reference/websocket.md | 34 +--
 .../adr/ADR-001-project-structure.md | 2 +-
 .../adr/ADR-002-distribution-strategy.md | 6 +-
 .../adr/ADR-003-workspace-isolation.md | 6 +-
 .../adr/ADR-004-hybrid-architecture.md | 4 +-
 .../adr/ADR-005-extension-framework.md | 12 +-
 .../ADR-006-provisioning-cli-refactoring.md | 18 +-
 .../adr/ADR-007-kms-simplification.md | 4 +-
 .../adr/ADR-008-cedar-authorization.md | 10 +-
 .../adr/ADR-009-security-system-complete.md | 16 +-
 .../adr-010-configuration-format-strategy.md | 6 +-
 .../adr/adr-011-nickel-migration.md | 22 +-
 ...r-012-nushell-nickel-plugin-cli-wrapper.md | 10 +-
 .../adr/adr-013-typdialog-integration.md | 34 +--
 .../adr-014-secretumvault-integration.md | 36 ++--
 .../adr-015-ai-integration-architecture.md | 48 ++---
 ...r-016-schema-driven-accessor-generation.md | 6 +-
 ...17-plugin-wrapper-abstraction-framework.md | 12 +-
 .../adr-018-help-system-fluent-integration.md | 22 +-
 ...019-configuration-loader-modularization.md | 14 +-
 ...dr-020-command-handler-domain-splitting.md | 18 +-
 .../src/architecture/architecture-overview.md | 82 +++----
 .../config-loading-architecture.md | 14 +-
 .../database-and-config-architecture.md | 38 ++--
 docs/src/architecture/design-principles.md | 20 +-
 .../src/architecture/ecosystem-integration.md | 42 ++--
 docs/src/architecture/integration-patterns.md | 48 ++---
 .../architecture/multi-repo-architecture.md | 36 ++--
 docs/src/architecture/multi-repo-strategy.md | 62 +++---
 .../nickel-executable-examples.md | 52 ++---
 .../architecture/nickel-vs-kcl-comparison.md | 96 ++++----
 .../orchestrator-auth-integration.md | 28 +--
 docs/src/architecture/orchestrator-info.md | 6 +-
 .../orchestrator-integration-model.md | 56 +++--
 .../architecture/package-and-loader-system.md | 54 ++---
 docs/src/architecture/repo-dist-analysis.md | 40 ++--
 docs/src/architecture/system-overview.md | 10 +-
 .../typedialog-nickel-integration.md | 84 ++++---
 docs/src/configuration/config-validation.md | 68 +++---
 docs/src/development/auth-metadata-guide.md | 50 ++---
 docs/src/development/build-system.md | 106 ++++-----
 docs/src/development/command-handler-guide.md | 48 ++---
 docs/src/development/command-reference.md | 2 +-
 .../ctrl-c-implementation-notes.md | 28 +--
 docs/src/development/dev-configuration.md | 62 +++---
 .../development/dev-workspace-management.md | 84 ++++---
 docs/src/development/distribution-process.md | 92 ++++----
 docs/src/development/glossary.md | 62 +++---
 docs/src/development/implementation-guide.md | 60 +++---
 .../infrastructure-specific-extensions.md | 34 +--
 docs/src/development/integration.md | 68 +++---
 docs/src/development/kms-simplification.md | 58 ++---
 docs/src/development/mcp-server.md | 8 +-
 docs/src/development/project-structure.md | 30 +--
 .../provider-agnostic-architecture.md | 32 +--
 .../providers/provider-comparison.md | 2 +-
 .../providers/provider-development-guide.md | 57 ++---
 .../providers/provider-distribution-guide.md | 52 ++---
 .../providers/quick-provider-guide.md | 34 +--
 .../taskservs/taskserv-quick-guide.md | 30 +--
 .../typedialog-platform-config-guide.md | 74 +++---
 docs/src/development/workflow.md | 98 ++++----
 docs/src/getting-started/01-prerequisites.md | 16 +-
 docs/src/getting-started/02-installation.md | 26 +--
 .../getting-started/03-first-deployment.md | 34 +--
 docs/src/getting-started/04-verification.md | 50 ++---
 .../05-platform-configuration.md | 46 ++--
 docs/src/getting-started/getting-started.md | 70 +++--
 .../src/getting-started/installation-guide.md | 64 +++---
 .../installation-validation-guide.md | 72 +++---
 .../getting-started/quickstart-cheatsheet.md | 104 ++++----
 docs/src/getting-started/quickstart.md | 2 +-
 docs/src/getting-started/setup-profiles.md | 94 ++++---
 docs/src/getting-started/setup-quickstart.md | 22 +-
 .../src/getting-started/setup-system-guide.md | 22 +-
 docs/src/getting-started/setup.md | 54 ++---
 docs/src/guides/customize-infrastructure.md | 88 ++++---
 .../extension-development-quickstart.md | 30 +--
 docs/src/guides/from-scratch.md | 122 +++++------
 docs/src/guides/guide-system.md | 6 +-
 docs/src/guides/infrastructure-setup.md | 44 ++--
 .../src/guides/internationalization-system.md | 50 ++---
 docs/src/guides/multi-provider-deployment.md | 56 +++--
 docs/src/guides/multi-provider-networking.md | 60 +++---
 docs/src/guides/provider-digitalocean.md | 32 +--
 docs/src/guides/provider-hetzner.md | 42 ++--
 docs/src/guides/update-infrastructure.md | 132 ++++------
 .../workspace-generation-quick-reference.md | 24 +--
 .../batch-workflow-multi-provider.md | 18 +-
 .../infrastructure/batch-workflow-system.md | 4 +-
 docs/src/infrastructure/cli-architecture.md | 4 +-
 docs/src/infrastructure/cli-reference.md | 74 +++---
 .../infrastructure/config-rendering-guide.md | 89 ++++---
 docs/src/infrastructure/configuration.md | 88 ++++---
 .../infrastructure/dynamic-secrets-guide.md | 20 +-
 .../infrastructure-from-code-guide.md | 84 ++++---
 .../infrastructure-management.md | 100 ++++----
 docs/src/infrastructure/mode-system-guide.md | 50 ++---
 .../workspace-config-architecture.md | 46 ++--
 .../workspaces/workspace-config-commands.md | 22 +-
 .../workspaces/workspace-enforcement-guide.md | 68 +++---
 .../workspaces/workspace-guide.md | 2 +-
 .../workspaces/workspace-infra-reference.md | 50 ++---
 .../workspaces/workspace-setup.md | 28 +--
 .../workspaces/workspace-switching-guide.md | 75 +++---
 .../workspaces/workspace-switching-system.md | 8 +-
 .../integration/gitea-integration-guide.md | 74 +++---
 .../integration/integrations-quickstart.md | 56 ++--
 docs/src/integration/oci-registry-guide.md | 86 ++++---
 docs/src/integration/oci-registry-platform.md | 18 +-
 .../secrets-service-layer-complete.md | 88 ++++---
 .../integration/service-mesh-ingress-guide.md | 102 ++++-----
 .../operations/break-glass-training-guide.md | 34 +--
 .../cedar-policies-production-guide.md | 70 +++--
 docs/src/operations/control-center.md | 24 +--
 docs/src/operations/coredns-guide.md | 116 +++++-----
 docs/src/operations/deployment-guide.md | 114 +++++-----
 .../operations/incident-response-runbooks.md | 204 +++++++++---------
 docs/src/operations/installer-system.md | 26 +--
 docs/src/operations/installer.md | 24 +--
 docs/src/operations/mfa-admin-setup-guide.md | 114 +++++-----
 .../operations/monitoring-alerting-setup.md | 62 +++---
 docs/src/operations/orchestrator-system.md | 10 +-
 docs/src/operations/orchestrator.md | 8 +-
 docs/src/operations/platform.md | 22 +-
 .../production-readiness-checklist.md | 18 +-
 docs/src/operations/provisioning-server.md | 16 +-
 .../operations/service-management-guide.md | 186 ++++++++--------
 docs/src/quick-reference/general.md | 24 +--
 docs/src/quick-reference/justfile-recipes.md | 18 +-
 docs/src/quick-reference/oci.md | 52 ++---
 .platform-operations-cheatsheet.md | 54 ++---
 .../quick-reference/sudo-password-handling.md | 22 +-
 docs/src/roadmap/README.md | 10 +-
 docs/src/roadmap/ai-integration.md | 2 +-
 docs/src/roadmap/native-plugins.md | 14 +-
 docs/src/roadmap/nickel-workflows.md | 6 +-
 .../security/authentication-layer-guide.md | 100 ++++-----
 docs/src/security/config-encryption-guide.md | 92 ++++----
 docs/src/security/kms-service.md | 16 +-
 docs/src/security/nushell-plugins-guide.md | 100 ++++-----
 docs/src/security/nushell-plugins-system.md | 4 +-
 docs/src/security/plugin-integration-guide.md | 197 +++++++++--------
 docs/src/security/plugin-usage-guide.md | 54 ++---
 docs/src/security/rustyvault-kms-guide.md | 56 ++---
 docs/src/security/secrets-management-guide.md | 54 ++---
 docs/src/security/secretumvault-kms-guide.md | 60 +++---
 docs/src/security/security-system.md | 2 +-
 .../security/ssh-temporal-keys-user-guide.md | 60 +++---
 docs/src/testing/taskserv-validation-guide.md | 44 ++--
 docs/src/testing/test-environment-guide.md | 68 +++---
 docs/src/testing/test-environment-system.md | 12 +-
 .../troubleshooting/troubleshooting-guide.md | 152 ++++------
 .../troubleshooting/ctrl-c-sudo-handling.md | 16 +-
 177 files changed, 4127 insertions(+), 4010 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index fe1432c..df5ca50 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1 +1,138 @@
-# Provisioning Platform Documentation\n\nComplete documentation for the Provisioning Platform infrastructure automation system built with Nushell,\nNickel, and Rust.\n\n## 📖 Browse Documentation\n\nAll documentation is **directly readable** as markdown files in Git/GitHub—mdBook is optional.\n\n- **[Table of Contents](src/SUMMARY.md)** – Complete documentation index (188+ pages)\n- **[Browse src/ directory](src/)** – All markdown files organized by topic\n\n---\n\n## 🚀 Quick Navigation\n\n### For Users & Operators\n\n- **[Getting Started](src/getting-started/)** – Installation, setup, and first deployment\n- **[Operations Guide](src/operations/)** – Deployment, monitoring, orchestrator management\n- **[Troubleshooting](src/troubleshooting/troubleshooting-guide.md)** – Common issues and solutions\n- **[Security](src/security/)** – Authentication, encryption, secrets management\n\n### For Developers & Architects\n\n- **[Architecture Overview](src/architecture/)** – System design and integration patterns\n- **[Infrastructure Guide](src/infrastructure/)** – CLI, configuration system, workspaces\n- **[Development Guide](src/development/)** – Extensions, providers, taskservs, build system\n- **[API Reference](src/api-reference/)** – REST API, WebSocket, SDKs, integration examples\n\n### For Advanced Users\n\n- **[Deployment Guides](src/guides/)** – Multi-provider setup, customization, infrastructure examples\n- **[Integration Guides](src/integration/)** – Gitea, OCI, service mesh, secrets integration\n- **[Testing](src/testing/)** – Test environment setup and validation\n\n---\n\n## 📚 Documentation Structure\n\n```{$detected_lang}\nprovisioning/docs/\n├── README.md # This file – navigation hub\n├── book.toml # mdBook configuration\n├── src/ # Source markdown files (version-controlled)\n│ ├── SUMMARY.md # Complete table of contents\n│ ├── getting-started/ # Installation and setup\n│ ├── architecture/ # System design and ADRs\n│ ├── infrastructure/ # CLI, configuration, workspaces\n│ ├── operations/ # Deployment, orchestrator, monitoring\n│ ├── development/ # Extensions, providers, build system\n│ ├── api-reference/ # APIs and SDKs\n│ ├── security/ # Authentication, secrets, encryption\n│ ├── integration/ # Third-party integrations\n│ ├── guides/ # How-to guides and examples\n│ ├── troubleshooting/ # Common issues\n│ └── ... # 12 other sections\n├── book/ # Generated HTML output (Git-ignored)\n└── examples/ # Example workspace configurations\n```\n\n### Why `src/` subdirectory\n\nThis is the **standard mdBook convention**:\n- **Source (`src/`)**: Version-controlled markdown files, directly readable\n- **Output (`book/`)**: Generated HTML/CSS/JS, Git-ignored (regenerated on build)\n\nThis separation allows the same source files to generate multiple output formats (HTML, PDF, EPUB) without\ncluttering the version-controlled repository.\n\n---\n\n## 🔨 Building HTML with mdBook\n\nIf you prefer a formatted HTML website with search, themes, and copy buttons, build with mdBook:\n\n### Prerequisites\n\n```bash\ncargo install mdbook\n```\n\n### Build & Serve\n\n```bash\n# Navigate to docs directory\ncd provisioning/docs\n\n# Build HTML to book/ directory\nmdbook build\n\n# Serve locally at http://localhost:3000 (with live reload)\nmdbook serve\n```\n\n### Output\n\nGenerated HTML is available in `provisioning/docs/book/` after building.\n\n**Note**: mdBook is entirely optional. The markdown files in `src/` work perfectly fine in any Git\nviewer or text editor.\n\n---\n\n## 📖 Reading Markdown Directly\n\nAll documentation is standard GitHub Flavored Markdown. You can:\n\n- **GitHub/GitLab**: Click `provisioning/docs/src/` and browse directly\n- **Local Git**: Clone the repo and open any `.md` file in your editor\n- **Text Search**: Use `grep` or your editor's search to find topics across all markdown files\n- **mdBook (optional)**: Build HTML for formatted reading with search and theming\n\n---\n\n## 🔗 Key Reference Pages\n\n| Document | Purpose |\n| ------------------------------------------------------------------------------ | --------------------------------- |\n| [System Overview](src/architecture/system-overview.md) | High-level architecture |\n| [Installation Guide](src/getting-started/installation-guide.md) | Step-by-step setup |\n| [CLI Reference](src/infrastructure/cli-reference.md) | Command reference |\n| [Configuration System](src/infrastructure/configuration-system.md) | Config management |\n| [Security System](src/security/security-system.md) | Authentication & encryption |\n| [Orchestrator](src/operations/orchestrator.md) | Service orchestration |\n| [Workspace Guide](src/infrastructure/workspaces/workspace-guide.md) | Infrastructure workspaces |\n| [ADRs](src/architecture/adr/) | Architecture Decision Records |\n\n---\n\n## ❓ Questions\n\n- **Getting started** → Start with [Installation Guide](src/getting-started/installation-guide.md)\n- **Having issues** → Check [Troubleshooting](src/troubleshooting/troubleshooting-guide.md)\n- **Looking for API docs** → See [API Reference](src/api-reference/)\n- **Want architecture details** → Read [Architecture Overview](src/architecture/architecture-overview.md)\n\nFor complete navigation, see [Table of Contents](src/SUMMARY.md).
\ No newline at end of file
+# Provisioning Platform Documentation
+
+Complete documentation for the Provisioning Platform infrastructure automation system built with Nushell,
+Nickel, and Rust.
+
+## 📖 Browse Documentation
+
+All documentation is **directly readable** as markdown files in Git/GitHub—mdBook is optional.
+
+- **[Table of Contents](src/SUMMARY.md)** – Complete documentation index (188+ pages)
+- **[Browse src/ directory](src/)** – All markdown files organized by topic
+
+---
+
+## 🚀 Quick Navigation
+
+### For Users & Operators
+
+- **[Getting Started](src/getting-started/)** – Installation, setup, and first deployment
+- **[Operations Guide](src/operations/)** – Deployment, monitoring, orchestrator management
+- **[Troubleshooting](src/troubleshooting/troubleshooting-guide.md)** – Common issues and solutions
+- **[Security](src/security/)** – Authentication, encryption, secrets management
+
+### For Developers & Architects
+
+- **[Architecture Overview](src/architecture/)** – System design and integration patterns
+- **[Infrastructure Guide](src/infrastructure/)** – CLI, configuration system, workspaces
+- **[Development Guide](src/development/)** – Extensions, providers, taskservs, build system
+- **[API Reference](src/api-reference/)** – REST API, WebSocket, SDKs, integration examples
+
+### For Advanced Users
+
+- **[Deployment Guides](src/guides/)** – Multi-provider setup, customization, infrastructure examples
+- **[Integration Guides](src/integration/)** – Gitea, OCI, service mesh, secrets integration
+- **[Testing](src/testing/)** – Test environment setup and validation
+
+---
+
+## 📚 Documentation Structure
+
+```bash
+provisioning/docs/
+├── README.md            # This file – navigation hub
+├── book.toml            # mdBook configuration
+├── src/                 # Source markdown files (version-controlled)
+│   ├── SUMMARY.md       # Complete table of contents
+│   ├── getting-started/ # Installation and setup
+│   ├── architecture/    # System design and ADRs
+│   ├── infrastructure/  # CLI, configuration, workspaces
+│   ├── operations/      # Deployment, orchestrator, monitoring
+│   ├── development/     # Extensions, providers, build system
+│   ├── api-reference/   # APIs and SDKs
+│   ├── security/        # Authentication, secrets, encryption
+│   ├── integration/     # Third-party integrations
+│   ├── guides/          # How-to guides and examples
+│   ├── troubleshooting/ # Common issues
+│   └── ...              # 12 other sections
+├── book/                # Generated HTML output (Git-ignored)
+└── examples/            # Example workspace configurations
+```
+
+### Why `src/` subdirectory
+
+This is the **standard mdBook convention**:
+- **Source (`src/`)**: Version-controlled markdown files, directly readable
+- **Output (`book/`)**: Generated HTML/CSS/JS, Git-ignored (regenerated on build)
+
+This separation allows the same source files to generate multiple output formats (HTML, PDF, EPUB) without
+cluttering the version-controlled repository.
+
+---
+
+## 🔨 Building HTML with mdBook
+
+If you prefer a formatted HTML website with search, themes, and copy buttons, build with mdBook:
+
+### Prerequisites
+
+```bash
+cargo install mdbook
+```
+
+### Build & Serve
+
+```bash
+# Navigate to docs directory
+cd provisioning/docs
+
+# Build HTML to book/ directory
+mdbook build
+
+# Serve locally at http://localhost:3000 (with live reload)
+mdbook serve
+```
+
+### Output
+
+Generated HTML is available in `provisioning/docs/book/` after building.
+
+**Note**: mdBook is entirely optional. The markdown files in `src/` work perfectly fine in any Git
+viewer or text editor.
+
+---
+
+## 📖 Reading Markdown Directly
+
+All documentation is standard GitHub Flavored Markdown. You can:
+
+- **GitHub/GitLab**: Click `provisioning/docs/src/` and browse directly
+- **Local Git**: Clone the repo and open any `.md` file in your editor
+- **Text Search**: Use `grep` or your editor's search to find topics across all markdown files
+- **mdBook (optional)**: Build HTML for formatted reading with search and theming
+
+---
+
+## 🔗 Key Reference Pages
+
+| Document | Purpose |
+| ------------------------------------------------------------------------------ | --------------------------------- |
+| [System Overview](src/architecture/system-overview.md) | High-level architecture |
+| [Installation Guide](src/getting-started/installation-guide.md) | Step-by-step setup |
+| [CLI Reference](src/infrastructure/cli-reference.md) | Command reference |
+| [Configuration System](src/infrastructure/configuration-system.md) | Config management |
+| [Security System](src/security/security-system.md) | Authentication & encryption |
+| [Orchestrator](src/operations/orchestrator.md) | Service orchestration |
+| [Workspace Guide](src/infrastructure/workspaces/workspace-guide.md) | Infrastructure workspaces |
+| [ADRs](src/architecture/adr/) | Architecture Decision Records |
+
+---
+
+## ❓ Questions
+
+- **Getting started** → Start with [Installation Guide](src/getting-started/installation-guide.md)
+- **Having issues** → Check [Troubleshooting](src/troubleshooting/troubleshooting-guide.md)
+- **Looking for API docs** → See [API Reference](src/api-reference/)
+- **Want architecture details** → Read [Architecture Overview](src/architecture/architecture-overview.md)
+
+For complete navigation, see [Table of Contents](src/SUMMARY.md).
\ No newline at end of file
diff --git a/docs/src/PROVISIONING.md b/docs/src/PROVISIONING.md
index 1b2baf9..2077a20 100644
--- a/docs/src/PROVISIONING.md
+++ b/docs/src/PROVISIONING.md
@@ -86,7 +86,7 @@ Declarative Infrastructure as Code (IaC) platform providing:
 **Solution**: Unified abstraction layer with provider-agnostic interfaces.
 Write configuration once, deploy anywhere.
-```text
+```nickel
 # Same configuration works on UpCloud, AWS, or local infrastructure
 server: Server {
   name = "web-01"
@@ -101,7 +101,7 @@ server: Server {
 **Solution**: Automatic dependency resolution with topological sorting and health checks.
-```text
+```toml
 # Provisioning resolves: containerd → etcd → kubernetes → cilium
 taskservs = ["cilium"]  # Automatically installs all dependencies
 ```
@@ -112,7 +112,7 @@ taskservs = ["cilium"] # Automatically installs all dependencies
 **Solution**: Hierarchical configuration system with 476+ config accessors replacing 200+ ENV variables.
-```text
+```text
 Defaults → User → Project → Infrastructure → Environment → Runtime
 ```
@@ -197,7 +197,7 @@ Clusters handle:
 Isolated environments for different projects or deployment stages.
-```text
+```bash
 workspace_librecloud/  # Production workspace
 ├── infra/             # Infrastructure definitions
 ├── config/            # Workspace configuration
@@ -211,7 +211,7 @@ workspace_dev/ # Development workspace
 Switch between workspaces with single command:
-```text
+```bash
 provisioning workspace switch librecloud
 ```
@@ -240,7 +240,7 @@ Coordinated sequences of operations with dependency management.
 ### System Components
-```text
+```bash
 ┌─────────────────────────────────────────────────────────────────┐
 │ User Interface Layer                                            │
 │ • CLI (provisioning command)                                    │
@@ -282,7 +282,7 @@ Coordinated sequences of operations with dependency management.
 ### Directory Structure
-```text
+```bash
 project-provisioning/
 ├── provisioning/        # Core provisioning system
 │   ├── core/            # Core engine and libraries
@@ -514,7 +514,7 @@ Comprehensive version tracking and updates.
 ### Data Flow
-```text
+```bash
 1. User defines infrastructure in Nickel
    ↓
 2. CLI loads configuration (hierarchical)
@@ -540,7 +540,7 @@ Comprehensive version tracking and updates.
 **Step 1**: Define infrastructure in Nickel
-```text
+```nickel
 # infra/my-cluster.ncl
 let config = {
   infra = {
@@ -561,13 +561,13 @@ config
 **Step 2**: Submit to Provisioning
-```text
+```bash
 provisioning server create --infra my-cluster
 ```
 **Step 3**: Provisioning executes workflow
-```text
+```bash
 1. Create workflow: "deploy-my-cluster"
 2. Resolve dependencies:
    - containerd (required by kubernetes)
@@ -592,7 +592,7 @@ provisioning server create --infra my-cluster
 **Step 4**: Verify deployment
-```text
+```bash
 provisioning cluster status my-cluster
 ```
@@ -600,7 +600,7 @@ provisioning cluster status my-cluster
 Configuration values are resolved through a hierarchy:
-```text
+```text
 1. System Defaults (provisioning/config/config.defaults.toml)
    ↓ (overridden by)
 2. User Preferences (~/.config/provisioning/user_config.yaml)
@@ -616,7 +616,7 @@ Configuration values are resolved through a hierarchy:
 **Example**:
-```text
+```bash
 # System default
 [servers]
 default_plan = "small"
@@ -641,7 +641,7 @@ provisioning server create --plan xlarge # Overrides everything
 Deploy Kubernetes clusters across different cloud providers with identical configuration.
-```text
+```bash
 # UpCloud cluster
 provisioning cluster create k8s-prod --provider upcloud
@@ -653,7 +653,7 @@ provisioning cluster create k8s-prod --provider aws
 Manage multiple environments with workspace switching.
-```text
+```bash
 # Development
 provisioning workspace switch dev
 provisioning cluster create app-stack
@@ -671,7 +671,7 @@ provisioning cluster create app-stack
 Test infrastructure changes before deploying to production.
-```text
+```bash
 # Test Kubernetes upgrade locally
 provisioning test topology load kubernetes_3node | test env cluster kubernetes --version 1.29.0
@@ -687,7 +687,7 @@ provisioning test env cleanup
 Deploy to multiple regions in parallel.
-```text +```bash # workflows/multi-region.ncl let batch_workflow = { operations = [ @@ -715,7 +715,7 @@ let batch_workflow = { batch_workflow ``` -```text +```bash provisioning batch submit workflows/multi-region.ncl provisioning batch monitor ``` @@ -724,7 +724,7 @@ provisioning batch monitor Recreate infrastructure from configuration. -```text +```toml # Infrastructure destroyed provisioning workspace switch prod @@ -738,7 +738,7 @@ provisioning cluster create --infra backup-restore --wait Automated testing and deployment pipelines. -```text +```bash # .gitlab-ci.yml test-infrastructure: script: @@ -941,4 +941,4 @@ See [LICENSE](LICENSE) file in project root. **Maintained By**: Architecture Team **Last Updated**: 2025-10-07 -**Project Home**: [provisioning/](provisioning/) +**Project Home**: [provisioning/](provisioning/) \ No newline at end of file diff --git a/docs/src/README.md b/docs/src/README.md index af9ee95..4ab8ada 100644 --- a/docs/src/README.md +++ b/docs/src/README.md @@ -117,7 +117,7 @@ Nickel, and Rust. ## Documentation Structure -```text +```bash provisioning/docs/src/ ├── README.md (this file) # Documentation hub ├── getting-started/ # Getting started guides @@ -382,4 +382,4 @@ This project welcomes contributions! 
See **[Development Guide](development/READM **Maintained By**: Provisioning Team **Last Review**: 2025-10-06 -**Next Review**: 2026-01-06 +**Next Review**: 2026-01-06 \ No newline at end of file diff --git a/docs/src/ai/README.md b/docs/src/ai/README.md index 5e642d2..279a981 100644 --- a/docs/src/ai/README.md +++ b/docs/src/ai/README.md @@ -20,7 +20,7 @@ The AI integration consists of multiple components working together to provide i ### Natural Language Configuration Generate infrastructure configurations from plain English descriptions: -```text +```toml provisioning ai generate "Create a production PostgreSQL cluster with encryption and daily backups" ``` @@ -31,7 +31,7 @@ Real-time suggestions and explanations as you fill out configuration forms via t ### Intelligent Troubleshooting AI analyzes deployment failures and suggests fixes: -```text +```bash provisioning ai troubleshoot deployment-12345 ``` @@ -39,13 +39,13 @@ provisioning ai troubleshoot deployment-12345 Configuration Optimization AI reviews configurations and suggests performance and security improvements: -```text +```toml provisioning ai optimize workspaces/prod/config.ncl ``` ### Autonomous Agents AI agents execute multi-step workflows with minimal human intervention: -```text +```bash provisioning ai agent --goal "Set up complete dev environment for Python app" ``` @@ -68,7 +68,7 @@ provisioning ai agent --goal "Set up complete dev environment for Python app" ### Enable AI Features -```text +```bash # Edit provisioning config vim provisioning/config/ai.toml @@ -86,7 +86,7 @@ troubleshooting = true ### Generate Configuration from Natural Language -```text +```toml # Simple generation provisioning ai generate "PostgreSQL database with encryption" @@ -99,7 +99,7 @@ provisioning ai generate ### Use AI-Assisted Forms -```text +```bash # Open typdialog web UI with AI assistance provisioning workspace init --interactive --ai-assist @@ -110,7 +110,7 @@ provisioning workspace init --interactive 
--ai-assist ### Troubleshoot with AI -```text +```bash # Analyze failed deployment provisioning ai troubleshoot deployment-12345 diff --git a/docs/src/ai/ai-agents.md b/docs/src/ai/ai-agents.md index c0cf63e..0558035 100644 --- a/docs/src/ai/ai-agents.md +++ b/docs/src/ai/ai-agents.md @@ -13,7 +13,7 @@ security and requiring human approval for critical operations. Enable AI agents to manage complex provisioning workflows: -```text +```bash User Goal: "Set up a complete development environment with: - PostgreSQL database @@ -39,7 +39,7 @@ AI Agent executes: Agents coordinate complex, multi-component deployments: -```text +```bash Goal: "Deploy production Kubernetes cluster with managed databases" Agent Plan: @@ -75,7 +75,7 @@ Agent Plan: Agents adapt to conditions and make intelligent decisions: -```text +```bash Scenario: Database provisioning fails due to resource quota Standard approach (human): @@ -102,7 +102,7 @@ Agent approach: Agents understand resource dependencies: -```text +```bash Knowledge graph of dependencies: VPC ──→ Subnets ──→ EC2 Instances @@ -125,7 +125,7 @@ Agent ensures: ### Agent Design Pattern -```text +```bash ┌────────────────────────────────────────────────────────┐ │ Agent Supervisor (Orchestrator) │ │ - Accepts user goal │ @@ -151,7 +151,7 @@ Agent ensures: ### Agent Workflow -```text +```bash Start: User Goal ↓ ┌─────────────────────────────────────────┐ @@ -214,7 +214,7 @@ Success: Deployment Complete ### 1. Database Specialist Agent -```text +```bash Responsibilities: - Create and configure databases - Set up replication and backups @@ -231,7 +231,7 @@ Examples: ### 2. Kubernetes Specialist Agent -```text +```yaml Responsibilities: - Create and configure Kubernetes clusters - Configure networking and ingress @@ -248,7 +248,7 @@ Examples: ### 3. Infrastructure Agent -```text +```bash Responsibilities: - Create networking infrastructure - Configure security and firewalls @@ -265,7 +265,7 @@ Examples: ### 4. 
Monitoring Agent -```text +```bash Responsibilities: - Deploy monitoring stack - Configure alerting @@ -282,7 +282,7 @@ Examples: ### 5. Compliance Agent -```text +```bash Responsibilities: - Check security policies - Verify compliance requirements @@ -301,7 +301,7 @@ Examples: ### Example 1: Development Environment Setup -```text +```bash $ provisioning ai agent --goal "Set up dev environment for Python web app" Agent Plan Generated: @@ -357,7 +357,7 @@ Grafana dashboards: [http://grafana.internal:3000](http://grafana.internal:3000) ### Example 2: Production Kubernetes Deployment -```text +```yaml $ provisioning ai agent --interactive --goal "Deploy production Kubernetes cluster with managed databases" @@ -414,7 +414,7 @@ User: Review configs Agents stop and ask humans for approval at critical points: -```text +```bash Automatic Approval (Agent decides): - Create configuration - Validate configuration @@ -434,7 +434,7 @@ Human Approval Required: All decisions logged for audit trail: -```text +```bash Agent Decision Log: | 2025-01-13 10:00:00 | Generate database config | | 2025-01-13 10:00:05 | Config validation: PASS | @@ -451,7 +451,7 @@ Agent Decision Log: Agents can rollback on failure: -```text +```bash Scenario: Database creation succeeds, but Kubernetes creation fails Agent behavior: @@ -469,7 +469,7 @@ Full rollback capability if entire workflow fails before human approval. ### Agent Settings -```text +```toml # In provisioning/config/ai.toml [ai.agents] enabled = true diff --git a/docs/src/ai/ai-assisted-forms.md b/docs/src/ai/ai-assisted-forms.md index 0ededcb..41376ed 100644 --- a/docs/src/ai/ai-assisted-forms.md +++ b/docs/src/ai/ai-assisted-forms.md @@ -11,7 +11,7 @@ typdialog web UI. 
This enables users to configure infrastructure through interac Enhance configuration forms with AI-powered assistance: -```text +```toml User typing in form field: "storage" ↓ AI analyzes context: @@ -38,7 +38,7 @@ Suggestions appear: ### User Interface Integration -```text +```bash ┌────────────────────────────────────────┐ │ Typdialog Web UI (React/TypeScript) │ │ │ @@ -65,7 +65,7 @@ Suggestions appear: ### Suggestion Pipeline -```text +```bash User Event (typing, focusing field, validation error) ↓ ┌─────────────────────────────────────┐ @@ -107,7 +107,7 @@ User Event (typing, focusing field, validation error) Intelligent suggestions based on context: -```text +```bash Scenario: User filling database configuration form 1. Engine selection @@ -135,7 +135,7 @@ Scenario: User filling database configuration form Human-readable error messages with fixes: -```text +```bash User enters: "storage = -100" Current behavior: @@ -157,7 +157,7 @@ Planned AI behavior: Suggestions change based on other fields: -```text +```bash Scenario: Multi-step configuration form Step 1: Select environment @@ -186,7 +186,7 @@ Step 4: Encryption Quick access to relevant docs: -```text +```bash Field: "Backup Retention Days" Suggestion popup: @@ -207,7 +207,7 @@ Suggestion popup: Suggest multiple related fields together: -```text +```bash User selects: environment = "production" AI suggests completing: @@ -231,7 +231,7 @@ AI suggests completing: ### Frontend (typdialog-ai JavaScript/TypeScript) -```text +```bash // React component for field with AI assistance interface AIFieldProps { fieldName: string; @@ -286,7 +286,7 @@ function AIAssistedField({fieldName, formContext, schema}: AIFieldProps) { ### Backend Service Integration -```text +```bash // In AI Service: field suggestion endpoint async fn suggest_field_value( req: SuggestFieldRequest, @@ -316,7 +316,7 @@ async fn suggest_field_value( ### Form Assistant Settings -```text +```toml # In provisioning/config/ai.toml [ai.forms] enabled = 
true @@ -352,7 +352,7 @@ track_rejected_suggestions = true ### Scenario: New User Configuring PostgreSQL -```text +```bash 1. User opens typdialog form - Form title: "Create Database" - First field: "Database Engine" @@ -395,7 +395,7 @@ track_rejected_suggestions = true NLC and form assistance share the same backend: -```text +```bash Natural Language Generation AI-Assisted Forms ↓ ↓ "Create a PostgreSQL db" Select field values diff --git a/docs/src/ai/architecture.md b/docs/src/ai/architecture.md index 8ff9cf6..67f73cc 100644 --- a/docs/src/ai/architecture.md +++ b/docs/src/ai/architecture.md @@ -36,7 +36,7 @@ The RAG system enables AI to access and reason over platform documentation: - Semantic caching for repeated queries **Capabilities**: -```text +```bash provisioning ai query "How do I set up Kubernetes?" provisioning ai template "Describe my infrastructure" ``` @@ -56,14 +56,14 @@ Provides Model Context Protocol integration: **Status**: ✅ Production-Ready Interactive commands: -```text +```bash provisioning ai template --prompt "Describe infrastructure" provisioning ai query --prompt "Configuration question" provisioning ai chat # Interactive mode ``` **Configuration**: -```text +```toml [ai] enabled = true provider = "anthropic" # or "openai" or "local" @@ -108,7 +108,7 @@ Real-time AI suggestions in configuration forms: ## Architecture Diagram -```text +```bash ┌─────────────────────────────────────────────────┐ │ User Interface │ │ ├── CLI (provisioning ai ...)
│ @@ -191,4 +191,4 @@ See [Configuration Guide](configuration.md) for: **Last Updated**: 2025-01-13 **Status**: ✅ Production-Ready (core system) -**Test Coverage**: 22/22 tests passing +**Test Coverage**: 22/22 tests passing \ No newline at end of file diff --git a/docs/src/ai/config-generation.md b/docs/src/ai/config-generation.md index f6c08ce..f4a7c33 100644 --- a/docs/src/ai/config-generation.md +++ b/docs/src/ai/config-generation.md @@ -14,7 +14,7 @@ The Configuration Generator (typdialog-prov-gen) will provide template-based Nic - Preview before generation ### Customization via Natural Language -```text +```bash provisioning ai config-gen --template "kubernetes-cluster" --customize "Add Prometheus monitoring, increase replicas to 5, use us-east-1" @@ -32,7 +32,7 @@ provisioning ai config-gen ## Architecture -```text +```bash Template Library ↓ Template Selection (AI + User) diff --git a/docs/src/ai/configuration.md b/docs/src/ai/configuration.md index 6597c27..d4a65db 100644 --- a/docs/src/ai/configuration.md +++ b/docs/src/ai/configuration.md @@ -9,7 +9,7 @@ controls, and security settings. 
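The configuration chunks that follow describe a layered `ai.toml` (defaults, then user settings, then `PROVISIONING_AI_*` environment overrides). As a minimal sketch of that precedence order — not the platform's actual loader, which is implemented in Nushell/Rust — the layering can be modeled as a recursive dictionary merge with an environment pass on top:

```python
import os

def merge_layers(*layers: dict) -> dict:
    # Later layers win; nested tables are merged key by key.
    merged: dict = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge_layers(merged[key], value)
            else:
                merged[key] = value
    return merged

def apply_env_overrides(config: dict, env: dict) -> dict:
    # e.g. PROVISIONING_AI_PROVIDER=local -> config["ai"]["provider"] = "local"
    out = merge_layers(config)
    for var, value in env.items():
        if var.startswith("PROVISIONING_AI_"):
            key = var.removeprefix("PROVISIONING_AI_").lower()
            out.setdefault("ai", {})[key] = value
    return out

defaults = {"ai": {"enabled": True, "provider": "anthropic", "temperature": 0.7}}
user = {"ai": {"temperature": 0.5}}
config = apply_env_overrides(merge_layers(defaults, user),
                             {"PROVISIONING_AI_PROVIDER": "local"})
print(config["ai"])  # → {'enabled': True, 'provider': 'local', 'temperature': 0.5}
```

The exact override semantics (deep vs. shallow merge, which env vars are recognized) are assumptions here; consult the platform's config loader for the authoritative behavior.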
### Minimal Configuration -```text +```toml # provisioning/config/ai.toml [ai] enabled = true @@ -27,7 +27,7 @@ temperature = 0.7 ### Initialize Configuration -```text +```bash # Generate default configuration provisioning config init ai @@ -45,7 +45,7 @@ provisioning config show ai ### Anthropic Claude -```text +```toml [ai] enabled = true provider = "anthropic" @@ -68,7 +68,7 @@ top_k = 40 ### OpenAI GPT-4 -```text +```toml [ai] enabled = true provider = "openai" @@ -89,7 +89,7 @@ top_p = 0.95 ### Local Models -```text +```toml [ai] enabled = true provider = "local" @@ -112,7 +112,7 @@ max_batch_size = 4 ### Enable Specific Features -```text +```toml [ai.features] # Core features (production-ready) rag_search = true # Retrieve-Augmented Generation @@ -137,7 +137,7 @@ knowledge_base = false # Custom knowledge base per workspace ### Cache Strategy -```text +```toml [ai.cache] enabled = true cache_type = "memory" # or "redis", "disk" @@ -169,7 +169,7 @@ cache_embeddings = true # Cache embedding vectors ### Cache Metrics -```text +```bash # Monitor cache performance provisioning admin cache stats ai @@ -184,7 +184,7 @@ provisioning admin cache analyze ai --hours 24 ### Rate Limits -```text +```toml [ai.limits] # Tokens per request max_tokens = 4096 @@ -207,7 +207,7 @@ track_cost_per_request = true ### Cost Budgeting -```text +```toml [ai.budget] enabled = true monthly_limit_usd = 1000 @@ -226,7 +226,7 @@ local_limit = 0 # Free (run locally) ### Track Costs -```text +```bash # View cost metrics provisioning admin costs show ai --period month @@ -244,7 +244,7 @@ provisioning admin costs export ai --format csv --output costs.csv ### Authentication -```text +```toml [ai.auth] # API key from environment variable api_key = "${PROVISIONING_AI_API_KEY}" @@ -263,7 +263,7 @@ signing_method = "hmac-sha256" ### Authorization (Cedar) -```text +```toml [ai.authorization] enabled = true policy_file = "provisioning/policies/ai-policies.cedar" @@ -276,7 +276,7 @@ policy_file =
"provisioning/policies/ai-policies.cedar" ### Data Protection -```text +```toml [ai.security] # Sanitize data before sending to external LLM sanitize_pii = true @@ -300,7 +300,7 @@ local_only = false # Set true for air-gapped deployments ### Vector Store Setup -```text +```toml [ai.rag] enabled = true @@ -337,7 +337,7 @@ code_overlap = 128 ### Index Management -```text +```bash # Create indexes provisioning ai index create rag @@ -355,7 +355,7 @@ provisioning ai index cleanup rag --older-than 30days ### MCP Server Setup -```text +```toml [ai.mcp] enabled = true port = 3000 @@ -380,7 +380,7 @@ timeout_seconds = 30 ### MCP Client Configuration -```text +```json ~/.claude/claude_desktop_config.json: { "mcpServers": { @@ -400,7 +400,7 @@ timeout_seconds = 30 ### Logging Configuration -```text +```toml [ai.logging] level = "info" # or "debug", "warn", "error" format = "json" # or "text" @@ -423,7 +423,7 @@ log_costs = true ### Metrics and Monitoring -```text +```bash # View AI service metrics provisioning admin metrics show ai @@ -443,7 +443,7 @@ curl http://localhost:8083/metrics ### Configuration Validation -```text +```bash # Validate configuration syntax provisioning config validate ai @@ -464,7 +464,7 @@ provisioning ai health-check ### Common Settings -```text +```bash # Provider configuration export PROVISIONING_AI_PROVIDER="anthropic" export PROVISIONING_AI_MODEL="claude-sonnet-4" @@ -492,7 +492,7 @@ export RUST_LOG="provisioning::ai=info" ### Common Issues **Issue**: API key not recognized -```text +```bash # Check environment variable is set echo $PROVISIONING_AI_API_KEY @@ -504,7 +504,7 @@ provisioning ai test provider anthropic ``` **Issue**: Cache not working -```text +```bash # Check cache status provisioning admin cache stats ai @@ -517,7 +517,7 @@ RUST_LOG=provisioning::cache=debug provisioning-ai-service ``` **Issue**: RAG search not finding results -```text +```bash # Rebuild RAG indexes provisioning ai index rebuild
rag @@ -534,7 +534,7 @@ provisioning ai index status rag New AI versions automatically migrate old configurations: -```text +```bash # Check configuration version provisioning config version ai @@ -549,7 +549,7 @@ provisioning config backup ai ### Recommended Production Settings -```text +```toml [ai] enabled = true provider = "anthropic" diff --git a/docs/src/ai/cost-management.md b/docs/src/ai/cost-management.md index 908854a..5cc2680 100644 --- a/docs/src/ai/cost-management.md +++ b/docs/src/ai/cost-management.md @@ -21,7 +21,7 @@ includes built-in cost controls to prevent runaway spending while maximizing val ### Cost Examples -```text +```bash Scenario 1: Generate simple database configuration - Input: 500 tokens (description + schema) - Output: 200 tokens (generated config) @@ -49,7 +49,7 @@ Scenario 3: Monthly usage (typical organization) Caching is the primary cost reduction strategy, cutting costs by 50-80%: -```text +```bash Without Caching: User 1: "Generate PostgreSQL config" → API call → $0.005 User 2: "Generate PostgreSQL config" → API call → $0.005 @@ -69,7 +69,7 @@ With Semantic Cache: ### Cache Configuration -```text +```toml [ai.cache] enabled = true cache_type = "redis" # Distributed cache across instances @@ -96,7 +96,7 @@ alert_on_low_hit_rate = true Prevent usage spikes from unexpected costs: -```text +```toml [ai.limits] # Per-request limits max_tokens = 4096 @@ -119,7 +119,7 @@ stop_at_percent = 95 # Stop when at 95% of budget ### Workspace-Level Budgets -```text +```toml [ai.workspace_budgets] # Per-workspace cost limits dev.daily_limit_usd = 10 @@ -135,7 +135,7 @@ teams.team-b.monthly_limit = 300 ### Track Spending -```text +```bash # View current month spending provisioning admin costs show ai @@ -154,7 +154,7 @@ provisioning admin costs export ai --format csv --output costs.csv ### Cost Breakdown -```text +```bash Month: January 2025 Total Spending: $285.42 @@ -192,7 +192,7 @@ Cache Performance: ### Strategy 1: Increase Cache Hit Rate
-```text +```toml # Longer TTL = more cache hits [ai.cache] ttl_seconds = 7200 # 2 hours instead of 1 hour @@ -208,7 +208,7 @@ similarity_threshold = 0.90 # Lower threshold = more hits ### Strategy 2: Use Local Models -```text +```toml [ai] provider = "local" model = "mistral-7b" # Free, runs on GPU @@ -222,7 +222,7 @@ model = "mistral-7b" # Free, runs on GPU ### Strategy 3: Use Haiku for Simple Tasks -```text +```bash Task Complexity vs Model: Simple (form assist): Claude Haiku 4 ($0.80/$4) @@ -241,7 +241,7 @@ Example optimization: ### Strategy 4: Batch Operations -```text +```bash # Instead of individual requests, batch similar operations: # Before: 100 configs, 100 separate API calls @@ -257,7 +257,7 @@ provisioning ai batch --input configs-list.yaml ### Strategy 5: Smart Feature Enablement -```text +```toml [ai.features] # Enable high-ROI features config_generation = true # High value, moderate cost @@ -273,7 +273,7 @@ agents = false # Complex, requires multiple calls ### 1. Set Budget -```text +```bash # Set monthly budget provisioning config set ai.budget.monthly_limit_usd 500 @@ -287,7 +287,7 @@ provisioning config set ai.workspace_budgets.dev.monthly_limit 100 ### 2. Monitor Spending -```text +```bash # Daily check provisioning admin costs show ai @@ -300,7 +300,7 @@ provisioning admin costs analyze ai --period month ### 3. Adjust If Needed -```text +```bash # If overspending: # - Increase cache TTL # - Enable local models for simple tasks @@ -315,7 +315,7 @@ provisioning admin costs analyze ai --period month ### 4.
Forecast and Plan -```text +```bash # Current monthly run rate provisioning admin costs forecast ai @@ -334,7 +334,7 @@ provisioning admin costs forecast ai ### Chargeback Models **Per-Workspace Model**: -```text +```bash Development workspace: $50/month Staging workspace: $100/month Production workspace: $300/month @@ -343,14 +343,14 @@ Total: $450/month ``` **Per-User Model**: -```text +```bash Each user charged based on their usage Encourages efficiency Difficult to track/allocate ``` **Shared Pool Model**: -```text +```bash All teams share $1000/month budget Budget splits by consumption rate Encourages optimization @@ -361,7 +361,7 @@ Most flexible ### Generate Reports -```text +```bash # Monthly cost report provisioning admin costs report ai --format pdf @@ -384,7 +384,7 @@ provisioning admin costs report ai ### ROI Examples -```text +```bash Scenario 1: Developer Time Savings Problem: Manual config creation takes 2 hours Solution: AI config generation, 10 minutes (12x faster) @@ -422,7 +422,7 @@ Scenario 3: Reduction in Failed Deployments ### Hybrid Strategy (Recommended) -```text +```bash ✓ Local models for: - Form assistance (high volume, low complexity) - Simple validation checks @@ -445,7 +445,7 @@ Result: ### Cost Anomaly Detection -```text +```bash # Enable anomaly detection provisioning config set ai.monitoring.anomaly_detection true @@ -462,7 +462,7 @@ provisioning config set ai.monitoring.cost_spike_percent 150 ### Alert Configuration -```text +```toml [ai.monitoring.alerts] enabled = true spike_threshold_percent = 150 @@ -494,4 +494,4 @@ monthly_budget_warning_percent = 70 **Status**: ✅ Production-Ready **Average Savings**: 50-80% through caching **Typical Cost**: $50-500/month per organization -**ROI**: 100:1 to 10,000:1 depending on use case +**ROI**: 100:1 to 10,000:1 depending on use case \ No newline at end of file diff --git a/docs/src/ai/mcp-integration.md b/docs/src/ai/mcp-integration.md index af834eb..a7cb079 100644 --- 
a/docs/src/ai/mcp-integration.md +++ b/docs/src/ai/mcp-integration.md @@ -9,7 +9,7 @@ platform capabilities as tools. This enables complex multi-step workflows, tool The MCP integration follows the Model Context Protocol specification: -```text +```bash ┌──────────────────────────────────────────────────────────────┐ │ External LLM (Claude, GPT-4, etc.) │ └────────────────────┬─────────────────────────────────────────┘ @@ -44,7 +44,7 @@ The MCP integration follows the Model Context Protocol specification: The MCP server is started as a stdio-based service: -```text +```bash # Start MCP server (stdio transport) provisioning-mcp-server --config /etc/provisioning/ai.toml @@ -74,7 +74,7 @@ RUST_LOG=debug provisioning-mcp-server --config /etc/provisioning/ai.toml Generate infrastructure configuration from natural language description. -```text +```json { "name": "generate_config", "description": "Generate a Nickel infrastructure configuration from a natural language description", @@ -102,7 +102,7 @@ Generate infrastructure configuration from natural language description. **Example Usage**: -```text +```bash # Via MCP client mcp-client provisioning generate_config --description "Production PostgreSQL cluster with encryption and daily backups" @@ -114,7 +114,7 @@ mcp-client provisioning generate_config **Response**: -```text +```nickel { database = { engine = "postgresql", @@ -155,7 +155,7 @@ mcp-client provisioning generate_config Validate a Nickel configuration against schemas and policies. -```text +```json { "name": "validate_config", "description": "Validate a Nickel configuration file", @@ -182,7 +182,7 @@ Validate a Nickel configuration against schemas and policies.
**Example Usage**: -```text +```bash # Validate configuration mcp-client provisioning validate_config --config "$(cat workspaces/prod/database.ncl)" @@ -195,7 +195,7 @@ mcp-client provisioning validate_config **Response**: -```text +```json { "valid": true, "errors": [], @@ -216,7 +216,7 @@ mcp-client provisioning validate_config Search infrastructure documentation using RAG system. -```text +```json { "name": "search_docs", "description": "Search provisioning documentation for information", @@ -244,7 +244,7 @@ Search infrastructure documentation using RAG system. **Example Usage**: -```text +```bash # Search documentation mcp-client provisioning search_docs --query "How do I configure PostgreSQL with replication?" @@ -258,7 +258,7 @@ mcp-client provisioning search_docs **Response**: -```text +```json { "results": [ { @@ -283,7 +283,7 @@ mcp-client provisioning search_docs Analyze deployment failures and suggest fixes. -```text +```json { "name": "troubleshoot_deployment", "description": "Analyze deployment logs and suggest fixes", @@ -310,7 +310,7 @@ Analyze deployment failures and suggest fixes. **Example Usage**: -```text +```bash # Troubleshoot recent deployment mcp-client provisioning troubleshoot_deployment --deployment_id "deploy-2025-01-13-001" @@ -322,7 +322,7 @@ mcp-client provisioning troubleshoot_deployment **Response**: -```text +```json { "status": "failure", "root_cause": "Database connection timeout during migration phase", @@ -349,7 +349,7 @@ mcp-client provisioning troubleshoot_deployment Retrieve schema definition with examples. -```text +```json { "name": "get_schema", "description": "Get a provisioning schema definition", @@ -373,7 +373,7 @@ Retrieve schema definition with examples. **Example Usage**: -```text +```bash # Get schema definition mcp-client provisioning get_schema --schema_name database @@ -389,7 +389,7 @@ mcp-client provisioning get_schema Verify configuration against compliance policies (Cedar). 
-```text +```json { "name": "check_compliance", "description": "Check configuration against compliance policies", @@ -412,7 +412,7 @@ Verify configuration against compliance policies (Cedar). **Example Usage**: -```text +```bash # Check against PCI-DSS mcp-client provisioning check_compliance --config "$(cat workspaces/prod/database.ncl)" @@ -423,7 +423,7 @@ mcp-client provisioning check_compliance ### Claude Desktop (Most Common) -```text +```json ~/.claude/claude_desktop_config.json: { "mcpServers": { @@ -441,7 +441,7 @@ mcp-client provisioning check_compliance **Usage in Claude**: -```text +```bash User: I need a production Kubernetes cluster in AWS with automatic scaling Claude can now use provisioning tools: @@ -454,7 +454,7 @@ I'll help you create a production Kubernetes cluster. Let me: ### OpenAI Function Calling -```text +```python import openai tools = [ @@ -486,7 +486,7 @@ response = openai.ChatCompletion.create( ### Local LLM Integration (Ollama) -```text +```bash # Start Ollama with provisioning MCP OLLAMA_MCP_SERVERS=provisioning://localhost:3000 ollama serve @@ -504,7 +504,7 @@ curl http://localhost:11434/api/generate Tools return consistent error responses: -```text +```json { "error": { "code": "VALIDATION_ERROR", @@ -567,7 +567,7 @@ See [Configuration Guide](configuration.md) for MCP-specific settings: ## Monitoring and Debugging -```text +```bash # Monitor MCP server provisioning admin mcp status diff --git a/docs/src/ai/natural-language-config.md b/docs/src/ai/natural-language-config.md index 09d1d41..e0c8aa0 100644 --- a/docs/src/ai/natural-language-config.md +++ b/docs/src/ai/natural-language-config.md @@ -12,7 +12,7 @@ validation.
Transform infrastructure descriptions into production-ready Nickel configurations: -```text +```nickel User Input: "Create a production PostgreSQL cluster with 100GB storage, daily backups, encryption enabled, and cross-region replication @@ -34,7 +34,7 @@ System Output: ### Generation Pipeline -```text +```bash Input Description (Natural Language) ↓ ┌─────────────────────────────────────┐ @@ -84,7 +84,7 @@ Input Description (Natural Language) Extract structured intent from natural language: -```text +```bash Input: "Create a production PostgreSQL cluster with encryption and backups" Extracted Intent: @@ -104,7 +104,7 @@ Extracted Intent: Map natural language entities to schema fields: -```text +```bash Description Terms → Schema Fields: "100GB storage" → database.instance.allocated_storage_gb = 100 "daily backups" → backup.enabled = true, backup.frequency = "daily" @@ -117,7 +117,7 @@ Description Terms → Schema Fields: Sophisticated prompting for schema-aware generation: -```text +```bash System Prompt: You are generating Nickel infrastructure configurations. Generate ONLY valid Nickel syntax. 
@@ -144,7 +144,7 @@ Start with: let { database = { Handle generation errors through iteration: -```text +```bash Attempt 1: Generate initial config ↓ Validate ✗ Error: field `version` type mismatch (string vs number) @@ -158,7 +158,7 @@ Attempt 2: Fix with context from error ### CLI Usage -```text +```bash # Simple generation provisioning ai generate "PostgreSQL database for production" @@ -188,7 +188,7 @@ provisioning ai generate --batch descriptions.yaml ### Interactive Refinement -```text +```bash $ provisioning ai generate --interactive > Describe infrastructure: Create production PostgreSQL cluster @@ -209,12 +209,12 @@ Configuration saved to: workspaces/prod/database.ncl ### Example 1: Simple Database **Input**: -```text +```bash "PostgreSQL database with 50GB storage and encryption" ``` **Output**: -```text +```nickel let { database = { engine = "postgresql", @@ -249,13 +249,13 @@ let { ### Example 2: Complex Kubernetes Setup **Input**: -```text +```bash "Production Kubernetes cluster in AWS with 3 availability zones, auto-scaling from 3 to 10 nodes, managed PostgreSQL, and monitoring" ``` **Output**: -```text +```nickel let { kubernetes = { version = "1.28.0", @@ -314,7 +314,7 @@ let { ### Configurable Generation Parameters -```text +```toml # In provisioning/config/ai.toml [ai.generation] # Which schema to use by default @@ -360,7 +360,7 @@ require_compliance_check = true ### Typical Usage Session -```text +```bash # 1.
Describe infrastructure need $ provisioning ai generate "I need a database for my web app" @@ -386,7 +386,7 @@ $ provisioning workspace logs database NLC uses RAG to find similar configurations: -```text +```bash User: "Create Kubernetes cluster" ↓ RAG searches for: @@ -407,7 +407,7 @@ NLC and form assistance share components: ### CLI Integration -```text +```bash # Generate then preview provisioning ai generate "PostgreSQL prod" | \ provisioning config preview diff --git a/docs/src/ai/rag-system.md b/docs/src/ai/rag-system.md index 7808f93..6c91797 100644 --- a/docs/src/ai/rag-system.md +++ b/docs/src/ai/rag-system.md @@ -22,7 +22,7 @@ The RAG system consists of: The system uses embedding models to convert documents into vector representations: -```text +```bash ┌─────────────────────┐ │ Document Source │ │ (Markdown, Code) │ @@ -55,7 +55,7 @@ The system uses embedding models to convert documents into vector representation SurrealDB serves as the vector database and knowledge store: -```text +```nickel # Configuration in provisioning/schemas/ai.ncl let { rag = { @@ -108,7 +108,7 @@ Intelligent chunking preserves context while managing token limits: #### Markdown Chunking Strategy -```text +```bash Input Document: provisioning/docs/src/guides/from-scratch.md Chunks: @@ -126,7 +126,7 @@ Each chunk includes: #### Code Chunking Strategy -```text +```bash Input Document: provisioning/schemas/main.ncl Chunks: @@ -148,7 +148,7 @@ The system implements dual search strategy for optimal results: ### Vector Similarity Search -```text +```rust // Find semantically similar documents async fn vector_search(query: &str, top_k: usize) -> Vec { let embedding = embed(query).await?; @@ -173,7 +173,7 @@ async fn vector_search(query: &str, top_k: usize) -> Vec { ### BM25 Keyword Search -```text +```rust // Find documents with matching keywords async fn keyword_search(query: &str, top_k: usize) -> Vec { // BM25 full-text search in SurrealDB @@ -196,7 +196,7 @@ async fn
keyword_search(query: &str, top_k: usize) -> Vec { ### Hybrid Results -```text +```rust async fn hybrid_search( query: &str, vector_weight: f32, @@ -231,7 +231,7 @@ async fn hybrid_search( Reduces API calls by caching embeddings of repeated queries: -```text +```rust struct SemanticCache { queries: Arc, CachedResult>>, similarity_threshold: f32, @@ -268,7 +268,7 @@ impl SemanticCache { ### Document Indexing -```text +```bash # Index all documentation provisioning ai index-docs provisioning/docs/src @@ -284,7 +284,7 @@ provisioning ai watch docs provisioning/docs/src ### Programmatic Indexing -```text +```rust // In ai-service on startup async fn initialize_rag() -> Result<()> { let rag = RAGSystem::new(&config.rag).await?; @@ -309,7 +309,7 @@ async fn initialize_rag() -> Result<()> { ### Query the RAG System -```text +```bash # Search for context-aware information provisioning ai query "How do I configure PostgreSQL with encryption?" @@ -323,7 +323,7 @@ provisioning ai chat ### AI Service Integration -```text +```rust // AI service uses RAG to enhance generation async fn generate_config(user_request: &str) -> Result { // Retrieve relevant context @@ -344,7 +344,7 @@ async fn generate_config(user_request: &str) -> Result { ### Form Assistance Integration -```text +```javascript // In typdialog-ai (JavaScript/TypeScript) async function suggestFieldValue(fieldName, currentInput) { // Query RAG for similar configurations @@ -415,7 +415,7 @@ See [Configuration Guide](configuration.md) for detailed RAG setup: ### Query Metrics -```text +```bash # View RAG search metrics provisioning ai metrics show rag @@ -425,7 +425,7 @@ provisioning ai eval-rag --sample-queries 100 ### Debug Mode -```text +```toml # In provisioning/config/ai.toml [ai.rag.debug] enabled = true diff --git a/docs/src/ai/security-policies.md b/docs/src/ai/security-policies.md index 7a1e005..8675264 100644 --- a/docs/src/ai/security-policies.md +++ b/docs/src/ai/security-policies.md @@ -9,7 +9,7 @@
controlled through Cedar policies and include strict secret isolation. ### Defense in Depth -```text +```bash ┌─────────────────────────────────────────┐ │ User Request to AI │ └──────────────┬──────────────────────────┘ @@ -60,7 +60,7 @@ controlled through Cedar policies and include strict secret isolation. ### Policy Engine Setup -```text +```cedar // File: provisioning/policies/ai-policies.cedar // Core principle: Least privilege @@ -164,7 +164,7 @@ when { Before sending data to external LLMs, the system removes: -```text +```bash Patterns Removed: ├─ Passwords: password="...", pwd=..., etc. ├─ API Keys: api_key=..., api-key=..., etc. @@ -178,7 +178,7 @@ Patterns Removed: ### Configuration -```text +```toml [ai.security] sanitize_pii = true sanitize_secrets = true @@ -207,7 +207,7 @@ preserve_patterns = [ ### Example Sanitization **Before**: -```text +```bash Error configuring database: connection_string: postgresql://dbadmin:MySecurePassword123@prod-db.us-east-1.rds.amazonaws.com:5432/app api_key: sk-ant-abc123def456 @@ -215,7 +215,7 @@ vault_token: hvs.CAESIyg7... ``` **After Sanitization**: -```text +```bash Error configuring database: connection_string: postgresql://dbadmin:[REDACTED]@prod-db.us-east-1.rds.amazonaws.com:5432/app api_key: [REDACTED] @@ -228,7 +228,7 @@ vault_token: [REDACTED] AI cannot directly access secrets.
Instead: -```text +```bash User wants: "Configure PostgreSQL with encrypted backups" ↓ AI generates: Configuration schema with placeholders @@ -255,7 +255,7 @@ Deployment: Uses secrets from secure store (Vault, AWS Secrets Manager) For environments requiring zero external API calls: -```text +```bash # Deploy local Ollama with provisioning support docker run -d --name provisioning-ai @@ -301,7 +301,7 @@ api_base = "http://localhost:11434" For highly sensitive environments: -```text +```toml [ai.security.hsm] enabled = true provider = "aws-cloudhsm" # or "thales", "yubihsm" @@ -317,7 +317,7 @@ server_key = "/etc/provisioning/certs/server.key" ### Data at Rest -```text +```toml [ai.security.encryption] enabled = true algorithm = "aes-256-gcm" @@ -335,7 +335,7 @@ log_encryption = true ### Data in Transit -```text +```bash All external LLM API calls: ├─ TLS 1.3 (minimum) ├─ Certificate pinning (optional) @@ -347,7 +347,7 @@ All external LLM API calls: ### What Gets Logged -```text +```json { "timestamp": "2025-01-13T10:30:45Z", "event_type": "ai_action", @@ -380,7 +380,7 @@ All external LLM API calls: ### Audit Trail Access -```text +```bash # View recent AI actions provisioning audit log ai --tail 100 @@ -404,7 +404,7 @@ provisioning audit search ai "error in database configuration" ### Built-in Compliance Checks -```text +```toml [ai.compliance] frameworks = ["pci-dss", "hipaa", "sox", "gdpr"] @@ -423,7 +423,7 @@ enabled = true ### Compliance Reports -```text +```bash # Generate compliance report provisioning audit compliance-report --framework pci-dss @@ -467,7 +467,7 @@ provisioning audit verify-compliance ### Compromised API Key -```text +```bash # 1.
Immediately revoke key provisioning admin revoke-key ai-api-key-123 @@ -486,7 +486,7 @@ provisioning audit log ai ### Unauthorized Access -```text +```bash # Review Cedar policy logs provisioning audit log ai --decision deny diff --git a/docs/src/ai/troubleshooting-with-ai.md b/docs/src/ai/troubleshooting-with-ai.md index 1644682..29cc22e 100644 --- a/docs/src/ai/troubleshooting-with-ai.md +++ b/docs/src/ai/troubleshooting-with-ai.md @@ -11,7 +11,7 @@ root causes, suggests fixes, and generates corrected configurations based on fai Transform deployment failures into actionable insights: -```text +```bash Deployment Fails with Error ↓ AI analyzes logs: @@ -37,7 +37,7 @@ Developer reviews and accepts: ### Automatic Detection and Analysis -```text +```bash ┌──────────────────────────────────────────┐ │ Deployment Monitoring │ │ - Watches deployment for failures │ @@ -91,14 +91,14 @@ Developer reviews and accepts: ### Example 1: Database Connection Timeout **Failure**: -```text +```bash Deployment: deploy-2025-01-13-001 Status: FAILED at phase database_migration Error: connection timeout after 30s connecting to postgres://... ``` **Run Troubleshooting**: -```text +```bash $ provisioning ai troubleshoot deploy-2025-01-13-001 Analyzing deployment failure... @@ -175,14 +175,14 @@ Ready to redeploy with corrected configuration? 
[yes/no]: yes ### Example 2: Kubernetes Deployment Error **Failure**: -```text +```bash Deployment: deploy-2025-01-13-002 Status: FAILED at phase kubernetes_workload Error: failed to create deployment app: Pod exceeded capacity ``` **Troubleshooting**: -```text +```bash $ provisioning ai troubleshoot deploy-2025-01-13-002 --detailed ╔════════════════════════════════════════════════════════════════╗ @@ -239,7 +239,7 @@ $ provisioning ai troubleshoot deploy-2025-01-13-002 --detailed ### Basic Troubleshooting -```text +```bash # Troubleshoot recent deployment provisioning ai troubleshoot deploy-2025-01-13-001 @@ -255,7 +255,7 @@ provisioning ai troubleshoot deploy-2025-01-13-001 --alternatives ### Working with Logs -```text +```bash # Troubleshoot from custom logs provisioning ai troubleshoot \ --logs "$(journalctl -u provisioning --no-pager | tail -100)" @@ -271,7 +271,7 @@ provisioning ai troubleshoot ### Generate Reports -```text +```bash # Generate detailed troubleshooting report provisioning ai troubleshoot deploy-123 --report @@ -294,7 +294,7 @@ provisioning ai troubleshoot deploy-123 ### Shallow Analysis (Fast) -```text +```bash provisioning ai troubleshoot deploy-123 --depth shallow Analyzes: @@ -306,7 +306,7 @@ Analyzes: ### Deep Analysis (Thorough) -```text +```bash provisioning ai troubleshoot deploy-123 --depth deep Analyzes: @@ -322,7 +322,7 @@ Analyzes: ### Automatic Troubleshooting -```text +```bash # Enable auto-troubleshoot on failures provisioning config set ai.troubleshooting.auto_analyze true @@ -333,7 +333,7 @@ provisioning config set ai.troubleshooting.auto_analyze true ### WebUI Integration -```text +```bash Deployment Dashboard ├─ deployment-123 [FAILED] │ └─ AI Analysis @@ -349,7 +349,7 @@ Deployment Dashboard The system learns common failure patterns: -```text +```bash Collected Patterns: ├─ Database Timeouts (25% of failures) │ └─ Usually: Security group, connection pool, slow startup @@ -363,7 +363,7 @@ Collected Patterns: ### Improvement
Tracking -```text +```bash # See patterns in your deployments provisioning ai analytics failures --period month @@ -386,7 +386,7 @@ Month Summary: ### Troubleshooting Settings -```text +```toml [ai.troubleshooting] enabled = true @@ -416,7 +416,7 @@ estimate_alternative_costs = true ### Failure Detection -```text +```toml [ai.troubleshooting.detection] # Monitor logs for these patterns watch_patterns = [ diff --git a/docs/src/api-reference/README.md b/docs/src/api-reference/README.md index 4639354..e831f65 100644 --- a/docs/src/api-reference/README.md +++ b/docs/src/api-reference/README.md @@ -12,7 +12,7 @@ API reference for programmatic access to the Provisioning Platform. ## Quick Start -```text +```bash # Check API health curl http://localhost:9090/health diff --git a/docs/src/api-reference/extensions.md b/docs/src/api-reference/extensions.md index 0e588e4..6e5cb9f 100644 --- a/docs/src/api-reference/extensions.md +++ b/docs/src/api-reference/extensions.md @@ -16,7 +16,7 @@ All extensions follow a standardized structure and API for seamless integration. 
### Standard Directory Layout -```text +```bash extension-name/ ├── manifest.toml # Extension metadata ├── schemas/ # Nickel configuration files @@ -71,7 +71,7 @@ All providers must implement the following interface: Create `schemas/settings.ncl`: -```text +```nickel # Provider settings schema { ProviderSettings = { @@ -146,7 +146,7 @@ schema ServerConfig { Create `nulib/mod.nu`: -```text +```nushell use std log # Provider name and version @@ -231,7 +231,7 @@ export def "test-connection" [config: record] -> record { Create `nulib/create.nu`: -```text +```nushell use std log use utils.nu * @@ -368,7 +368,7 @@ def wait-for-server-ready [server_id: string] -> string { Add provider metadata in `metadata.toml`: -```text +```toml [extension] name = "my-provider" type = "provider" @@ -429,7 +429,7 @@ Task services must implement: Create `schemas/version.ncl`: -```text +```nickel # Task service version configuration { taskserv_version = { @@ -483,7 +483,7 @@ Create `schemas/version.ncl`: Create `nulib/mod.nu`: -```text +```nushell use std log use ../../../lib_provisioning * @@ -697,7 +697,7 @@ Clusters orchestrate multiple components: Create `schemas/cluster.ncl`: -```text +```nickel # Cluster configuration schema { ClusterConfig = { @@ -812,7 +812,7 @@ Create `schemas/cluster.ncl`: Create `nulib/mod.nu`: -```text +```nushell use std log use ../../../lib_provisioning * @@ -1065,7 +1065,7 @@ Extensions should include comprehensive tests: Create `tests/unit_tests.nu`: -```text +```nushell use std testing export def test_provider_config_validation [] { @@ -1096,7 +1096,7 @@ export def test_server_creation_check_mode [] { Create `tests/integration_tests.nu`: -```text +```nushell use std testing export def test_full_server_lifecycle [] { @@ -1127,7 +1127,7 @@ export def test_full_server_lifecycle [] { ### Running Tests -```text +```bash # Run unit tests nu tests/unit_tests.nu @@ -1151,7 +1151,7 @@ Each extension must include: ### API Documentation Template -```text +```markdown #
Extension Name API ## Overview diff --git a/docs/src/api-reference/integration-examples.md b/docs/src/api-reference/integration-examples.md index e96a26d..87949d2 100644 --- a/docs/src/api-reference/integration-examples.md +++ b/docs/src/api-reference/integration-examples.md @@ -18,7 +18,7 @@ Provisioning offers multiple integration points: #### Full-Featured Python Client -```text +```python import asyncio import json import logging @@ -416,7 +416,7 @@ if __name__ == "__main__": #### Complete JavaScript/TypeScript Client -```text +```typescript import axios, { AxiosInstance, AxiosResponse } from 'axios'; import WebSocket from 'ws'; import { EventEmitter } from 'events'; @@ -925,7 +925,7 @@ export { ProvisioningClient, Task, BatchConfig }; ### Comprehensive Error Handling -```text +```python class ProvisioningErrorHandler: """Centralized error handling for provisioning operations""" @@ -1028,7 +1028,7 @@ async def robust_workflow_execution(): ### Circuit Breaker Pattern -```text +```typescript class CircuitBreaker { private failures = 0; private nextAttempt = Date.now(); @@ -1104,7 +1104,7 @@ class ResilientProvisioningClient { ### Connection Pooling and Caching -```text +```python import asyncio import aiohttp from cachetools import TTLCache @@ -1222,7 +1222,7 @@ async def high_performance_workflow(): ### WebSocket Connection Pooling -```text +```javascript class WebSocketPool { constructor(maxConnections = 5) { this.maxConnections = maxConnections; @@ -1290,13 +1290,13 @@ The Python SDK provides a comprehensive interface for provisioning: #### Installation -```text +```bash pip install provisioning-client ``` #### Quick Start -```text +```python from provisioning_client import ProvisioningClient # Initialize client @@ -1319,7 +1319,7 @@ print(f"Workflow completed: {task.status}") #### Advanced Usage -```text +```python # Use with async context manager async with ProvisioningClient() as
client: #### Installation -```text +```bash npm install @provisioning/client ``` #### Usage -```text +```typescript import { ProvisioningClient } from '@provisioning/client'; const client = new ProvisioningClient({ @@ -1373,7 +1373,7 @@ await client.connectWebSocket(); ### Workflow Orchestration Pipeline -```text +```python class WorkflowPipeline: """Orchestrate complex multi-step workflows""" @@ -1462,7 +1462,7 @@ async def complex_deployment(): ### Event-Driven Architecture -```text +```javascript class EventDrivenWorkflowManager { constructor(client) { this.client = client; diff --git a/docs/src/api-reference/nushell-api.md b/docs/src/api-reference/nushell-api.md index 268dad9..88d0bc6 100644 --- a/docs/src/api-reference/nushell-api.md +++ b/docs/src/api-reference/nushell-api.md @@ -69,7 +69,7 @@ The provisioning platform provides a comprehensive Nushell library with reusable ## Usage Example -```text +```nushell # Load provisioning library use provisioning/core/nulib/lib_provisioning * diff --git a/docs/src/api-reference/path-resolution.md b/docs/src/api-reference/path-resolution.md index d212cd8..281dac2 100644 --- a/docs/src/api-reference/path-resolution.md +++ b/docs/src/api-reference/path-resolution.md @@ -17,7 +17,7 @@ The path resolution system provides a hierarchical and configurable mechanism fo The system follows a specific hierarchy for loading configuration files: -```text +```text 1. System defaults (config.defaults.toml) 2. User configuration (config.user.toml) 3. Project configuration (config.project.toml) @@ -30,7 +30,7 @@ The system follows a specific hierarchy for loading configuration files: The system searches for configuration files in these locations: -```text +```bash # Default search paths (in order) /usr/local/provisioning/config.defaults.toml $HOME/.config/provisioning/config.user.toml @@ -59,7 +59,7 @@ Resolves configuration file paths using the search hierarchy. 
**Example:** -```text +```nushell use path-resolution.nu * let config_path = (resolve-config-path "config.user.toml" []) # Returns: "/home/user/.config/provisioning/config.user.toml" @@ -76,7 +76,7 @@ Discovers extension paths (providers, taskservs, clusters). **Returns:** -```text +```json { base_path: "/usr/local/provisioning/providers/upcloud", schemas_path: "/usr/local/provisioning/providers/upcloud/schemas", @@ -92,7 +92,7 @@ Gets current workspace path configuration. **Returns:** -```text +```json { base: "/usr/local/provisioning", current_infra: "/workspace/infra/production", @@ -130,7 +130,7 @@ Interpolates variables in path templates. **Example:** -```text +```nushell let template = "{{paths.base}}/infra/{{env.USER}}/{{git.branch}}" let result = (interpolate-path $template { paths: { base: "/usr/local/provisioning" }, @@ -150,7 +150,7 @@ Discovers all available providers. **Returns:** -```text +```json [ { name: "upcloud", @@ -185,7 +185,7 @@ Gets provider-specific configuration and paths. **Returns:** -```text +```json { name: "upcloud", base_path: "/usr/local/provisioning/providers/upcloud", @@ -214,7 +214,7 @@ Discovers all available task services. **Returns:** -```text +```json [ { name: "kubernetes", @@ -245,7 +245,7 @@ Gets task service configuration and version information. **Returns:** -```text +```json { name: "kubernetes", path: "/usr/local/provisioning/taskservs/kubernetes", @@ -272,7 +272,7 @@ Discovers all available cluster configurations. **Returns:** -```text +```json [ { name: "buildkit", @@ -312,7 +312,7 @@ Gets environment-specific configuration. **Returns:** -```text +```json { name: "production", paths: { @@ -359,7 +359,7 @@ Discovers available workspaces and infrastructure directories. **Returns:** -```text +```json [ { name: "production", @@ -405,7 +405,7 @@ Analyzes project structure and identifies components. 
**Returns:** -```text +```json { root: "/workspace/project", type: "provisioning_workspace", @@ -458,7 +458,7 @@ Gets path resolution cache statistics. **Returns:** -```text +```json { enabled: true, size: 150, @@ -485,7 +485,7 @@ Normalizes paths for cross-platform compatibility. **Example:** -```text +```nushell # On Windows normalize-path "path/to/file" # Returns: "path\to\file" @@ -519,7 +519,7 @@ Validates all paths in configuration. **Returns:** -```text +```json { valid: true, errors: [], @@ -541,7 +541,7 @@ Validates extension directory structure. **Returns:** -```text +```json { valid: true, required_files: [ @@ -561,7 +561,7 @@ Validates extension directory structure. The path resolution API is exposed via Nushell commands: -```text +```nushell # Show current path configuration provisioning show paths @@ -584,7 +584,7 @@ provisioning workspace set /path/to/infra ### Python Integration -```text +```python import subprocess import json @@ -612,7 +612,7 @@ providers = resolver.discover_providers() ### JavaScript/Node.js Integration -```text +```javascript const { exec } = require('child_process'); const util = require('util'); const execAsync = util.promisify(exec); @@ -697,7 +697,7 @@ The system provides graceful fallbacks: Monitor path resolution performance: -```text +```bash # Get resolution statistics provisioning debug path-stats diff --git a/docs/src/api-reference/provider-api.md b/docs/src/api-reference/provider-api.md index 26c23a6..696d5b6 100644 --- a/docs/src/api-reference/provider-api.md +++ b/docs/src/api-reference/provider-api.md @@ -18,7 +18,7 @@ All providers must implement the following interface: ### Required Functions -```text +```nushell # Provider initialization export def init [] -> record { ... } @@ -37,7 +37,7 @@ export def get-pricing [plan: string] -> record { ... 
} Each provider requires configuration in Nickel format: -```text +```nickel # Example: UpCloud provider configuration { provider = { @@ -57,7 +57,7 @@ Each provider requires configuration in Nickel format: ### 1. Directory Structure -```text +```bash provisioning/extensions/providers/my-provider/ ├── nulib/ │ └── my_provider.nu # Provider implementation @@ -69,7 +69,7 @@ provisioning/extensions/providers/my-provider/ ### 2. Implementation Template -```text +```nushell # my_provider.nu export def init [] { { @@ -94,7 +94,7 @@ export def list-servers [] { ### 3. Nickel Schema -```text +```nickel # main.ncl { MyProvider = { @@ -118,7 +118,7 @@ Providers are automatically discovered from: - `provisioning/extensions/providers/*/nu/*.nu` - User workspace: `workspace/extensions/providers/*/nu/*.nu` -```text +```nushell # Discover available providers provisioning module discover providers @@ -130,7 +130,7 @@ provisioning module load providers workspace my-provider ### Create Servers -```text +```nushell use my_provider.nu * let plan = { @@ -144,13 +144,13 @@ create-servers $plan ### List Servers -```text +```nushell list-servers | where status == "running" | select hostname ip_address ``` ### Get Pricing -```text +```nushell get-pricing "small" | to yaml ``` @@ -158,7 +158,7 @@ get-pricing "small" | to yaml Use the test environment system to test providers: -```text +```bash # Test provider without real resources provisioning test env single my-provider --check ``` diff --git a/docs/src/api-reference/rest-api.md b/docs/src/api-reference/rest-api.md index 30bf9e4..3b7eb21 100644 --- a/docs/src/api-reference/rest-api.md +++ b/docs/src/api-reference/rest-api.md @@ -20,13 +20,13 @@ Provisioning exposes two main REST APIs: All API endpoints (except health checks) require JWT authentication via the Authorization header: -```text +```bash Authorization: Bearer ``` ### Getting Access Token -```text +```bash POST /auth/login Content-Type: application/json @@ -47,7 +47,7 @@ Check orchestrator 
health status. **Response:** -```text +```json { "success": true, "data": "Orchestrator is healthy" @@ -68,7 +68,7 @@ List all workflow tasks. **Response:** -```text +```json { "success": true, "data": [ @@ -99,7 +99,7 @@ Get specific task status and details. **Response:** -```text +```json { "success": true, "data": { @@ -126,7 +126,7 @@ Submit server creation workflow. **Request Body:** -```text +```json { "infra": "production", "settings": "config.ncl", @@ -137,7 +137,7 @@ Submit server creation workflow. **Response:** -```text +```json { "success": true, "data": "uuid-task-id" @@ -150,7 +150,7 @@ Submit task service workflow. **Request Body:** -```text +```json { "operation": "create", "taskserv": "kubernetes", @@ -163,7 +163,7 @@ Submit task service workflow. **Response:** -```text +```json { "success": true, "data": "uuid-task-id" @@ -176,7 +176,7 @@ Submit cluster workflow. **Request Body:** -```text +```json { "operation": "create", "cluster_type": "buildkit", @@ -189,7 +189,7 @@ Submit cluster workflow. **Response:** -```text +```json { "success": true, "data": "uuid-task-id" @@ -204,7 +204,7 @@ Execute batch workflow operation. **Request Body:** -```text +```json { "name": "multi_cloud_deployment", "version": "1.0.0", @@ -235,7 +235,7 @@ Execute batch workflow operation. **Response:** -```text +```json { "success": true, "data": { @@ -263,7 +263,7 @@ List all batch operations. **Response:** -```text +```json { "success": true, "data": [ @@ -288,7 +288,7 @@ Get batch operation status. **Response:** -```text +```json { "success": true, "data": { @@ -317,7 +317,7 @@ Cancel running batch operation. **Response:** -```text +```json { "success": true, "data": "Operation cancelled" @@ -336,7 +336,7 @@ Get real-time workflow progress. **Response:** -```text +```json { "success": true, "data": { @@ -360,7 +360,7 @@ Get workflow state snapshots. **Response:** -```text +```json { "success": true, "data": [ @@ -380,7 +380,7 @@ Get system-wide metrics. 
**Response:** -```text +```json { "success": true, "data": { @@ -403,7 +403,7 @@ Get system health status. **Response:** -```text +```json { "success": true, "data": { @@ -424,7 +424,7 @@ Get state manager statistics. **Response:** -```text +```json { "success": true, "data": { @@ -444,7 +444,7 @@ Create new checkpoint. **Request Body:** -```text +```json { "name": "before_major_update", "description": "Checkpoint before deploying v2.0.0" @@ -453,7 +453,7 @@ Create new checkpoint. **Response:** -```text +```json { "success": true, "data": "checkpoint-uuid" @@ -466,7 +466,7 @@ List all checkpoints. **Response:** -```text +```json { "success": true, "data": [ @@ -491,7 +491,7 @@ Get specific checkpoint details. **Response:** -```text +```json { "success": true, "data": { @@ -511,7 +511,7 @@ Execute rollback operation. **Request Body:** -```text +```json { "checkpoint_id": "checkpoint-uuid" } @@ -519,7 +519,7 @@ Execute rollback operation. Or for partial rollback: -```text +```json { "operation_ids": ["op-1", "op-2", "op-3"] } @@ -527,7 +527,7 @@ Or for partial rollback: **Response:** -```text +```json { "success": true, "data": { @@ -550,7 +550,7 @@ Restore system state from checkpoint. **Response:** -```text +```json { "success": true, "data": "State restored from checkpoint checkpoint-uuid" @@ -563,7 +563,7 @@ Get rollback system statistics. **Response:** -```text +```json { "success": true, "data": { @@ -585,7 +585,7 @@ Authenticate user and get JWT token. **Request Body:** -```text +```json { "username": "admin", "password": "secure_password", @@ -595,7 +595,7 @@ Authenticate user and get JWT token. **Response:** -```text +```json { "success": true, "data": { @@ -617,7 +617,7 @@ Refresh JWT token. **Request Body:** -```text +```json { "token": "current-jwt-token" } @@ -625,7 +625,7 @@ Refresh JWT token. **Response:** -```text +```json { "success": true, "data": { @@ -641,7 +641,7 @@ Logout and invalidate token. 
**Response:** -```text +```json { "success": true, "data": "Successfully logged out" @@ -661,7 +661,7 @@ List all users. **Response:** -```text +```json { "success": true, "data": [ @@ -684,7 +684,7 @@ Create new user. **Request Body:** -```text +```json { "username": "newuser", "email": "newuser@example.com", @@ -696,7 +696,7 @@ Create new user. **Response:** -```text +```json { "success": true, "data": { @@ -719,7 +719,7 @@ Update existing user. **Request Body:** -```text +```json { "email": "updated@example.com", "roles": ["admin", "operator"], @@ -729,7 +729,7 @@ Update existing user. **Response:** -```text +```json { "success": true, "data": "User updated successfully" @@ -746,7 +746,7 @@ Delete user. **Response:** -```text +```json { "success": true, "data": "User deleted successfully" @@ -761,7 +761,7 @@ List all policies. **Response:** -```text +```json { "success": true, "data": [ @@ -783,7 +783,7 @@ Create new policy. **Request Body:** -```text +```json { "name": "new_policy", "version": "1.0.0", @@ -800,7 +800,7 @@ Create new policy. **Response:** -```text +```json { "success": true, "data": { @@ -821,7 +821,7 @@ Update policy. **Request Body:** -```text +```json { "name": "updated_policy", "rules": [...] @@ -830,7 +830,7 @@ Update policy. **Response:** -```text +```json { "success": true, "data": "Policy updated successfully" @@ -855,7 +855,7 @@ Get audit logs. **Response:** -```text +```json { "success": true, "data": [ @@ -876,7 +876,7 @@ Get audit logs. All endpoints may return error responses in this format: -```text +```json { "success": false, "error": "Detailed error message" @@ -904,7 +904,7 @@ API endpoints are rate-limited: Rate limit headers are included in responses: -```text +```bash X-RateLimit-Limit: 100 X-RateLimit-Remaining: 95 X-RateLimit-Reset: 1632150000 @@ -918,7 +918,7 @@ Prometheus-compatible metrics endpoint. 
**Response:** -```text +```bash # HELP orchestrator_tasks_total Total number of tasks # TYPE orchestrator_tasks_total counter orchestrator_tasks_total{status="completed"} 150 @@ -937,7 +937,7 @@ Real-time event streaming via WebSocket connection. **Connection:** -```text +```javascript const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token'); ws.onmessage = function(event) { @@ -948,7 +948,7 @@ ws.onmessage = function(event) { **Event Format:** -```text +```json { "event_type": "TaskStatusChanged", "timestamp": "2025-09-26T10:00:00Z", @@ -967,7 +967,7 @@ ws.onmessage = function(event) { ### Python SDK Example -```text +```python import requests class ProvisioningClient: @@ -1007,7 +1007,7 @@ print(f"Task ID: {result['data']}") ### JavaScript/Node.js SDK Example -```text +```javascript const axios = require('axios'); class ProvisioningClient { @@ -1051,7 +1051,7 @@ The system supports webhooks for external integrations: Configure webhooks in the system configuration: -```text +```toml [webhooks] enabled = true endpoints = [ @@ -1065,7 +1065,7 @@ endpoints = [ ### Webhook Payload -```text +```json { "event": "task.completed", "timestamp": "2025-09-26T10:00:00Z", @@ -1087,7 +1087,7 @@ For endpoints that return lists, use pagination parameters: Pagination metadata is included in response headers: -```text +```bash X-Total-Count: 1500 X-Limit: 50 X-Offset: 100 @@ -1098,7 +1098,7 @@ Link: ; rel="next" The API uses header-based versioning: -```text +```bash Accept: application/vnd.provisioning.v1+json ``` @@ -1108,7 +1108,7 @@ Current version: v1 Use the included test suite to validate API functionality: -```text +```bash # Run API integration tests cd src/orchestrator cargo test --test api_tests diff --git a/docs/src/api-reference/sdks.md b/docs/src/api-reference/sdks.md index 2bb086e..4ba2020 100644 --- a/docs/src/api-reference/sdks.md +++ b/docs/src/api-reference/sdks.md @@ -23,7 +23,7 @@ Provisioning provides SDKs in multiple languages to facilitate 
integration: ### Installation -```text +```bash # Install from PyPI pip install provisioning-client @@ -33,7 +33,7 @@ pip install git+https://github.com/provisioning-systems/python-client.git ### Quick Start -```text +```python from provisioning_client import ProvisioningClient import asyncio @@ -79,7 +79,7 @@ if __name__ == "__main__": #### WebSocket Integration -```text +```python async def monitor_workflows(): client = ProvisioningClient() await client.authenticate() @@ -103,7 +103,7 @@ async def monitor_workflows(): #### Batch Operations -```text +```python async def execute_batch_deployment(): client = ProvisioningClient() await client.authenticate() @@ -158,7 +158,7 @@ async def execute_batch_deployment(): #### Error Handling with Retries -```text +```python from provisioning_client.exceptions import ( ProvisioningAPIError, AuthenticationError, @@ -209,7 +209,7 @@ async def robust_workflow(): #### ProvisioningClient Class -```text +```python class ProvisioningClient: def __init__(self, base_url: str = "http://localhost:9090", @@ -258,7 +258,7 @@ class ProvisioningClient: ### Installation -```text +```bash # npm npm install @provisioning/client @@ -271,7 +271,7 @@ pnpm add @provisioning/client ### Quick Start -```text +```typescript import { ProvisioningClient } from '@provisioning/client'; async function main() { @@ -308,7 +308,7 @@ main(); ### React Integration -```text +```jsx import React, { useState, useEffect } from 'react'; import { ProvisioningClient } from '@provisioning/client'; @@ -434,7 +434,7 @@ export default WorkflowDashboard; ### Node.js CLI Tool -```text +```javascript #!/usr/bin/env node import { Command } from 'commander'; @@ -591,7 +591,7 @@ program.parse(); ### API Reference -```text +```typescript interface ProvisioningClientOptions { baseUrl?: string; authUrl?: string; @@ -645,13 +645,13 @@ class ProvisioningClient extends EventEmitter { ### Installation -```text +```bash go get github.com/provisioning-systems/go-client ``` ### Quick Start -```text 
+```go package main import ( @@ -717,7 +717,7 @@ func main() { ### WebSocket Integration -```text +```go package main import ( @@ -785,7 +785,7 @@ func main() { ### HTTP Client with Retry Logic -```text +```go package main import ( @@ -877,7 +877,7 @@ func main() { Add to your `Cargo.toml`: -```text +```toml [dependencies] provisioning-rs = "2.0.0" tokio = { version = "1.0", features = ["full"] } @@ -885,7 +885,7 @@ tokio = { version = "1.0", features = ["full"] } ### Quick Start -```text +```rust use provisioning_rs::{ProvisioningClient, Config, CreateServerRequest}; use tokio; @@ -941,7 +941,7 @@ async fn main() -> Result<(), Box> { ### WebSocket Integration -```text +```rust use provisioning_rs::{ProvisioningClient, Config, WebSocketEvent}; use futures_util::StreamExt; use tokio; @@ -997,7 +997,7 @@ async fn main() -> Result<(), Box> { ### Batch Operations -```text +```rust use provisioning_rs::{BatchOperationRequest, BatchOperation}; #[tokio::main] diff --git a/docs/src/api-reference/websocket.md b/docs/src/api-reference/websocket.md index d9de10b..f9813bd 100644 --- a/docs/src/api-reference/websocket.md +++ b/docs/src/api-reference/websocket.md @@ -30,7 +30,7 @@ The main WebSocket endpoint for real-time events and monitoring. **Example Connection:** -```text +```javascript const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token&events=task,batch,system'); ``` @@ -64,7 +64,7 @@ Live log streaming endpoint. All WebSocket connections require authentication via JWT token: -```text +```javascript // Include token in connection URL const ws = new WebSocket('ws://localhost:9090/ws?token=' + jwtToken); ws.onopen = function() { @@ -93,7 +93,7 @@ ws.onopen = function() { Fired when a workflow task status changes. -```text +```json { "event_type": "TaskStatusChanged", "timestamp": "2025-09-26T10:00:00Z", @@ -116,7 +116,7 @@ Fired when a workflow task status changes. Fired when batch operation status changes. 
-```text +```json { "event_type": "BatchOperationUpdate", "timestamp": "2025-09-26T10:00:00Z", @@ -150,7 +150,7 @@ Fired when batch operation status changes. Fired when system health status changes. -```text +```json { "event_type": "SystemHealthUpdate", "timestamp": "2025-09-26T10:00:00Z", @@ -185,7 +185,7 @@ Fired when system health status changes. Fired when workflow progress changes. -```text +```json { "event_type": "WorkflowProgressUpdate", "timestamp": "2025-09-26T10:00:00Z", @@ -215,7 +215,7 @@ Fired when workflow progress changes. Real-time log streaming. -```text +```json { "event_type": "LogEntry", "timestamp": "2025-09-26T10:00:00Z", @@ -241,7 +241,7 @@ Real-time log streaming. Real-time metrics streaming. -```text +```json { "event_type": "MetricUpdate", "timestamp": "2025-09-26T10:00:00Z", @@ -266,7 +266,7 @@ Real-time metrics streaming. Applications can define custom event types: -```text +```json { "event_type": "CustomApplicationEvent", "timestamp": "2025-09-26T10:00:00Z", @@ -283,7 +283,7 @@ Applications can define custom event types: ### Connection Management -```text +```javascript class ProvisioningWebSocket { constructor(baseUrl, token, options = {}) { this.baseUrl = baseUrl; @@ -430,7 +430,7 @@ ws.subscribe(['TaskStatusChanged', 'WorkflowProgressUpdate']); ### Real-Time Dashboard Example -```text +```javascript class ProvisioningDashboard { constructor(wsUrl, token) { this.ws = new ProvisioningWebSocket(wsUrl, token); @@ -542,7 +542,7 @@ const dashboard = new ProvisioningDashboard('ws://localhost:9090', jwtToken); The orchestrator implements WebSocket support using Axum and Tokio: -```text +```rust use axum::{ extract::{ws::WebSocket, ws::WebSocketUpgrade, Query, State}, response::Response, @@ -702,7 +702,7 @@ fn has_event_permission(claims: &Claims, event_type: &str) -> bool { ### Client-Side Filtering -```text +```javascript // Subscribe to specific event types ws.subscribe(['TaskStatusChanged', 'WorkflowProgressUpdate']); @@ -741,7 +741,7 @@ 
Events can be filtered on the server side based on: ### Connection Errors -```text +```javascript ws.on('error', (error) => { console.error('WebSocket error:', error); @@ -780,7 +780,7 @@ ws.on('disconnected', (event) => { ### Heartbeat and Keep-Alive -```text +```javascript class ProvisioningWebSocket { constructor(baseUrl, token, options = {}) { // ... existing code ... @@ -835,7 +835,7 @@ class ProvisioningWebSocket { To improve performance, the server can batch multiple events into single WebSocket messages: -```text +```json { "type": "batch", "timestamp": "2025-09-26T10:00:00Z", @@ -856,7 +856,7 @@ To improve performance, the server can batch multiple events into single WebSock Enable message compression for large events: -```text +```javascript const ws = new WebSocket('ws://localhost:9090/ws?token=jwt&compression=true'); ``` diff --git a/docs/src/architecture/adr/ADR-001-project-structure.md b/docs/src/architecture/adr/ADR-001-project-structure.md index a708b02..e7420ed 100644 --- a/docs/src/architecture/adr/ADR-001-project-structure.md +++ b/docs/src/architecture/adr/ADR-001-project-structure.md @@ -28,7 +28,7 @@ The system needed a clear, maintainable structure that supports: Adopt a **domain-driven hybrid structure** organized around functional boundaries: -```text +```bash src/ ├── core/ # Core system and CLI entry point ├── platform/ # High-performance coordination layer (Rust orchestrator) diff --git a/docs/src/architecture/adr/ADR-002-distribution-strategy.md b/docs/src/architecture/adr/ADR-002-distribution-strategy.md index 6b31d34..d358cee 100644 --- a/docs/src/architecture/adr/ADR-002-distribution-strategy.md +++ b/docs/src/architecture/adr/ADR-002-distribution-strategy.md @@ -49,7 +49,7 @@ Implement a **layered distribution strategy** with clear separation between deve ### Distribution Structure -```text +```bash # User Distribution /usr/local/bin/ ├── provisioning # Main CLI entry point @@ -153,7 +153,7 @@ Use environment variables to control what 
gets installed. ### Configuration Hierarchy -```text +```bash System Defaults (lowest precedence) └── User Configuration └── Project Configuration @@ -176,4 +176,4 @@ System Defaults (lowest precedence) - Workspace Isolation Decision (ADR-003) - Configuration System Migration (CLAUDE.md) - User Experience Guidelines (Design Principles) -- Installation and Deployment Procedures +- Installation and Deployment Procedures \ No newline at end of file diff --git a/docs/src/architecture/adr/ADR-003-workspace-isolation.md b/docs/src/architecture/adr/ADR-003-workspace-isolation.md index dc9948d..676abe3 100644 --- a/docs/src/architecture/adr/ADR-003-workspace-isolation.md +++ b/docs/src/architecture/adr/ADR-003-workspace-isolation.md @@ -33,7 +33,7 @@ Implement **isolated user workspaces** with clear boundaries and hierarchical co ### Workspace Structure -```text +```bash ~/workspace/provisioning/ # User workspace root ├── config/ │ ├── user.toml # User preferences and overrides @@ -141,7 +141,7 @@ Store all user configuration in database. 
### Workspace Initialization -```text +```bash # Automatic workspace creation on first run provisioning workspace init @@ -163,7 +163,7 @@ provisioning workspace validate ### Backup and Migration -```text +```bash # Backup entire workspace provisioning workspace backup --output ~/backup/provisioning-workspace.tar.gz diff --git a/docs/src/architecture/adr/ADR-004-hybrid-architecture.md b/docs/src/architecture/adr/ADR-004-hybrid-architecture.md index 4a375f4..182633d 100644 --- a/docs/src/architecture/adr/ADR-004-hybrid-architecture.md +++ b/docs/src/architecture/adr/ADR-004-hybrid-architecture.md @@ -54,7 +54,7 @@ Implement a **Hybrid Rust/Nushell Architecture** with clear separation of concer #### Rust → Nushell Communication -```text +```rust // Rust orchestrator invokes Nushell scripts via process execution let result = Command::new("nu") .arg("-c") @@ -64,7 +64,7 @@ let result = Command::new("nu") #### Nushell → Rust Communication -```text +```nushell # Nushell submits workflows to Rust orchestrator via HTTP API http post "http://localhost:9090/workflows/servers/create" { name: "server-name", diff --git a/docs/src/architecture/adr/ADR-005-extension-framework.md b/docs/src/architecture/adr/ADR-005-extension-framework.md index 1cf7735..fbbc8a7 100644 --- a/docs/src/architecture/adr/ADR-005-extension-framework.md +++ b/docs/src/architecture/adr/ADR-005-extension-framework.md @@ -45,7 +45,7 @@ Implement a **registry-based extension framework** with structured discovery and ### Extension Structure -```text +```bash extensions/ ├── providers/ # Provider extensions │ └── custom-cloud/ @@ -75,7 +75,7 @@ extensions/ ### Extension Manifest (extension.toml) -```text +```toml [extension] name = "custom-provider" version = "1.0.0" @@ -186,7 +186,7 @@ Traditional plugin architecture with dynamic loading. 
### Extension Loading Lifecycle -```text +```bash # Extension discovery and validation provisioning extension discover provisioning extension validate --extension custom-provider @@ -208,7 +208,7 @@ provisioning extension update custom-provider Extensions integrate with hierarchical configuration system: -```text +```toml # System configuration includes extension settings [custom_provider] api_endpoint = "https://api.custom-cloud.com" @@ -238,7 +238,7 @@ timeout = 30 ### Provider Extension Pattern -```text +```nushell # extensions/providers/custom-cloud/nulib/provider.nu export def list-servers [] -> table { http get $"($config.custom_provider.api_endpoint)/servers" @@ -260,7 +260,7 @@ export def create-server [name: string, config: record] -> record { ### Task Service Extension Pattern -```text +```nushell # extensions/taskservs/custom-service/nulib/service.nu export def install [server: string] -> nothing { let manifest_data = open ./manifests/deployment.yaml diff --git a/docs/src/architecture/adr/ADR-006-provisioning-cli-refactoring.md b/docs/src/architecture/adr/ADR-006-provisioning-cli-refactoring.md index 0d3a572..fb8ed5a 100644 --- a/docs/src/architecture/adr/ADR-006-provisioning-cli-refactoring.md +++ b/docs/src/architecture/adr/ADR-006-provisioning-cli-refactoring.md @@ -40,7 +40,7 @@ monolithic structure created multiple critical problems: We refactored the monolithic CLI into a **modular, domain-driven architecture** with the following structure: -```text +```bash provisioning/core/nulib/ ├── provisioning (211 lines) ⬅️ 84% reduction ├── main_provisioning/ @@ -63,7 +63,7 @@ provisioning/core/nulib/ Single source of truth for all flag parsing and argument building: -```text +```nushell export def parse_common_flags [flags: record]: nothing -> record export def build_module_args [flags: record, extra: string = ""]: nothing -> string export def set_debug_env [flags: record] @@ -81,7 +81,7 @@ export def get_debug_flag [flags: record]: nothing -> string Central 
routing with 80+ command mappings: -```text +```nushell export def get_command_registry []: nothing -> record # 80+ shortcuts export def dispatch_command [args: list, flags: record] # Main router ``` @@ -148,7 +148,7 @@ Eliminated repetition: All handlers depend on abstractions (flag records, not concrete flags): -```text +```nushell # Handler signature export def handle_infrastructure_command [ command: string @@ -182,7 +182,7 @@ export def handle_infrastructure_command [ Users can now access help in multiple ways: -```text +```bash # All these work equivalently: provisioning help workspace provisioning workspace help # ⬅️ NEW: Bi-directional @@ -192,7 +192,7 @@ provisioning help ws # ⬅️ NEW: Shortcut in help **Implementation:** -```text +```nushell # Intercept "command help" → "help command" let first_op = if ($ops_list | length) > 0 { ($ops_list | get 0) } else { "" } if $first_op in ["help" "h"] { @@ -242,7 +242,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`): ### Test Results -```text +```bash 📋 Testing main help... ✅ 📋 Testing category help... ✅ 🔄 Testing bi-directional help... 
✅ @@ -319,7 +319,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`): ### Before: Repetitive Flag Handling -```text +```nushell "server" => { let use_check = if $check { "--check "} else { "" } let use_yes = if $yes { "--yes" } else { "" } @@ -335,7 +335,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`): ### After: Clean, Reusable -```text +```nushell def handle_server [ops: string, flags: record] { let args = build_module_args $flags $ops run_module $args "server" --exec diff --git a/docs/src/architecture/adr/ADR-007-kms-simplification.md b/docs/src/architecture/adr/ADR-007-kms-simplification.md index b0bb5cc..3d8eeff 100644 --- a/docs/src/architecture/adr/ADR-007-kms-simplification.md +++ b/docs/src/architecture/adr/ADR-007-kms-simplification.md @@ -128,7 +128,7 @@ Remove support for: ### For Development -```text +```bash # 1. Install Age brew install age # or apt install age @@ -142,7 +142,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisionin ### For Production -```text +```bash # 1. Set up Cosmian KMS (cloud or self-hosted) # 2. Create master key in Cosmian # 3. Migrate secrets from Vault/AWS to Cosmian diff --git a/docs/src/architecture/adr/ADR-008-cedar-authorization.md b/docs/src/architecture/adr/ADR-008-cedar-authorization.md index 121f48d..ae07fca 100644 --- a/docs/src/architecture/adr/ADR-008-cedar-authorization.md +++ b/docs/src/architecture/adr/ADR-008-cedar-authorization.md @@ -117,7 +117,7 @@ Use Casbin authorization library. #### Architecture -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ Orchestrator │ ├─────────────────────────────────────────────────────────┤ @@ -143,7 +143,7 @@ Use Casbin authorization library. 
#### Policy Organization -```text +```bash provisioning/config/cedar-policies/ ├── schema.cedar # Entity and action definitions ├── production.cedar # Production environment policies @@ -154,7 +154,7 @@ provisioning/config/cedar-policies/ #### Rust Implementation -```text +```rust provisioning/platform/orchestrator/src/security/ ├── cedar.rs # Cedar engine integration (450 lines) ├── policy_loader.rs # Policy loading with hot reload (320 lines) @@ -190,7 +190,7 @@ provisioning/platform/orchestrator/src/security/ #### Context Variables -```text +```rust AuthorizationContext { mfa_verified: bool, // MFA verification status ip_address: String, // Client IP address @@ -204,7 +204,7 @@ AuthorizationContext { #### Example Policy -```text +```cedar // Production deployments require MFA verification @id("prod-deploy-mfa") @description("All production deployments must have MFA verification") diff --git a/docs/src/architecture/adr/ADR-009-security-system-complete.md b/docs/src/architecture/adr/ADR-009-security-system-complete.md index ff54bd5..9285456 100644 --- a/docs/src/architecture/adr/ADR-009-security-system-complete.md +++ b/docs/src/architecture/adr/ADR-009-security-system-complete.md @@ -249,7 +249,7 @@ Implement a complete security architecture using 12 specialized components organ ### End-to-End Request Flow -```text +```bash 1. User Request ↓ 2. Rate Limiting (100 req/min per IP) @@ -271,7 +271,7 @@ Implement a complete security architecture using 12 specialized components organ ### Emergency Access Flow -```text +```bash 1. Emergency Request (reason + justification) ↓ 2.
Multi-Party Approval (2+ approvers, different teams) @@ -382,7 +382,7 @@ Implement a complete security architecture using 12 specialized components organ ### Development -```text +```bash # Start all services cd provisioning/platform/kms-service && cargo run & cd provisioning/platform/orchestrator && cargo run & @@ -391,7 +391,7 @@ cd provisioning/platform/control-center && cargo run & ### Production -```text +```bash # Kubernetes deployment kubectl apply -f k8s/security-stack.yaml @@ -410,7 +410,7 @@ systemctl start provisioning-control-center ### Environment Variables -```text +```bash # JWT export JWT_ISSUER="control-center" export JWT_AUDIENCE="orchestrator,cli" @@ -433,7 +433,7 @@ export MFA_WEBAUTHN_RP_ID="provisioning.example.com" ### Config Files -```text +```toml # provisioning/config/security.toml [jwt] issuer = "control-center" @@ -470,7 +470,7 @@ pii_anonymization = true ### Run All Tests -```text +```bash # Control Center (JWT, MFA) cd provisioning/platform/control-center cargo test @@ -489,7 +489,7 @@ nu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu ### Integration Tests -```text +```bash # Full security flow cd provisioning/platform/orchestrator cargo test --test security_integration_tests diff --git a/docs/src/architecture/adr/adr-010-configuration-format-strategy.md b/docs/src/architecture/adr/adr-010-configuration-format-strategy.md index 6321328..4c1f3fb 100644 --- a/docs/src/architecture/adr/adr-010-configuration-format-strategy.md +++ b/docs/src/architecture/adr/adr-010-configuration-format-strategy.md @@ -65,7 +65,7 @@ Define and document the three-format approach through: **Move template files to proper directory structure and correct extensions**: -```text +```bash Previous (KCL): provisioning/kcl/templates/*.k (had Nushell/Jinja2 code, not KCL) @@ -326,7 +326,7 @@ Current (Nickel): Currently, 15/16 files in `provisioning/kcl/templates/` have `.k` extension but contain Nushell/Jinja2 code, not KCL: -```text +```bash
provisioning/kcl/templates/ ├── server.ncl # Actually Nushell/Jinja2 template ├── taskserv.ncl # Actually Nushell/Jinja2 template @@ -343,7 +343,7 @@ This causes: Reorganize into type-specific directories: -```text +```bash provisioning/templates/ ├── nushell/ # Nushell code generation (*.nu.j2) │ ├── server.nu.j2 diff --git a/docs/src/architecture/adr/adr-011-nickel-migration.md b/docs/src/architecture/adr/adr-011-nickel-migration.md index 3b8a7bd..dd41612 100644 --- a/docs/src/architecture/adr/adr-011-nickel-migration.md +++ b/docs/src/architecture/adr/adr-011-nickel-migration.md @@ -112,7 +112,7 @@ The provisioning system required: **Example - UpCloud Provider**: -```text +```nickel # upcloud/nickel/main.ncl (migrated from upcloud/kcl/) let contracts = import "./contracts.ncl" in let defaults = import "./defaults.ncl" in @@ -171,7 +171,7 @@ let defaults = import "./defaults.ncl" in **File 1: Contracts** (`batch_contracts.ncl`): -```text +```nickel { BatchScheduler = { strategy | String, @@ -184,7 +184,7 @@ let defaults = import "./defaults.ncl" in **File 2: Defaults** (`batch_defaults.ncl`): -```text +```nickel { scheduler = { strategy = "dependency_first", @@ -197,7 +197,7 @@ let defaults = import "./defaults.ncl" in **File 3: Main** (`batch.ncl`): -```text +```nickel let contracts = import "./batch_contracts.ncl" in let defaults = import "./batch_defaults.ncl" in @@ -218,7 +218,7 @@ let defaults = import "./batch_defaults.ncl" in ### Domain-Organized Architecture -```text +```bash provisioning/schemas/ ├── lib/ # Storage, TaskServDef, ClusterDef ├── config/ # Settings, defaults, workspace_config @@ -233,7 +233,7 @@ provisioning/schemas/ **Import pattern**: -```text +```nickel let provisioning = import "./main.ncl" in provisioning.lib # For Storage, TaskServDef provisioning.config.settings # For Settings, Defaults @@ -254,7 +254,7 @@ provisioning.operations.workflows - No snapshot overhead - Usage: Local development, testing, experimentation -```text
+```nickel # workspace_librecloud/nickel/main.ncl import "../../provisioning/schemas/main.ncl" import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl" @@ -264,13 +264,13 @@ import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl" Create immutable snapshots for reproducible deployments: -```text +```bash provisioning workspace freeze --version "2025-12-15-prod-v1" --env production ``` **Frozen structure** (`.frozen/{version}/`): -```text +```bash ├── provisioning/schemas/ # Snapshot of central schemas ├── extensions/ # Snapshot of all extensions └── workspace/ # Snapshot of workspace configs @@ -285,7 +285,7 @@ provisioning workspace freeze --version "2025-12-15-prod-v1" --env production **Deploy from frozen snapshot**: -```text +```bash provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji ``` @@ -308,7 +308,7 @@ provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji **Key Feature**: Nickel schemas → Type-safe UIs → Nickel output -```text +```bash # Nickel schema → Interactive form typedialog form --schema server.ncl --output json diff --git a/docs/src/architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.md b/docs/src/architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.md index d657b01..5181ef7 100644 --- a/docs/src/architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.md +++ b/docs/src/architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.md @@ -19,7 +19,7 @@ The architectural decision was whether the plugin should: Nickel configurations in provisioning use the **module system**: -```text +```nickel # config/database.ncl import "lib/defaults" as defaults import "lib/validation" as valid @@ -47,7 +47,7 @@ Implement the `nu_plugin_nickel` plugin as a **CLI wrapper** that invokes the ex ### Architecture Diagram -```text +```bash ┌─────────────────────────────┐ │ Nushell Script │ │ │ @@ -288,7 +288,7 @@ This makes direct usage risky. The CLI is the documented, proven interface.
The plugin uses the **correct Nickel command syntax**: -```text +```rust // Correct: cmd.arg("export").arg(file).arg("--format").arg(format); // Results in: "nickel export /file --format json" @@ -323,7 +323,7 @@ Plugin correctly processes JSON output: This enables Nushell cell path access: -```text +```nushell nickel-export json /config.ncl | .database.host # ✅ Works ``` @@ -343,7 +343,7 @@ nickel-export json /config.ncl | .database.host # ✅ Works **Manual Verification**: -```text +```nushell # Test module imports nickel-export json /workspace/config.ncl diff --git a/docs/src/architecture/adr/adr-013-typdialog-integration.md b/docs/src/architecture/adr/adr-013-typdialog-integration.md index 9f9a93b..0630d09 100644 --- a/docs/src/architecture/adr/adr-013-typdialog-integration.md +++ b/docs/src/architecture/adr/adr-013-typdialog-integration.md @@ -78,7 +78,7 @@ integration with the provisioning orchestrator. ### Architecture Diagram -```text +```bash ┌─────────────────────────────────────────┐ │ Nushell Script │ │ │ @@ -167,7 +167,7 @@ integration with the provisioning orchestrator.
Nushell's `input` command is limited: -```text +```nushell # Current: No validation, no security let password = input "Password: " # ❌ Shows in terminal let region = input "AWS Region: " # ❌ No autocomplete/validation @@ -184,7 +184,7 @@ let region = input "AWS Region: " # ❌ No autocomplete/validation Nickel is declarative and cannot prompt users: -```text +```nickel # Nickel defines what the config looks like, NOT how to get it { database = { @@ -243,7 +243,7 @@ Nickel is declarative and cannot prompt users: ### Mitigation Strategies **Non-Interactive Mode**: -```text +```rust // Support both interactive and non-interactive if terminal::is_interactive() { // Show TUI dialog @@ -255,7 +255,7 @@ if terminal::is_interactive() { ``` **Testing**: -```text +```rust // Unit tests: Test form validation logic (no TUI) #[test] fn test_validate_workspace_name() { @@ -267,7 +267,7 @@ fn test_validate_workspace_name() { ``` **Scriptability**: -```text +```bash # Batch mode: Provide config via file provisioning workspace init --config workspace.toml @@ -316,7 +316,7 @@ provisioning workspace init --interactive ### Form Definition Pattern -```text +```rust use typdialog::Form; pub fn workspace_initialization_form() -> Result { @@ -353,7 +353,7 @@ pub fn workspace_initialization_form() -> Result { ### Integration with Nickel -```text +```rust // 1.
Get validated input from TUI dialog let config = workspace_initialization_form()?; @@ -370,7 +370,7 @@ fs::write("workspace/config.toml", config_toml)?; ### CLI Command Structure -```text +```rust // provisioning/core/cli/src/commands/workspace.rs #[derive(Parser)] @@ -404,7 +404,7 @@ pub fn handle_workspace_init(args: InitArgs) -> Result<()> { ### Validation Rules -```text +```rust pub fn validate_workspace_name(name: &str) -> Result<(), String> { // Alphanumeric, hyphens, 3-32 chars let re = Regex::new(r"^[a-z0-9-]{3,32}$").unwrap(); @@ -425,7 +425,7 @@ pub fn validate_region(region: &str) -> Result<(), String> { ### Security: Password Handling -```text +```rust use zeroize::Zeroizing; pub fn get_secure_password() -> Result> { @@ -447,7 +447,7 @@ pub fn get_secure_password() -> Result> { ## Testing Strategy **Unit Tests**: -```text +```rust #[test] fn test_workspace_name_validation() { assert!(validate_workspace_name("my-workspace").is_ok()); @@ -457,7 +457,7 @@ fn test_workspace_name_validation() { ``` **Integration Tests**: -```text +```rust // Use non-interactive mode with config files #[test] fn test_workspace_init_non_interactive() { @@ -481,7 +481,7 @@ fn test_workspace_init_non_interactive() { ``` **Manual Testing**: -```text +```bash # Test interactive flow cargo build --release ./target/release/provisioning workspace init --interactive @@ -495,7 +495,7 @@ cargo build --release ## Configuration Integration **CLI Flag**: -```text +```toml # provisioning/config/config.defaults.toml [ui] interactive_mode = "auto" # "auto" | "always" | "never" @@ -503,7 +503,7 @@ dialog_theme = "default" # "default" | "minimal" | "colorful" ``` **Environment Override**: -```text +```bash # Force non-interactive mode (for CI/CD) export PROVISIONING_INTERACTIVE=false @@ -523,7 +523,7 @@ export PROVISIONING_INTERACTIVE=true - Validation rule patterns **Configuration Schema**: -```text +```nickel # provisioning/schemas/workspace.ncl { WorkspaceConfig = { diff --git
a/docs/src/architecture/adr/adr-014-secretumvault-integration.md b/docs/src/architecture/adr/adr-014-secretumvault-integration.md index e094696..0c84c16 100644 --- a/docs/src/architecture/adr/adr-014-secretumvault-integration.md +++ b/docs/src/architecture/adr/adr-014-secretumvault-integration.md @@ -93,7 +93,7 @@ Integrate **SecretumVault** as the centralized secrets management system for the ### Architecture Diagram -```text +```bash ┌─────────────────────────────────────────────────────────────┐ │ Provisioning CLI / Orchestrator / Services │ │ │ @@ -273,7 +273,7 @@ SOPS is excellent for **static secrets in git**, but inadequate for: ### Mitigation Strategies **High Availability**: -```text +```bash # Deploy SecretumVault cluster (3 nodes) provisioning deploy secretum-vault --ha --replicas 3 @@ -282,7 +282,7 @@ provisioning deploy secretum-vault --ha --replicas 3 ``` **Migration from SOPS**: -```text +```bash # Phase 1: Import existing SOPS secrets into SecretumVault provisioning secrets migrate --from-sops config/secrets.yaml @@ -291,7 +291,7 @@ provisioning secrets migrate --from-sops config/secrets.yaml ``` **Fallback Strategy**: -```text +```rust // Graceful degradation if vault unavailable let secret = match vault_client.get_secret("database/password").await { Ok(s) => s, @@ -305,7 +305,7 @@ let secret = match vault_client.get_secret("database/password").await { ``` **Operational Monitoring**: -```text +```bash # prometheus metrics secretum_vault_request_duration_seconds secretum_vault_secret_lease_expiry @@ -351,7 +351,7 @@ secretum_vault_raft_leader_changes ### SecretumVault Deployment -```text +```bash # Deploy via provisioning system provisioning deploy secretum-vault --ha @@ -367,7 +367,7 @@ provisioning vault unseal --key-shares 5 --key-threshold 3 ### Rust Client Library -```text +```rust // provisioning/core/libs/secretum-client/src/lib.rs use secretum_vault::{Client, SecretEngine, Auth}; @@ -402,7 +402,7 @@ impl VaultClient { ### Nushell Integration
-```text +```nushell # Nushell commands via Rust CLI wrapper provisioning secrets get database/prod/password provisioning secrets set api/keys/stripe --value "sk_live_xyz" @@ -413,7 +413,7 @@ provisioning secrets list database/ ### Nickel Configuration Integration -```text +```nickel # provisioning/schemas/database.ncl { database = { @@ -429,7 +429,7 @@ provisioning secrets list database/ ### Cedar Policy for Secret Access -```text +```cedar // policy: developers can read dev secrets, not prod permit( principal in Group::"developers", @@ -455,7 +455,7 @@ permit( ### Dynamic Database Credentials -```text +```rust // Application requests temporary DB credentials let creds = vault_client .database() @@ -472,7 +472,7 @@ println!("TTL: {}", creds.lease_duration); // 1h ### Secret Rotation Automation -```text +```toml # secretum-vault config [[rotation_policies]] path = "database/prod/password" @@ -487,7 +487,7 @@ max_age = "90d" ### Audit Log Format -```text +```json { "timestamp": "2025-01-08T12:34:56Z", "type": "request", @@ -515,7 +515,7 @@ max_age = "90d" ## Testing Strategy **Unit Tests**: -```text +```rust #[tokio::test] async fn test_get_secret() { let vault = mock_vault_client(); @@ -533,7 +533,7 @@ async fn test_dynamic_credentials_generation() { ``` **Integration Tests**: -```text +```bash # Test vault deployment provisioning deploy secretum-vault --test-mode provisioning vault init @@ -551,7 +551,7 @@ provisioning secrets rotate test/secret ``` **Security Tests**: -```text +```rust #[tokio::test] async fn test_unauthorized_access_denied() { let vault = vault_client_with_limited_token(); @@ -563,7 +563,7 @@ async fn test_unauthorized_access_denied() { ## Configuration Integration **Provisioning Config**: -```text +```toml # provisioning/config/config.defaults.toml [secrets] provider = "secretum-vault" # "secretum-vault" | "sops" | "env" @@ -583,7 +583,7 @@ max_size = "100MB" ``` **Environment Variables**: -```text +```bash export
VAULT_ADDR="https://vault.example.com:8200" export VAULT_TOKEN="s.abc123def456..." export VAULT_NAMESPACE="provisioning" diff --git a/docs/src/architecture/adr/adr-015-ai-integration-architecture.md b/docs/src/architecture/adr/adr-015-ai-integration-architecture.md index 4ff68ee..fb0f333 100644 --- a/docs/src/architecture/adr/adr-015-ai-integration-architecture.md +++ b/docs/src/architecture/adr/adr-015-ai-integration-architecture.md @@ -100,7 +100,7 @@ All AI components are **schema-aware**, **security-enforced**, and **human-super ### Architecture Diagram -```text +```bash ┌─────────────────────────────────────────────────────────────────┐ │ User Interfaces │ │ │ @@ -268,7 +268,7 @@ All AI components are **schema-aware**, **security-enforced**, and **human-super Traditional AI code generation fails for infrastructure because: -```text +```bash Generic AI (like GitHub Copilot): ❌ Generates syntactically correct but semantically wrong configs ❌ Doesn't understand cloud provider constraints @@ -278,7 +278,7 @@ Generic AI (like GitHub Copilot): ``` **Schema-aware AI** (our approach): -```text +```nickel # Nickel schema provides ground truth { Database = { @@ -303,7 +303,7 @@ Generic AI (like GitHub Copilot): LLMs alone have limitations: -```text +```bash Pure LLM: ❌ Knowledge cutoff (no recent updates) ❌ Hallucinations (invents plausible-sounding configs) @@ -312,7 +312,7 @@ Pure LLM: ``` **RAG-enhanced LLM**: -```text +```bash Query: "How to configure Postgres with encryption?"
RAG retrieves: @@ -332,7 +332,7 @@ LLM generates answer WITH retrieved context: AI-generated infrastructure configs require human approval: -```text +```rust // All AI operations require approval pub async fn ai_generate_config(request: GenerateRequest) -> Result { let ai_generated = ai_service.generate(request).await?; @@ -414,7 +414,7 @@ No single LLM provider is best for all tasks: ### Mitigation Strategies **Cost Control**: -```text +```toml [ai.rate_limiting] requests_per_minute = 60 tokens_per_day = 1000000 @@ -427,7 +427,7 @@ ttl = "1h" ``` **Latency Optimization**: -```text +```rust // Streaming responses for real-time feedback pub async fn ai_generate_stream(request: GenerateRequest) -> impl Stream { ai_service @@ -438,7 +438,7 @@ pub async fn ai_generate_stream(request: GenerateRequest) -> impl Stream, @@ -1074,7 +1074,7 @@ pub struct AIAuditLog { **Estimated Costs** (per month, based on typical usage): -```text +```bash Assumptions: - 100 active users - 10 AI config generations per user per day diff --git a/docs/src/architecture/adr/adr-016-schema-driven-accessor-generation.md b/docs/src/architecture/adr/adr-016-schema-driven-accessor-generation.md index 1985040..735916b 100644 --- a/docs/src/architecture/adr/adr-016-schema-driven-accessor-generation.md +++ b/docs/src/architecture/adr/adr-016-schema-driven-accessor-generation.md @@ -10,7 +10,7 @@ The `lib_provisioning/config/accessor.nu` file contains 1567 lines across 187 accessor functions.
Analysis reveals that 95% of these functions follow an identical mechanical pattern: -```text +```nushell export def get-{field-name} [--config: record] { config-get "{path.to.field}" {default_value} --config $config } @@ -42,7 +42,7 @@ Implement **Schema-Driven Accessor Generation**: automatically generate accessor ### Architecture -```text +```bash Nickel Schema (contracts.ncl) ↓ [Parse & Extract Schema Structure] @@ -156,4 +156,4 @@ CI/CD enforces: schema hash == generated code - Nickel Language: [https://nickel-lang.org/](https://nickel-lang.org/) - Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` - Current Accessor Implementation: `provisioning/core/nulib/lib_provisioning/config/accessor.nu` -- Schema Source: `provisioning/schemas/config/settings/contracts.ncl` +- Schema Source: `provisioning/schemas/config/settings/contracts.ncl` \ No newline at end of file diff --git a/docs/src/architecture/adr/adr-017-plugin-wrapper-abstraction-framework.md b/docs/src/architecture/adr/adr-017-plugin-wrapper-abstraction-framework.md index a825e8e..5e2a3e9 100644 --- a/docs/src/architecture/adr/adr-017-plugin-wrapper-abstraction-framework.md +++ b/docs/src/architecture/adr/adr-017-plugin-wrapper-abstraction-framework.md @@ -16,7 +16,7 @@ The provisioning system integrates with four critical plugins, each with its own Analysis reveals ~90% code duplication across these wrappers: -```text +```nushell # Pattern repeated 4 times with minor variations: export def plugin-available?
[] { # Check if plugin is installed @@ -53,7 +53,7 @@ Implement **Plugin Wrapper Abstraction Framework**: replace manual plugin wrappe ### Architecture -```text +```bash Plugin Definition (YAML) ├─ plugin: auth ├─ methods: @@ -89,7 +89,7 @@ Generated Wrappers **Nushell 0.109 Compliant** (do-complete pattern, no try-catch): -```text +```nushell def call-plugin-with-fallback [method: string args: record] { let plugin_result = ( do { @@ -175,7 +175,7 @@ def call-plugin-with-fallback [method: string args: record] { ### auth.yaml Example -```text +```yaml plugin: auth http_endpoint: http://localhost:8001 methods: @@ -196,7 +196,7 @@ methods: **Feature Flag Approach**: -```text +```bash # Use original manual wrappers export PROVISIONING_USE_GENERATED_PLUGINS=false @@ -222,4 +222,4 @@ Allows parallel operation and gradual migration. - Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` - Do-Complete Pattern: Error handling without try-catch -- Plugin Framework: `provisioning/core/nulib/lib_provisioning/plugins/` +- Plugin Framework: `provisioning/core/nulib/lib_provisioning/plugins/` \ No newline at end of file diff --git a/docs/src/architecture/adr/adr-018-help-system-fluent-integration.md b/docs/src/architecture/adr/adr-018-help-system-fluent-integration.md index 23b672f..0416366 100644 --- a/docs/src/architecture/adr/adr-018-help-system-fluent-integration.md +++ b/docs/src/architecture/adr/adr-018-help-system-fluent-integration.md @@ -10,7 +10,7 @@ The current help system in `main_provisioning/help_system.nu` (1303 lines) consists almost entirely of hardcoded string concatenation with embedded ANSI formatting codes: -```text +```nushell def help-infrastructure [] { print "╔════════════════════════════════════════════════════╗" print "║ SERVER & INFRASTRUCTURE ║" @@ -45,7 +45,7 @@ Implement **Data-Driven Help with Mozilla Fluent Integration**: ### Architecture -```text +```bash Help Content (Fluent Files) ├─ en-US/help.ftl (65 strings - English base) └─
es-ES/help.ftl (65 strings - Spanish translations) @@ -72,7 +72,7 @@ User Interface **en-US/help.ftl**: -```text +```fluent help-main-title = PROVISIONING SYSTEM help-main-subtitle = Layered Infrastructure Automation help-main-categories = COMMAND CATEGORIES @@ -99,7 +99,7 @@ help-orch-batch = Multi-Provider Batch Operations **es-ES/help.ftl** (Spanish translations): -```text +```fluent help-main-title = SISTEMA DE PROVISIÓN help-main-subtitle = Automatización de Infraestructura por Capas help-main-categories = CATEGORÍAS DE COMANDOS @@ -126,7 +126,7 @@ help-orch-batch = Operaciones por Lotes Multi-Proveedor ### 2. Fluent Loading in Nushell -```text +```nushell def load-fluent-file [category: string] { let lang = ($env.LANG? | default "en_US" | str replace "_" "-") let fluent_path = $"provisioning/locales/($lang)/help.ftl" @@ -138,7 +138,7 @@ def load-fluent-file [category: string] { ### 3. Help System Wrapper -```text +```nushell export def help-infrastructure [] { let strings = (load-fluent-file "infrastructure") @@ -191,7 +191,7 @@ export def help-infrastructure [] { ## Language Resolution Flow -```text +```bash 1.
Check LANG environment variable LANG=es_ES.UTF-8 → extract "es_ES" or "es-ES" @@ -213,7 +213,7 @@ export def help-infrastructure [] { ### Unit Tests -```text +```bash # Test language detection LANG=en_US provisioning help infrastructure # Expected: English output @@ -227,7 +227,7 @@ LANG=fr_FR provisioning help infrastructure ## File Structure -```text +```bash provisioning/ ├── locales/ │ ├── i18n-config.toml # Locale metadata & fallback chains @@ -243,7 +243,7 @@ provisioning/ **i18n-config.toml** defines: -```text +```toml [locales] default = "en-US" fallback = "en-US" @@ -277,4 +277,4 @@ es-ES = ["en-US"] - Fluent Syntax: [https://projectfluent.org/fluent/guide/](https://projectfluent.org/fluent/guide/) - Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` - Current Help Implementation: `provisioning/core/nulib/main_provisioning/help_system.nu` - Fluent Files: `provisioning/locales/{en-US,es-ES}/help.ftl` +- Fluent Files: `provisioning/locales/{en-US,es-ES}/help.ftl` \ No newline at end of file diff --git a/docs/src/architecture/adr/adr-019-configuration-loader-modularization.md b/docs/src/architecture/adr/adr-019-configuration-loader-modularization.md index c5410b0..2f10d46 100644 --- a/docs/src/architecture/adr/adr-019-configuration-loader-modularization.md +++ b/docs/src/architecture/adr/adr-019-configuration-loader-modularization.md @@ -9,7 +9,7 @@ The `lib_provisioning/config/loader.nu` file (2199 lines) is a monolithic implementation mixing multiple unrelated concerns: -```text +```bash Current Structure (2199 lines): ├─ Cache lookup/storage (300 lines) ├─ Nickel evaluation (400 lines) @@ -43,7 +43,7 @@ Implement **Layered Loader Architecture**: decompose monolithic loader into spec ### Target Architecture -```text +```bash lib_provisioning/config/ ├── loader.nu # ORCHESTRATOR (< 300 lines) │ └─ Coordinates loading pipeline @@ -165,7 +165,7 @@ Create each loader as independent module: Extract Nickel evaluation logic: -```text +```nushell
export def evaluate-nickel [file: string] { let result = ( do { @@ -185,7 +185,7 @@ export def evaluate-nickel [file: string] { Implement thin loader.nu: -```text +```nushell export def load-provisioning-config [] { let env_config = (env-loader load-environment) let toml_config = (toml-loader load-toml "config.toml") @@ -207,7 +207,7 @@ export def load-provisioning-config [] { Create test for each module: -```text +```bash tests/config/ ├── loaders/ │ ├── test_nickel_loader.nu @@ -235,7 +235,7 @@ tests/config/ ## Backward Compatibility **Public API Unchanged**: -```text +```nushell # Current usage (unchanged) let config = (load-provisioning-config) ``` @@ -259,4 +259,4 @@ let config = (load-provisioning-config) - Current Implementation: `provisioning/core/nulib/lib_provisioning/config/loader.nu` - Cache System: `provisioning/core/nulib/lib_provisioning/config/cache/` -- Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` +- Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` \ No newline at end of file diff --git a/docs/src/architecture/adr/adr-020-command-handler-domain-splitting.md b/docs/src/architecture/adr/adr-020-command-handler-domain-splitting.md index ab78d40..2105453 100644 --- a/docs/src/architecture/adr/adr-020-command-handler-domain-splitting.md +++ b/docs/src/architecture/adr/adr-020-command-handler-domain-splitting.md @@ -38,7 +38,7 @@ Implement **Domain-Based Command Modules**: split monolithic handlers into focus ### Target Architecture -```text +```bash main_provisioning/commands/ ├── dispatcher.nu # Routes commands to domain handlers ├── utilities/ # Split by domain @@ -168,7 +168,7 @@ Create `integrations/` directory with 3 modules: Implement `dispatcher.nu`: -```text +```nushell export def provision-ssh [args] { use ./utilities/ssh.nu * handle-ssh-command $args @@ -189,7 +189,7 @@ export def provision-cache [args] { Keep public exports in original files for compatibility: -```text +```nushell # commands/utilities.nu (compatibility
layer) use ./utilities/ssh.nu * use ./utilities/sops.nu * @@ -204,7 +204,7 @@ export use ./utilities/sops.nu Create test structure: -```text +```bash tests/commands/ ├── utilities/ │ ├── test_ssh.nu @@ -225,7 +225,7 @@ tests/commands/ **utilities/ssh.nu**: -```text +```nushell # Connect to remote host export def ssh-connect [host: string --port: int = 22] { # Implementation @@ -244,7 +244,7 @@ export def ssh-close [host: string] { ## File Structure -```text +```bash main_provisioning/commands/ ├── dispatcher.nu # Route to domain handlers ├── utilities/ @@ -269,7 +269,7 @@ main_provisioning/commands/ Users see no change in CLI: -```text +```bash provisioning ssh host.example.com provisioning sops edit config.yaml provisioning cache clear @@ -281,7 +281,7 @@ provisioning guide from-scratch **Import Path Options**: -```text +```nushell # Option 1: Import from domain module (new way) use ./utilities/ssh.nu * connect $host @@ -309,4 +309,4 @@ Both paths work without breaking existing code. - Current Implementation: `provisioning/core/nulib/main_provisioning/commands/` - Nushell 0.109 Guidelines: `.claude/guidelines/nushell.md` -- Module System: Nushell module documentation +- Module System: Nushell module documentation \ No newline at end of file diff --git a/docs/src/architecture/architecture-overview.md b/docs/src/architecture/architecture-overview.md index 7b11ce9..af88b13 100644 --- a/docs/src/architecture/architecture-overview.md +++ b/docs/src/architecture/architecture-overview.md @@ -43,7 +43,7 @@ The Provisioning Platform is a modern, cloud-native infrastructure automation sy ### Architecture at a Glance -```text +```bash ┌─────────────────────────────────────────────────────────────────────┐ │ Provisioning Platform │ ├─────────────────────────────────────────────────────────────────────┤ @@ -93,7 +93,7 @@ The Provisioning Platform is a modern, cloud-native infrastructure automation sy ### High-Level Architecture -```text +```bash
┌────────────────────────────────────────────────────────────────────────────┐ │ PRESENTATION LAYER │ ├────────────────────────────────────────────────────────────────────────────┤ @@ -191,7 +191,7 @@ The system is organized into three separate repositories: #### **provisioning-core** -```text +```bash Core system functionality ├── CLI interface (Nushell entry point) ├── Core libraries (lib_provisioning) @@ -205,7 +205,7 @@ Core system functionality #### **provisioning-extensions** -```text +```bash All provider, taskserv, cluster extensions ├── providers/ │ ├── aws/ @@ -229,7 +229,7 @@ All provider, taskserv, cluster extensions #### **provisioning-platform** -```text +```bash Platform services ├── orchestrator/ (Rust) ├── control-center/ (Rust/Yew) @@ -255,7 +255,7 @@ Platform services **Architecture**: -```text +```bash Main CLI (211 lines) ↓ Command Dispatcher (264 lines) @@ -281,7 +281,7 @@ Domain Handlers (7 modules) **Hierarchical Loading**: -```text +```bash 1. System defaults (config.defaults.toml) 2. User config (~/.provisioning/config.user.toml) 3. 
Workspace config (workspace/config/provisioning.yaml) @@ -303,7 +303,7 @@ Domain Handlers (7 modules) **Architecture**: -```text +```bash src/ ├── main.rs // Entry point ├── api/ @@ -342,7 +342,7 @@ src/ **Workflow Types**: -```text +```bash workflows/ ├── server_create.nu // Server provisioning ├── taskserv.nu // Task service management @@ -371,7 +371,7 @@ workflows/ **Extension Structure**: -```text +```bash extension-name/ ├── schemas/ │ ├── main.ncl // Main schema @@ -401,7 +401,7 @@ Each extension packaged as OCI artifact: **Module System**: -```text +```bash # Discover available extensions provisioning module discover taskservs @@ -414,7 +414,7 @@ provisioning module list taskserv my-workspace **Layer System** (Configuration Inheritance): -```text +```bash Layer 1: Core (provisioning/extensions/{type}/{name}) ↓ Layer 2: Workspace (workspace/extensions/{type}/{name}) @@ -438,7 +438,7 @@ Layer 3: Infrastructure (workspace/infra/{infra}/extensions/{type}/{name}) **Example**: -```text +```nickel let { TaskservDependencies } = import "provisioning/dependencies.ncl" in { kubernetes = TaskservDependencies { @@ -467,7 +467,7 @@ let { TaskservDependencies } = import "provisioning/dependencies.ncl" in **Lifecycle Management**: -```text +```bash # Start all auto-start services provisioning platform start @@ -485,7 +485,7 @@ provisioning platform logs orchestrator --follow **Architecture**: -```text +```bash User Command (CLI) ↓ Test Orchestrator (Rust) @@ -520,7 +520,7 @@ The platform supports four operational modes that adapt the system from individu ### Mode Comparison -```text +```bash ┌───────────────────────────────────────────────────────────────────────┐ │ MODE ARCHITECTURE │ ├───────────────┬───────────────┬───────────────┬───────────────────────┤ @@ -562,7 +562,7 @@ The platform supports four operational modes that adapt the system from individu **Switching Modes**: -```text +```bash # Check current mode provisioning mode current @@ -577,7 +577,7 @@
provisioning mode validate enterprise #### Solo Mode -```text +```bash # 1. Default mode, no setup needed provisioning workspace init @@ -590,7 +590,7 @@ provisioning server create #### Multi-User Mode -```text +```bash # 1. Switch mode and authenticate provisioning mode switch multi-user provisioning auth login @@ -609,7 +609,7 @@ provisioning workspace unlock my-infra #### CI/CD Mode -```text +```bash # GitLab CI deploy: stage: deploy @@ -626,7 +626,7 @@ deploy: #### Enterprise Mode -```text +```bash # 1. Switch to enterprise, verify K8s provisioning mode switch enterprise kubectl get pods -n provisioning-system @@ -654,7 +654,7 @@ provisioning workspace unlock prod-deployment ### Service Communication -```text +```bash ┌──────────────────────────────────────────────────────────────────────┐ │ NETWORK LAYER │ ├──────────────────────────────────────────────────────────────────────┤ @@ -732,7 +732,7 @@ provisioning workspace unlock prod-deployment ### Data Storage -```text +```bash ┌────────────────────────────────────────────────────────────────┐ │ DATA LAYER │ ├────────────────────────────────────────────────────────────────┤ @@ -813,7 +813,7 @@ provisioning workspace unlock prod-deployment **Configuration Loading**: -```text +```toml 1. Load system defaults (config.defaults.toml) 2. Merge user config (~/.provisioning/config.user.toml) 3. Load workspace config (workspace/config/provisioning.yaml) @@ -824,7 +824,7 @@ provisioning workspace unlock prod-deployment **State Persistence**: -```text +```bash Workflow execution ↓ Create checkpoint (JSON) @@ -836,7 +836,7 @@ On failure, load checkpoint and resume **OCI Artifact Flow**: -```text +```bash 1. Package extension (oci-package.nu) 2. Push to OCI registry (provisioning oci push) 3. 
Extension stored as OCI artifact @@ -850,7 +850,7 @@ On failure, load checkpoint and resume ### Security Layers -```text +```bash ┌─────────────────────────────────────────────────────────────────┐ │ SECURITY ARCHITECTURE │ ├─────────────────────────────────────────────────────────────────┤ @@ -921,7 +921,7 @@ On failure, load checkpoint and resume **SOPS Integration**: -```text +```bash # Edit encrypted file provisioning sops workspace/secrets/keys.yaml.enc @@ -931,7 +931,7 @@ provisioning sops workspace/secrets/keys.yaml.enc **KMS Integration** (Enterprise): -```text +```bash # workspace/config/provisioning.yaml secrets: provider: "kms" @@ -945,7 +945,7 @@ secrets: **CI/CD Mode** (Required): -```text +```bash # Sign OCI artifact cosign sign oci://registry/kubernetes:1.28.0 @@ -955,7 +955,7 @@ cosign verify oci://registry/kubernetes:1.28.0 **Enterprise Mode** (Mandatory): -```text +```bash # Pull with verification provisioning extension pull kubernetes --verify-signature @@ -970,7 +970,7 @@ provisioning extension pull kubernetes --verify-signature #### 1. **Binary Deployment** (Solo, Multi-user) -```text +```bash User Machine ├── ~/.provisioning/bin/ │ ├── provisioning-orchestrator @@ -986,7 +986,7 @@ User Machine #### 2. **Docker Deployment** (Multi-user, CI/CD) -```text +```bash Docker Daemon ├── Container: provisioning-orchestrator ├── Container: provisioning-control-center @@ -1001,7 +1001,7 @@ Docker Daemon #### 3. **Docker Compose Deployment** (Multi-user) -```text +```bash # provisioning/platform/docker-compose.yaml services: orchestrator: @@ -1039,7 +1039,7 @@ services: #### 4. **Kubernetes Deployment** (CI/CD, Enterprise) -```text +```yaml # Namespace: provisioning-system apiVersion: apps/v1 kind: Deployment @@ -1085,7 +1085,7 @@ spec: #### 5. **Remote Deployment** (All modes) -```text +```bash # Connect to remotely-running services services: orchestrator: @@ -1108,7 +1108,7 @@ services: #### 1. 
**Hybrid Language Integration** (Rust ↔ Nushell) -```text +```bash Rust Orchestrator ↓ (HTTP API) Nushell CLI @@ -1124,7 +1124,7 @@ File-based Task Queue #### 2. **Provider Abstraction** -```text +```bash Unified Provider Interface ├── create_server(config) -> Server ├── delete_server(id) -> bool @@ -1139,7 +1139,7 @@ Provider Implementations: #### 3. **OCI Registry Integration** -```text +```bash Extension Development ↓ Package (oci-package.nu) @@ -1157,7 +1157,7 @@ Load into Workspace #### 4. **Gitea Integration** (Multi-user, Enterprise) -```text +```bash Workspace Operations ↓ Check Lock Status (Gitea API) @@ -1179,7 +1179,7 @@ Release Lock (Delete lock file) #### 5. **CoreDNS Integration** -```text +```bash Service Registration ↓ Update CoreDNS Corefile diff --git a/docs/src/architecture/config-loading-architecture.md b/docs/src/architecture/config-loading-architecture.md index 20898be..3326f60 100644 --- a/docs/src/architecture/config-loading-architecture.md +++ b/docs/src/architecture/config-loading-architecture.md @@ -86,7 +86,7 @@ Original comprehensive loader that handles: ## Module Dependency Graph -```text +```bash Help/Status Commands ↓ loader-lazy.nu @@ -110,7 +110,7 @@ loader.nu (full configuration) ### Fast Path (Help Commands) -```text +```bash # Uses minimal loader - 23ms ./provisioning help infrastructure ./provisioning workspace list @@ -119,7 +119,7 @@ loader.nu (full configuration) ### Medium Path (Status Operations) -```text +```bash # Uses minimal loader with some full config - ~50ms ./provisioning status ./provisioning workspace active @@ -128,7 +128,7 @@ loader.nu (full configuration) ### Full Path (Infrastructure Operations) -```text +```bash # Uses full loader - ~150ms ./provisioning server create --infra myinfra ./provisioning taskserv create kubernetes @@ -139,7 +139,7 @@ ### Lazy Loading Decision Logic -```text +```nushell # In loader-lazy.nu let is_fast_command = ( $command == "help" or @@ -160,7 +160,7
@@ if $is_fast_command { The minimal loader returns a lightweight config record: -```text +```nushell { workspace: { name: "librecloud" @@ -247,7 +247,7 @@ Only add if: ### Performance Testing -```text +```bash # Benchmark minimal loader time nu -n -c "use loader-minimal.nu *; get-active-workspace" diff --git a/docs/src/architecture/database-and-config-architecture.md b/docs/src/architecture/database-and-config-architecture.md index c9ad8e7..caaf746 100644 --- a/docs/src/architecture/database-and-config-architecture.md +++ b/docs/src/architecture/database-and-config-architecture.md @@ -13,7 +13,7 @@ Control-Center uses **SurrealDB with kv-mem backend**, an embedded in-memory dat ### Database Configuration -```text +```toml [database] url = "memory" # In-memory backend namespace = "control_center" @@ -24,7 +24,7 @@ database = "main" **Production Alternative**: Switch to remote WebSocket connection for persistent storage: -```text +```toml [database] url = "ws://localhost:8000" namespace = "control_center" @@ -79,7 +79,7 @@ Control-Center also supports (via Cargo.toml dependencies): Orchestrator uses simple file-based storage by default: -```text +```toml [orchestrator.storage] type = "filesystem" # Default backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs" @@ -87,7 +87,7 @@ backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs" **Resolved Path**: -```text +```bash {{workspace.path}}/.orchestrator/data/queue.rkvs ``` @@ -95,7 +95,7 @@ backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs" For production deployments, switch to SurrealDB: -```text +```toml [orchestrator.storage] type = "surrealdb-server" # or surrealdb-embedded @@ -115,7 +115,7 @@ password = "secret" All services load configuration in this order (priority: low → high): -```text +```bash 1. System Defaults provisioning/config/config.defaults.toml 2. Service Defaults provisioning/platform/{service}/config.defaults.toml 3.
Workspace Config workspace/{name}/config/provisioning.yaml @@ -128,7 +128,7 @@ All services load configuration in this order (priority: low → high): Configs support dynamic variable interpolation: -```text +```toml [paths] base = "/Users/Akasha/project-provisioning/provisioning" data_dir = "{{paths.base}}/data" # Resolves to: /Users/.../data @@ -175,7 +175,7 @@ All services use workspace-aware paths: **Orchestrator**: -```text +```toml [orchestrator.paths] base = "{{workspace.path}}/.orchestrator" data_dir = "{{orchestrator.paths.base}}/data" @@ -185,7 +185,7 @@ queue_dir = "{{orchestrator.paths.data_dir}}/queue" **Control-Center**: -```text +```toml [paths] base = "{{workspace.path}}/.control-center" data_dir = "{{paths.base}}/data" @@ -194,7 +194,7 @@ logs_dir = "{{paths.base}}/logs" **Result** (workspace: `workspace-librecloud`): -```text +```bash workspace-librecloud/ ├── .orchestrator/ │ ├── data/ @@ -214,7 +214,7 @@ Any config value can be overridden via environment variables: ### Control-Center -```text +```bash # Override server port export CONTROL_CENTER_SERVER_PORT=8081 @@ -227,7 +227,7 @@ export CONTROL_CENTER_JWT_ISSUER="my-issuer" ### Orchestrator -```text +```bash # Override orchestrator port export ORCHESTRATOR_SERVER_PORT=8080 @@ -241,7 +241,7 @@ export ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS=10 ### Naming Convention -```text +```bash {SERVICE}_{SECTION}_{KEY} = value ``` @@ -259,7 +259,7 @@ export ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS=10 **Container paths** (resolved inside container): -```text +```toml [paths] base = "/app/provisioning" data_dir = "/data" # Mounted volume @@ -268,7 +268,7 @@ logs_dir = "/var/log/orchestrator" # Mounted volume **Docker Compose volumes**: -```text +```yaml services: orchestrator: volumes: @@ -289,7 +289,7 @@ volumes: **Host paths** (macOS/Linux): -```text +```toml [paths] base = "/Users/Akasha/project-provisioning/provisioning" data_dir = "{{workspace.path}}/.orchestrator/data" @@ -302,7 +302,7 @@ logs_dir =
"{{workspace.path}}/.orchestrator/logs" Check current configuration: -```text +```bash # Show effective configuration provisioning env @@ -322,7 +322,7 @@ PROVISIONING_DEBUG=true ./orchestrator --show-config **Cosmian KMS** uses its own database (when deployed): -```text +```bash # KMS database location (Docker) /data/kms.db # SQLite database inside KMS container @@ -332,7 +332,7 @@ PROVISIONING_DEBUG=true ./orchestrator --show-config KMS also integrates with Control-Center's KMS hybrid backend (local + remote): -```text +```toml [kms] mode = "hybrid" # local, remote, or hybrid diff --git a/docs/src/architecture/design-principles.md b/docs/src/architecture/design-principles.md index a46029e..ba5b026 100644 --- a/docs/src/architecture/design-principles.md +++ b/docs/src/architecture/design-principles.md @@ -32,7 +32,7 @@ without code changes. Hardcoded values defeat the purpose of IaC and create main **Example**: -```text +```toml # ✅ PAP Compliant - Configuration-driven [providers.aws] regions = ["us-west-2", "us-east-1"] @@ -62,7 +62,7 @@ configuration management and domain-specific operations. **Language Responsibility Matrix**: -```text +```bash Rust Layer: ├── Workflow orchestration and coordination ├── REST API servers and HTTP endpoints @@ -111,7 +111,7 @@ flexibility while maintaining predictability. **Domain Organization**: -```text +```bash ├── core/ # Core system and library functions ├── platform/ # High-performance coordination layer ├── provisioning/ # Main business logic with providers and services @@ -160,7 +160,7 @@ evolution. **Recovery Strategies**: -```text +```bash Operation Level: ├── Atomic operations with rollback ├── Retry logic with exponential backoff @@ -203,7 +203,7 @@ gains. **Security Implementation**: -```text +```bash Authentication & Authorization: ├── API authentication for external access ├── Role-based access control for operations @@ -234,7 +234,7 @@ the system.
**Testing Strategy**: -```text +```bash Unit Testing: ├── Configuration validation tests ├── Individual component tests @@ -272,7 +272,7 @@ System Testing: **Error Categories**: -```text +```bash Configuration Errors: ├── Invalid configuration syntax ├── Missing required configuration @@ -300,7 +300,7 @@ System Errors: **Observability Implementation**: -```text +```bash Logging: ├── Structured JSON logging ├── Configurable log levels @@ -358,7 +358,7 @@ Monitoring: **Debt Management Strategy**: -```text +```bash Assessment: ├── Regular code quality reviews ├── Performance profiling and optimization @@ -382,7 +382,7 @@ Improvement: **Trade-off Categories**: -```text +```bash Performance vs. Maintainability: ├── Rust coordination layer for performance ├── Nushell business logic for maintainability diff --git a/docs/src/architecture/ecosystem-integration.md b/docs/src/architecture/ecosystem-integration.md index 0aea545..d88dd8b 100644 --- a/docs/src/architecture/ecosystem-integration.md +++ b/docs/src/architecture/ecosystem-integration.md @@ -19,7 +19,7 @@ This document describes the **hybrid selective integration** of prov-ecosystem a ### Three-Layer Integration -```text +```bash ┌─────────────────────────────────────────────┐ │ Provisioning CLI (provisioning/core/cli/) │ │ ✅ 80+ command shortcuts │ @@ -70,7 +70,7 @@ This document describes the **hybrid selective integration** of prov-ecosystem a **Key Types**: -```text +```rust pub enum ContainerRuntime { Docker, Podman, @@ -85,7 +85,7 @@ pub struct ComposeAdapter { ... } **Nushell Functions**: -```text +```nushell runtime-detect # Auto-detect available runtime runtime-exec # Execute command in detected runtime runtime-compose # Adapt docker-compose for runtime @@ -112,7 +112,7 @@ runtime-list # List all available runtimes **Key Types**: -```text +```rust pub struct SshConfig { ... } pub struct SshPool { ... 
} pub enum DeploymentStrategy { @@ -124,7 +124,7 @@ pub enum DeploymentStrategy { **Nushell Functions**: -```text +```nushell ssh-pool-connect # Create SSH pool connection ssh-pool-exec # Execute on SSH pool ssh-pool-status # Check pool status @@ -153,7 +153,7 @@ ssh-circuit-breaker-status # Check circuit breaker **Key Types**: -```text +```rust pub enum BackupBackend { Restic, Borg, @@ -169,7 +169,7 @@ pub struct BackupManager { ... } **Nushell Functions**: -```text +```nushell backup-create # Create backup job backup-restore # Restore from snapshot backup-list # List snapshots @@ -199,7 +199,7 @@ backup-status # Check backup status **Key Types**: -```text +```rust pub enum GitProvider { GitHub, GitLab, @@ -212,7 +212,7 @@ pub struct GitOpsOrchestrator { ... } **Nushell Functions**: -```text +```nushell gitops-rules # Load rules from config gitops-watch # Watch for Git events gitops-trigger # Manually trigger deployment @@ -243,7 +243,7 @@ gitops-status # Get GitOps status **Nushell Functions**: -```text +```nushell service-install # Install service service-start # Start service service-stop # Stop service @@ -300,7 +300,7 @@ All implementations follow project standards: ## File Structure -```text +```bash provisioning/ ├── platform/integrations/ │ └── provisioning-bridge/ # Rust bridge crate @@ -338,7 +338,7 @@ provisioning/ ### Runtime Abstraction -```text +```nushell # Auto-detect available runtime let runtime = (runtime-detect) @@ -351,7 +351,7 @@ let compose_cmd = (runtime-compose "./docker-compose.yml") ### SSH Advanced -```text +```nushell # Connect to SSH pool let pool = (ssh-pool-connect "server01.example.com" "root" --port 22) @@ -364,7 +364,7 @@ ssh-circuit-breaker-status ### Backup System -```text +```nushell # Schedule regular backups backup-schedule "daily-app-backup" "0 2 * * *" --paths ["/opt/app" "/var/lib/app"] @@ -381,7 +381,7 @@ backup-restore "snapshot-001" --restore_path "."
### GitOps Events -```text +```nushell # Load GitOps rules let rules = (gitops-rules "./gitops-rules.yaml") @@ -394,7 +394,7 @@ gitops-trigger "deploy-app" --environment "prod" ### Service Management -```text +```nushell # Install service service-install "my-app" "/usr/local/bin/my-app" --user "appuser" @@ -418,7 +418,7 @@ service-restart-policy "my-app" --policy "on-failure" --delay-secs 5 Existing `provisioning` CLI will gain new command tree: -```text +```bash provisioning runtime detect|exec|compose|info|list provisioning ssh pool connect|exec|status|strategies provisioning backup create|restore|list|schedule|retention|status @@ -430,7 +430,7 @@ provisioning service install|start|stop|restart|status|list|policy|detect-init All integrations use Nickel schemas from `provisioning/schemas/integrations/`: -```text +```nickel let { IntegrationConfig } = import "provisioning/integrations.ncl" in { runtime = { ... }, @@ -445,7 +445,7 @@ let { IntegrationConfig } = import "provisioning/integrations.ncl" in Nushell plugins can be created for performance-critical operations: -```text +```nushell provisioning plugin list # [installed] # nu_plugin_runtime @@ -460,7 +460,7 @@ provisioning plugin list ### Rust Tests -```text +```bash cd provisioning/platform/integrations/provisioning-bridge cargo test --all cargo test -p provisioning-bridge --lib @@ -469,7 +469,7 @@ cargo test -p provisioning-bridge --doc ### Nushell Tests -```text +```nushell nu provisioning/core/nulib/integrations/runtime.nu nu provisioning/core/nulib/integrations/ssh_advanced.nu ``` diff --git a/docs/src/architecture/integration-patterns.md b/docs/src/architecture/integration-patterns.md index c2c12e8..cf777df 100644 --- a/docs/src/architecture/integration-patterns.md +++ b/docs/src/architecture/integration-patterns.md @@ -15,7 +15,7 @@ workflows, and enable extensible functionality.
This document outlines the key i **Implementation**: -```text +```rust use tokio::process::Command; use serde_json; @@ -35,7 +35,7 @@ pub async fn execute_nushell_workflow( **Data Exchange Format**: -```text +```json { "status": "success" | "error" | "partial", "result": { @@ -54,7 +54,7 @@ pub async fn execute_nushell_workflow( **Implementation**: -```text +```nushell def submit-workflow [workflow: record] -> record { let payload = $workflow | to json @@ -68,7 +68,7 @@ def submit-workflow [workflow: record] -> record { **API Contract**: -```text +```json { "workflow_id": "wf-456", "name": "multi_cloud_deployment", @@ -86,7 +86,7 @@ def submit-workflow [workflow: record] -> record { **Interface Definition**: -```text +```nushell # Standard provider interface that all providers must implement export def list-servers [] -> table { # Provider-specific implementation @@ -107,7 +107,7 @@ export def get-server [id: string] -> record { **Configuration Integration**: -```text +```toml [providers.aws] region = "us-west-2" credentials_profile = "default" @@ -125,7 +125,7 @@ network_mode = "bridge" #### Provider Discovery and Loading -```text +```nushell def load-providers [] -> table { let provider_dirs = glob "providers/*/nulib" @@ -150,7 +150,7 @@ def load-providers [] -> table { **Implementation**: -```text +```nushell def resolve-configuration [context: record] -> record { let base_config = open config.defaults.toml let user_config = if ("config.user.toml" | path exists) { @@ -173,7 +173,7 @@ def resolve-configuration [context: record] -> record { #### Variable Interpolation Pattern -```text +```nushell def interpolate-variables [config: record] -> record { let interpolations = { "{{paths.base}}": ($env.PWD), @@ -200,7 +200,7 @@ def interpolate-variables [config: record] -> record { **Implementation (Rust)**: -```text +```rust use petgraph::{Graph, Direction}; use std::collections::HashMap; @@ -229,7 +229,7 @@ impl DependencyResolver { #### Parallel Execution Pattern -```text
+```rust use tokio::task::JoinSet; use futures::stream::{FuturesUnordered, StreamExt}; @@ -265,7 +265,7 @@ pub async fn execute_parallel_batch( **Implementation**: -```text +```rust #[derive(Serialize, Deserialize)] pub struct WorkflowCheckpoint { pub workflow_id: String, @@ -309,7 +309,7 @@ impl CheckpointManager { #### Rollback Pattern -```text +```rust pub struct RollbackManager { rollback_stack: Vec, } @@ -349,7 +349,7 @@ impl RollbackManager { **Event Definition**: -```text +```rust #[derive(Serialize, Deserialize, Clone, Debug)] pub enum SystemEvent { WorkflowStarted { workflow_id: String, name: String }, @@ -363,7 +363,7 @@ pub enum SystemEvent { **Event Bus Implementation**: -```text +```rust use tokio::sync::broadcast; pub struct EventBus { @@ -392,7 +392,7 @@ impl EventBus { #### Extension Discovery and Loading -```text +```nushell def discover-extensions [] -> table { let extension_dirs = glob "extensions/*/extension.toml" @@ -417,7 +417,7 @@ def discover-extensions [] -> table { #### Extension Interface Pattern -```text +```nushell # Standard extension interface export def extension-info [] -> record { { @@ -452,7 +452,7 @@ export def extension-deactivate [] -> nothing { **Base API Structure**: -```text +```rust use axum::{ extract::{Path, State}, response::Json, @@ -473,7 +473,7 @@ pub fn create_api_router(state: AppState) -> Router { **Standard Response Format**: -```text +```json { "status": "success" | "error" | "pending", "data": { ...
}, @@ -494,7 +494,7 @@ pub fn create_api_router(state: AppState) -> Router { ### Structured Error Pattern -```text +```rust #[derive(thiserror::Error, Debug)] pub enum ProvisioningError { #[error("Configuration error: {message}")] @@ -513,7 +513,7 @@ pub enum ProvisioningError { ### Error Recovery Pattern -```text +```nushell def with-retry [operation: closure, max_attempts: int = 3] { mut attempts = 0 mut last_error = null @@ -540,7 +540,7 @@ def with-retry [operation: closure, max_attempts: int = 3] { ### Caching Strategy Pattern -```text +```rust use std::sync::Arc; use tokio::sync::RwLock; use std::collections::HashMap; @@ -583,7 +583,7 @@ impl Cache { ### Streaming Pattern for Large Data -```text +```nushell def process-large-dataset [source: string] -> nothing { # Stream processing instead of loading entire dataset open $source @@ -600,7 +600,7 @@ def process-large-dataset [source: string] -> nothing { ### Integration Test Pattern -```text +```rust #[cfg(test)] mod integration_tests { use super::*; diff --git a/docs/src/architecture/multi-repo-architecture.md b/docs/src/architecture/multi-repo-architecture.md index 6fa4fde..e32d672 100644 --- a/docs/src/architecture/multi-repo-architecture.md +++ b/docs/src/architecture/multi-repo-architecture.md @@ -24,7 +24,7 @@ distributed extension management through OCI registry integration.
**Purpose**: Core system functionality - CLI, libraries, base schemas -```text +```bash provisioning-core/ ├── core/ │ ├── cli/ # Command-line interface @@ -82,7 +82,7 @@ provisioning-core/ **Purpose**: All provider, taskserv, and cluster extensions -```text +```bash provisioning-extensions/ ├── providers/ │ ├── aws/ @@ -143,7 +143,7 @@ Each extension published separately as OCI artifact: **Extension Manifest** (`manifest.yaml`): -```text +```yaml name: kubernetes type: taskserv version: 1.28.0 @@ -183,7 +183,7 @@ min_provisioning_version: "3.0.0" **Purpose**: Platform services (orchestrator, control-center, MCP server, API gateway) -```text +```bash provisioning-platform/ ├── orchestrator/ # Rust orchestrator service │ ├── src/ @@ -238,7 +238,7 @@ Standard Docker images in OCI registry: ### Registry Structure -```text +```bash OCI Registry (localhost:5000 or harbor.company.com) ├── provisioning-core/ │ ├── v3.5.0 # Core system artifact @@ -263,7 +263,7 @@ OCI Registry (localhost:5000 or harbor.company.com) Each extension packaged as OCI artifact: -```text +```bash kubernetes-1.28.0.tar.gz ├── schemas/ # Nickel schemas │ ├── kubernetes.ncl @@ -291,7 +291,7 @@ kubernetes-1.28.0.tar.gz **File**: `workspace/config/provisioning.yaml` -```text +```yaml # Core system dependency dependencies: core: @@ -363,7 +363,7 @@ The system resolves dependencies in this order: ### Dependency Resolution Commands -```text +```bash # Resolve and install all dependencies provisioning dep resolve @@ -386,7 +386,7 @@ provisioning dep tree kubernetes ### CLI Commands -```text +```bash # Pull extension from OCI registry provisioning oci pull kubernetes:1.28.0 @@ -419,7 +419,7 @@ provisioning oci copy ### OCI Configuration -```text +```toml # Show OCI configuration provisioning oci config @@ -442,7 +442,7 @@ provisioning oci config ### 1. 
Develop Extension -```text +```bash # Create new extension from template provisioning generate extension taskserv redis @@ -466,7 +466,7 @@ provisioning generate extension taskserv redis ### 2. Test Extension Locally -```text +```bash # Load extension from local path provisioning module load taskserv workspace_dev redis --source local @@ -479,7 +479,7 @@ provisioning test extension redis ### 3. Package Extension -```text +```bash # Validate extension structure provisioning oci package validate ./extensions/taskservs/redis @@ -491,7 +491,7 @@ provisioning oci package ./extensions/taskservs/redis ### 4. Publish Extension -```text +```bash # Login to registry (one-time) provisioning oci login localhost:5000 @@ -511,7 +511,7 @@ provisioning oci tags redis ### 5. Use Published Extension -```text +```bash # Add to workspace configuration # workspace/config/provisioning.yaml: # dependencies: @@ -534,7 +534,7 @@ provisioning dep resolve **Using Zot (lightweight OCI registry)**: -```text +```bash # Start local OCI registry provisioning oci-registry start @@ -555,7 +555,7 @@ provisioning oci-registry status **Using Harbor**: -```text +```bash # workspace/config/provisioning.yaml dependencies: registry: @@ -591,7 +591,7 @@ dependencies: ### Phase 2: Gradual Migration -```text +```bash # Migrate extensions one by one for ext in (ls provisioning/extensions/taskservs); do provisioning oci publish $ext.name diff --git a/docs/src/architecture/multi-repo-strategy.md b/docs/src/architecture/multi-repo-strategy.md index afd2363..930d4bc 100644 --- a/docs/src/architecture/multi-repo-strategy.md +++ b/docs/src/architecture/multi-repo-strategy.md @@ -79,7 +79,7 @@ dependency model. 
**Contents:** -```text +```bash provisioning-core/ ├── nulib/ # Nushell libraries │ ├── lib_provisioning/ # Core library functions @@ -120,7 +120,7 @@ provisioning-core/ **Installation Path:** -```text +```bash /usr/local/ ├── bin/provisioning ├── lib/provisioning/ @@ -135,7 +135,7 @@ provisioning-core/ **Contents:** -```text +```bash provisioning-platform/ ├── orchestrator/ # Rust orchestrator │ ├── src/ @@ -180,7 +180,7 @@ provisioning-platform/ **Installation Path:** -```text +```bash /usr/local/ ├── bin/ │ ├── provisioning-orchestrator @@ -203,7 +203,7 @@ provisioning-platform/ **Contents:** -```text +```bash provisioning-extensions/ ├── registry/ # Extension registry │ ├── index.json # Searchable index @@ -252,7 +252,7 @@ provisioning-extensions/ **Installation:** -```text +```bash # Install extension via core CLI provisioning extension install mongodb provisioning extension install azure-provider @@ -261,7 +261,7 @@ provisioning extension install azure-provider **Extension Structure:** Each extension is self-contained: -```text +```bash mongodb/ ├── manifest.toml # Extension metadata ├── taskserv.nu # Implementation @@ -279,7 +279,7 @@ mongodb/ **Contents:** -```text +```bash provisioning-workspace/ ├── templates/ # Workspace templates │ ├── minimal/ # Minimal starter @@ -315,7 +315,7 @@ provisioning-workspace/ **Usage:** -```text +```bash # Create workspace from template provisioning workspace init my-project --template kubernetes @@ -333,7 +333,7 @@ provisioning workspace init **Contents:** -```text +```bash provisioning-distribution/ ├── release-automation/ # Automated release workflows │ ├── build-all.nu # Build all packages @@ -385,7 +385,7 @@ provisioning-distribution/ ### Package-Based Dependencies (Not Submodules) -```text +```bash ┌─────────────────────────────────────────────────────────────┐ │ provisioning-distribution │ │ (Release orchestration & registry) │ @@ -416,7 +416,7 @@ provisioning-distribution/ **Method:** Loose coupling via CLI + REST 
API -```text +```nushell # Platform calls Core CLI (subprocess) def create-server [name: string] { # Orchestrator executes Core CLI @@ -431,7 +431,7 @@ def submit-workflow [workflow: record] { **Version Compatibility:** -```text +```toml # platform/Cargo.toml [package.metadata.provisioning] core-version = "^3.0" # Compatible with core 3.x @@ -441,7 +441,7 @@ core-version = "^3.0" # Compatible with core 3.x **Method:** Plugin/module system -```text +```toml # Extension manifest # extensions/mongodb/manifest.toml [extension] @@ -465,7 +465,7 @@ provisioning extension install mongodb **Method:** Git templates or package templates -```text +```bash # Option 1: GitHub template repository gh repo create my-infra --template provisioning-workspace cd my-infra @@ -486,7 +486,7 @@ provisioning workspace create my-infra --template kubernetes Each repository maintains independent semantic versioning: -```text +```bash provisioning-core: 3.2.1 provisioning-platform: 2.5.3 provisioning-extensions: (per-extension versioning) @@ -497,7 +497,7 @@ provisioning-workspace: 1.4.0 **`provisioning-distribution/version-management/versions.toml`:** -```text +```toml # Version compatibility matrix [compatibility] @@ -536,7 +536,7 @@ workspace = "1.3.0" **Coordinated releases** for major versions: -```text +```bash # Major release: All repos release together provisioning-core: 3.0.0 provisioning-platform: 2.0.0 @@ -553,7 +553,7 @@ provisioning-platform: 2.1.0 (improves orchestrator, core stays 3.1.x) ### Working on Single Repository -```text +```bash # Developer working on core only git clone https://github.com/yourorg/provisioning-core cd provisioning-core @@ -574,7 +574,7 @@ just install-dev ### Working Across Repositories -```text +```bash # Scenario: Adding new feature requiring core + platform changes # 1.
Clone both repositories @@ -615,7 +615,7 @@ cargo test ### Testing Cross-Repo Integration -```text +```bash # Integration tests in provisioning-distribution cd provisioning-distribution @@ -636,7 +636,7 @@ just test-bundle stable-3.3 Each repository releases independently: -```text +```bash # Core release cd provisioning-core git tag v3.2.1 @@ -656,7 +656,7 @@ git push --tags Distribution repository creates tested bundles: -```text +```bash cd provisioning-distribution # Create bundle @@ -679,7 +679,7 @@ just publish-bundle stable-3.2 #### Option 1: Bundle Installation (Recommended for Users) -```text +```bash # Install stable bundle (easiest) curl -fsSL https://get.provisioning.io | sh @@ -691,7 +691,7 @@ curl -fsSL https://get.provisioning.io | sh #### Option 2: Individual Component Installation -```text +```bash # Install only core (minimal) curl -fsSL https://get.provisioning.io/core | sh @@ -704,7 +704,7 @@ provisioning extension install mongodb #### Option 3: Custom Combination -```text +```bash # Install specific versions provisioning install core@3.1.0 provisioning install platform@2.4.0 @@ -760,7 +760,7 @@ provisioning install platform@2.4.0 **Core CI (`provisioning-core/.github/workflows/ci.yml`):** -```text +```yaml name: Core CI on: [push, pull_request] @@ -792,7 +792,7 @@ jobs: **Platform CI (`provisioning-platform/.github/workflows/ci.yml`):** -```text +```yaml name: Platform CI on: [push, pull_request] @@ -829,7 +829,7 @@ jobs: **Distribution CI (`provisioning-distribution/.github/workflows/integration.yml`):** -```text +```yaml name: Integration Tests on: @@ -862,7 +862,7 @@ jobs: ### Monorepo Structure -```text +```bash provisioning/ (One repo, ~500 MB) ├── core/ (Nushell) ├── platform/ (Rust) @@ -873,7 +873,7 @@ provisioning/ (One repo, ~500 MB) ### Multi-Repo Structure -```text +```bash provisioning-core/ (Repo 1, ~50 MB) ├── nulib/ ├── cli/ diff --git a/docs/src/architecture/nickel-executable-examples.md 
b/docs/src/architecture/nickel-executable-examples.md index c7601db..e9d04b3 100644 --- a/docs/src/architecture/nickel-executable-examples.md +++ b/docs/src/architecture/nickel-executable-examples.md @@ -10,7 +10,7 @@ ### Prerequisites -```text +```bash # Install Nickel brew install nickel # or from source: https://nickel-lang.org/getting-started/ @@ -21,7 +21,7 @@ nickel --version # Should be 1.0+ ### Directory Structure for Examples -```text +```bash mkdir -p ~/nickel-examples/{simple,complex,production} cd ~/nickel-examples ``` @@ -32,7 +32,7 @@ cd ~/nickel-examples ### Step 1: Create Contract File -```text +```nickel cat > simple/server_contracts.ncl << 'EOF' { ServerConfig = { @@ -47,7 +47,7 @@ EOF ### Step 2: Create Defaults File -```text +```nickel cat > simple/server_defaults.ncl << 'EOF' { web_server = { @@ -76,7 +76,7 @@ EOF ### Step 3: Create Main Module with Hybrid Interface -```text +```nickel cat > simple/server.ncl << 'EOF' let contracts = import "./server_contracts.ncl" in let defaults = import "./server_defaults.ncl" in @@ -110,7 +110,7 @@ EOF ### Test: Export and Validate JSON -```text +```bash cd simple/ # Export to JSON @@ -133,7 +133,7 @@ nickel export server.ncl --format json | jq '.production_web_server.cpu_cores' ### Usage in Consumer Module -```text +```nickel cat > simple/consumer.ncl << 'EOF' let server = import "./server.ncl" in @@ -162,14 +162,14 @@ nickel export consumer.ncl --format json | jq '.staging_web' ### Create Provider Structure -```text +```bash mkdir -p complex/upcloud/{contracts,defaults,main} cd complex/upcloud ``` ### Provider Contracts -```text +```nickel cat > upcloud_contracts.ncl << 'EOF' { StorageBackup = { @@ -196,7 +196,7 @@ EOF ### Provider Defaults -```text +```nickel cat > upcloud_defaults.ncl << 'EOF' { backup = { @@ -223,7 +223,7 @@ EOF ### Provider Main Module -```text +```nickel cat > upcloud_main.ncl << 'EOF' let contracts = import "./upcloud_contracts.ncl" in let defaults = import
"./upcloud_defaults.ncl" in @@ -281,7 +281,7 @@ EOF ### Test Provider Configuration -```text +```bash # Export provider config nickel export upcloud_main.ncl --format json | jq '.production_high_availability' @@ -296,7 +296,7 @@ nickel export upcloud_main.ncl --format json | jq '.production_high_availability ### Consumer Using Provider -```text +```nickel cat > upcloud_consumer.ncl << 'EOF' let upcloud = import "./upcloud_main.ncl" in @@ -332,7 +332,7 @@ nickel export upcloud_consumer.ncl --format json | jq '.ha_stack | keys' ### Taskserv Contracts (from wuji) -```text +```nickel cat > production/taskserv_contracts.ncl << 'EOF' { Dependency = { @@ -352,7 +352,7 @@ EOF ### Taskserv Defaults -```text +```nickel cat > production/taskserv_defaults.ncl << 'EOF' { kubernetes = { @@ -407,7 +407,7 @@ EOF ### Taskserv Main -```text +```nickel cat > production/taskserv.ncl << 'EOF' let contracts = import "./taskserv_contracts.ncl" in let defaults = import "./taskserv_defaults.ncl" in @@ -453,7 +453,7 @@ EOF ### Test Taskserv Setup -```text +```bash # Export stack nickel export taskserv.ncl --format json | jq '.wuji_k8s_stack | keys' # Output: ["kubernetes", "cilium", "containerd", "etcd"] @@ -477,7 +477,7 @@ nickel export taskserv.ncl --format json | jq '.staging_stack | length' ### Base Infrastructure -```text +```nickel cat > production/infrastructure.ncl << 'EOF' let servers = import "./server.ncl" in let taskservs = import "./taskserv.ncl" in @@ -520,7 +520,7 @@ nickel export infrastructure.ncl --format json | jq '.production.taskservs | key ### Extending Infrastructure (Nickel Advantage!)
-```text +```nickel cat > production/infrastructure_extended.ncl << 'EOF' let infra = import "./infrastructure.ncl" in @@ -557,7 +557,7 @@ nickel export infrastructure_extended.ncl --format json | ### Validation Functions -```text +```nickel cat > production/validation.ncl << 'EOF' let validate_server = fun server => if server.cpu_cores <= 0 then @@ -586,7 +586,7 @@ EOF ### Using Validations -```text +```nickel cat > production/validated_config.ncl << 'EOF' let server = import "./server.ncl" in let taskserv = import "./taskserv.ncl" in @@ -632,7 +632,7 @@ nickel export validated_config.ncl --format json ### Run All Examples -```text +```bash #!/bin/bash # test_all_examples.sh @@ -679,7 +679,7 @@ echo "=== All Tests Passed ✓ ===" ### Common Nickel Operations -```text +```bash # Validate Nickel syntax nickel export config.ncl @@ -711,7 +711,7 @@ nickel typecheck config.ncl ### Problem: "unexpected token" with multiple let -```text +```nickel # ❌ WRONG let A = {x = 1} let B = {y = 2} @@ -725,7 +725,7 @@ let B = {y = 2} in ### Problem: Function serialization fails -```text +```nickel # ❌ WRONG - function will fail to serialize { get_value = fun x => x + 1, @@ -741,7 +741,7 @@ let B = {y = 2} in ### Problem: Null values cause export issues -```text +```nickel # ❌ WRONG { optional_field = null } diff --git a/docs/src/architecture/nickel-vs-kcl-comparison.md b/docs/src/architecture/nickel-vs-kcl-comparison.md index f5e7d3c..dbe5c7e 100644 --- a/docs/src/architecture/nickel-vs-kcl-comparison.md +++ b/docs/src/architecture/nickel-vs-kcl-comparison.md @@ -8,7 +8,7 @@ ## Quick Decision Tree -```text +``` Need to define infrastructure/schemas? ├─ New platform schemas → Use Nickel ✅ ├─ New provider extensions → Use Nickel ✅ @@ -26,7 +26,7 @@ Need to define infrastructure/schemas?
#### KCL Approach -```text +```kcl schema ServerDefaults: name: str cpu_cores: int = 2 @@ -51,7 +51,7 @@ server_defaults: ServerDefaults = { **server_contracts.ncl**: -```text +```nickel { ServerDefaults = { name | String, @@ -64,7 +64,7 @@ server_defaults: ServerDefaults = { **server_defaults.ncl**: -```text +```nickel { server = { name = "web-server", @@ -77,7 +77,7 @@ server_defaults: ServerDefaults = { **server.ncl**: -```text +```nickel let contracts = import "./server_contracts.ncl" in let defaults = import "./server_defaults.ncl" in @@ -93,7 +93,7 @@ let defaults = import "./server_defaults.ncl" in **Usage**: -```text +```nickel let server = import "./server.ncl" in # Simple override @@ -117,7 +117,7 @@ my_custom = server.defaults.server & { #### KCL (from `provisioning/extensions/providers/upcloud/nickel/` - legacy approach) -```text +```kcl schema StorageBackup: backup_id: str frequency: str @@ -145,7 +145,7 @@ provision_upcloud: ProvisionUpcloud = { **upcloud_contracts.ncl**: -```text +```nickel { StorageBackup = { backup_id | String, @@ -170,7 +170,7 @@ provision_upcloud: ProvisionUpcloud = { **upcloud_defaults.ncl**: -```text +```nickel { storage_backup = { backup_id = "", @@ -195,7 +195,7 @@ provision_upcloud: ProvisionUpcloud = { **upcloud_main.ncl** (from actual codebase): -```text +```nickel let contracts = import "./upcloud_contracts.ncl" in let defaults = import "./upcloud_defaults.ncl" in @@ -219,7 +219,7 @@ let defaults = import "./upcloud_defaults.ncl" in **Usage Comparison**: -```text +```nickel # KCL way (KCL no lo permite bien) # Cannot easily extend without schema modification @@ -288,7 +288,7 @@ production_stack = upcloud.make_provision_upcloud { **KCL (Legacy)**: -```text +```kcl schema ServerConfig: name: str zone: str = "us-nyc1" @@ -300,7 +300,7 @@ web_server: ServerConfig = { **Nickel (Recommended)**: -```text +```nickel let defaults = import "./server_defaults.ncl" in web_server = defaults.make_server { name =
"web-01" } ``` @@ -313,7 +313,7 @@ web_server = defaults.make_server { name = "web-01" } **KCL** (from wuji infrastructure): -```text +```kcl schema TaskServDependency: name: str wait_for_health: bool = false @@ -343,7 +343,7 @@ taskserv_cilium: TaskServ = { **Nickel** (from wuji/main.ncl): -```text +```nickel let ts_kubernetes = import "./taskservs/kubernetes.ncl" in let ts_cilium = import "./taskservs/cilium.ncl" in let ts_containerd = import "./taskservs/containerd.ncl" in @@ -367,7 +367,7 @@ let ts_containerd = import "./taskservs/containerd.ncl" in **KCL**: -```text +```kcl schema ServerConfig: name: str # Would need to modify schema! @@ -379,7 +379,7 @@ schema ServerConfig: **Nickel**: -```text +```nickel let server = import "./server.ncl" in # Add custom fields without modifying schema! @@ -402,7 +402,7 @@ my_server = server.defaults.server & { **KCL Approach (Legacy)**: -```text +```kcl schema ServerDefaults: cpu: int = 2 memory: int = 4 @@ -423,7 +423,7 @@ server: Server = { **Nickel Approach**: -```text +```nickel # defaults.ncl server_defaults = { cpu = 2, @@ -449,7 +449,7 @@ server = make_server { **KCL Validation (Legacy)** (compile-time, inline): -```text +```kcl schema Config: timeout: int = 5 @@ -465,7 +465,7 @@ schema Config: **Nickel Validation** (runtime, contract-based): -```text +```nickel # contracts.ncl - Pure type definitions Config = { timeout | Number, @@ -495,7 +495,7 @@ my_config = validate_config { timeout = 10 } **Before (KCL - Legacy)**: -```text +```kcl schema Scheduler: strategy: str = "fifo" workers: int = 4 @@ -513,7 +513,7 @@ scheduler_config: Scheduler = { `scheduler_contracts.ncl`: -```text +```nickel { Scheduler = { strategy | String, @@ -524,7 +524,7 @@ scheduler_config: Scheduler = { `scheduler_defaults.ncl`: -```text +```nickel { scheduler = { strategy = "fifo", @@ -535,7 +535,7 @@ scheduler_config: Scheduler = { `scheduler.ncl`: -```text +```nickel let contracts = import "./scheduler_contracts.ncl"
in let defaults = import "./scheduler_defaults.ncl" in @@ -557,7 +557,7 @@ let defaults = import "./scheduler_defaults.ncl" in **Before (KCL - Legacy)**: -```text +```kcl schema Mode: deployment_type: str = "solo" # "solo" | "multiuser" | "cicd" | "enterprise" @@ -568,7 +568,7 @@ schema Mode: **After (Nickel - Current)**: -```text +```nickel # contracts.ncl { Mode = { @@ -592,7 +592,7 @@ schema Mode: **Before (KCL - Legacy)**: -```text +```kcl schema ServerDefaults: cpu: int = 2 memory: int = 4 @@ -609,7 +609,7 @@ web_server: Server = { **After (Nickel - Current)**: -```text +```nickel # defaults.ncl { server_defaults = { @@ -643,7 +643,7 @@ let make_server = fun config => **Workflow**: -```text +```bash # Edit workspace config cd workspace_librecloud/nickel vim wuji/main.ncl @@ -658,7 +658,7 @@ nickel export wuji/main.ncl # Uses updated schemas **Imports** (relative, central): -```text +```nickel import "../../provisioning/schemas/main.ncl" import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl" ``` @@ -671,7 +671,7 @@ import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl" **Workflow**: -```text +```bash # 1.
Create immutable snapshot provisioning workspace freeze --version "2025-12-15-prod-v1" @@ -696,7 +696,7 @@ provisioning deploy **Frozen Imports** (rewritten to local): -```text +```nickel # Original in workspace import "../../provisioning/schemas/main.ncl" @@ -720,7 +720,7 @@ import "./provisioning/schemas/main.ncl" **Problem**: -```text +```nickel # ❌ WRONG let A = { x = 1 } let B = { y = 2 } @@ -731,7 +731,7 @@ Error: `unexpected token` **Solution**: Use `let...in` chaining: -```text +```nickel # ✅ CORRECT let A = { x = 1 } in let B = { y = 2 } in @@ -744,7 +744,7 @@ let B = { y = 2 } in **Problem**: -```text +```nickel # ❌ WRONG let StorageVol = { mount_path : String | null = null, @@ -757,7 +757,7 @@ Error: `this can't be used as a contract` **Solution**: Use untyped assignment: -```text +```nickel # ✅ CORRECT let StorageVol = { mount_path = null, @@ -770,7 +770,7 @@ let StorageVol = { **Problem**: -```text +```nickel # ❌ WRONG { get_value = fun x => x + 1, @@ -782,7 +782,7 @@ Error: Functions can't be serialized **Solution**: Mark helper functions `not_exported`: -```text +```nickel # ✅ CORRECT { get_value | not_exported = fun x => x + 1, @@ -796,7 +796,7 @@ Error: Functions can't be serialized **Problem**: -```text +```nickel let defaults = import "./defaults.ncl" in defaults.scheduler_config # But file has "scheduler" ``` @@ -805,7 +805,7 @@ Error: `field not found` **Solution**: Use exact field names: -```text +```nickel let defaults = import "./defaults.ncl" in defaults.scheduler # Correct name from defaults.ncl ``` @@ -818,7 +818,7 @@ defaults.scheduler # Correct name from defaults.ncl **Solution**: Check for circular references or missing `not_exported`: -```text +```nickel # ❌ Slow - functions being serialized { validate_config = fun x => x, @@ -917,7 +917,7 @@ Type-safe prompts, forms, and schemas that **bidirectionally integrate with Nick ### Workflow: Nickel Schemas → Interactive UIs → Nickel Output -```text +```bash # 1.
Define schema in Nickel cat > server.ncl << 'EOF' let contracts = import "./contracts.ncl" in @@ -952,7 +952,7 @@ typedialog form --input form.toml --output nickel ### Example: Infrastructure Wizard -```text +```bash # User runs provisioning init --wizard @@ -1014,7 +1014,7 @@ provisioning/schemas/config/workspace_config/main.ncl **File**: `provisioning/schemas/main.ncl` (174 lines) -```text +```nickel # Domain-organized architecture { lib | doc "Core library types" @@ -1054,7 +1054,7 @@ provisioning/schemas/config/workspace_config/main.ncl **Usage**: -```text +```nickel let provisioning = import "./main.ncl" in provisioning.lib.Storage @@ -1069,7 +1069,7 @@ provisioning.operations.workflows **File**: `provisioning/extensions/providers/upcloud/nickel/main.ncl` (38 lines) -```text +```nickel let contracts_lib = import "./contracts.ncl" in let defaults_lib = import "./defaults.ncl" in @@ -1109,7 +1109,7 @@ let defaults_lib = import "./defaults.ncl" in **File**: `workspace_librecloud/nickel/wuji/main.ncl` (53 lines) -```text +```nickel let settings_config = import "./settings.ncl" in let ts_cilium = import "./taskservs/cilium.ncl" in let ts_containerd = import "./taskservs/containerd.ncl" in diff --git a/docs/src/architecture/orchestrator-auth-integration.md b/docs/src/architecture/orchestrator-auth-integration.md index 47d7d9b..e378a09 100644 --- a/docs/src/architecture/orchestrator-auth-integration.md +++ b/docs/src/architecture/orchestrator-auth-integration.md @@ -15,7 +15,7 @@ verification, Cedar authorization, rate limiting, and audit logging) into a cohe The middleware chain is applied in this specific order to ensure proper security: -```text +```bash ┌─────────────────────────────────────────────────────────────────┐ │ Incoming HTTP Request │ └────────────────────────┬────────────────────────────────────────┘ @@ -90,7 +90,7 @@ The middleware chain is applied in this specific order to ensure proper security **Example**: -```text +```rust pub
struct SecurityContext { pub user_id: String, pub token: ValidatedToken, @@ -164,7 +164,7 @@ impl SecurityContext { **Example**: -```text +```rust fn requires_mfa(method: &str, path: &str) -> bool { if path.contains("/production/") { return true; } if method == "DELETE" { return true; } @@ -190,7 +190,7 @@ fn requires_mfa(method: &str, path: &str) -> bool { **Resource Mapping**: -```text +```bash /api/v1/servers/srv-123 → Resource::Server("srv-123") /api/v1/taskserv/kubernetes → Resource::TaskService("kubernetes") /api/v1/cluster/prod → Resource::Cluster("prod") @@ -199,7 +199,7 @@ fn requires_mfa(method: &str, path: &str) -> bool { **Action Mapping**: -```text +```bash GET → Action::Read POST → Action::Create PUT → Action::Update @@ -223,7 +223,7 @@ DELETE → Action::Delete **Configuration**: -```text +```rust pub struct RateLimitConfig { pub max_requests: u32, // for example, 100 pub window_duration: Duration, // for example, 60 seconds @@ -236,7 +236,7 @@ pub struct RateLimitConfig { **Statistics**: -```text +```rust pub struct RateLimitStats { pub total_ips: usize, // Number of tracked IPs pub total_requests: u32, // Total requests made @@ -261,7 +261,7 @@ pub struct RateLimitStats { **Usage Example**: -```text +```rust use provisioning_orchestrator::security_integration::{ SecurityComponents, SecurityConfig }; @@ -292,7 +292,7 @@ let secured_app = apply_security_middleware(app, &security); ### Updated AppState Structure -```text +```rust pub struct AppState { // Existing fields pub task_storage: Arc, @@ -317,7 +317,7 @@ pub struct AppState { ### Initialization in main.rs -```text +```rust #[tokio::main] async fn main() -> Result<()> { let args = Args::parse(); @@ -398,7 +398,7 @@ async fn main() -> Result<()> { ### Step-by-Step Flow -```text +```bash 1.
CLIENT REQUEST ├─ Headers: │ ├─ Authorization: Bearer @@ -485,7 +485,7 @@ async fn main() -> Result<()> { ### Environment Variables -```text +```bash # JWT Configuration JWT_ISSUER=control-center JWT_AUDIENCE=orchestrator @@ -513,7 +513,7 @@ AUDIT_RETENTION_DAYS=365 For development/testing, all security can be disabled: -```text +```rust // In main.rs let security = if env::var("DEVELOPMENT_MODE").unwrap_or("false".to_string()) == "true" { SecurityComponents::disabled(audit_logger.clone()) @@ -544,7 +544,7 @@ Location: `provisioning/platform/orchestrator/tests/security_integration_tests.r **Run Tests**: -```text +```bash cd provisioning/platform/orchestrator cargo test security_integration_tests ``` diff --git a/docs/src/architecture/orchestrator-info.md b/docs/src/architecture/orchestrator-info.md index 7bd5fab..37959f7 100644 --- a/docs/src/architecture/orchestrator-info.md +++ b/docs/src/architecture/orchestrator-info.md @@ -54,7 +54,7 @@ http post { 1. Orchestrator receives and queues: -```text +```rust // Orchestrator receives HTTP request async fn create_server_workflow(request) { let task = Task::new(TaskType::ServerCreate, request); @@ -65,7 +65,7 @@ async fn create_server_workflow(request) { 2. Orchestrator executes via Nushell subprocess: -```text +```rust // Orchestrator spawns Nushell to run business logic async fn execute_task(task: Task) { let output = Command::new("nu") @@ -80,7 +80,7 @@ async fn execute_task(task: Task) { 3. Nushell executes the actual work: -```text +```nushell # servers/create.nu export def create-server [name: string] { diff --git a/docs/src/architecture/orchestrator-integration-model.md b/docs/src/architecture/orchestrator-integration-model.md index 9c9e925..4142d89 100644 --- a/docs/src/architecture/orchestrator-integration-model.md +++ b/docs/src/architecture/orchestrator-integration-model.md @@ -18,7 +18,7 @@ functionality.
**Original Issue:** -```text +```bash Deep call stack in Nushell (template.nu:71) → "Type not supported" errors → Cannot handle complex nested workflows @@ -35,7 +35,7 @@ Deep call stack in Nushell (template.nu:71) ### How It Works Today (Monorepo) -```text +```bash ┌─────────────────────────────────────────────────────────────┐ │ User │ └───────────────────────────┬─────────────────────────────────┘ @@ -80,7 +80,7 @@ Deep call stack in Nushell (template.nu:71) #### Mode 1: Direct Mode (Simple Operations) -```text +```bash # No orchestrator needed provisioning server list provisioning env @@ -92,7 +92,7 @@ provisioning (CLI) → Nushell scripts → Result #### Mode 2: Orchestrated Mode (Complex Operations) -```text +```bash # Uses orchestrator for coordination provisioning server create --orchestrated @@ -104,7 +104,7 @@ provisioning CLI → Orchestrator API → Task Queue → Nushell executor #### Mode 3: Workflow Mode (Batch Operations) -```text +```bash # Complex workflows with dependencies provisioning workflow submit server-cluster.ncl @@ -128,7 +128,7 @@ provisioning CLI → Orchestrator Workflow Engine → Dependency Graph **Nushell CLI (`core/nulib/workflows/server_create.nu`):** -```text +```nushell # Submit server creation workflow to orchestrator export def server_create_workflow [ infra_name: string @@ -153,7 +153,7 @@ export def server_create_workflow [ **Rust Orchestrator (`platform/orchestrator/src/api/workflows.rs`):** -```text +```rust // Receive workflow submission from Nushell CLI #[axum::debug_handler] async fn create_server_workflow( @@ -183,7 +183,7 @@ async fn create_server_workflow( **Flow:** -```text +```bash User → provisioning server create --orchestrated ↓ Nushell CLI prepares task @@ -201,7 +201,7 @@ User can monitor: provisioning workflow monitor **Orchestrator Task Executor (`platform/orchestrator/src/executor.rs`):** -```text +```rust // Orchestrator spawns Nushell to execute business logic pub async fn execute_task(task: Task) -> Result { match 
task.task_type { @@ -233,7 +233,7 @@ pub async fn execute_task(task: Task) -> Result { **Flow:** -```text +```bash Orchestrator task queue has pending task ↓ Executor picks up task @@ -253,7 +253,7 @@ User monitors via: provisioning workflow status **Nushell Calls Orchestrator API:** -```text +```nushell # Nushell script checks orchestrator status during execution export def check-orchestrator-health [] { let response = (http get http://localhost:9090/health) @@ -276,7 +276,7 @@ export def report-progress [task_id: string, progress: int] { **Orchestrator Monitors Nushell Execution:** -```text +```rust // Orchestrator tracks Nushell subprocess pub async fn execute_with_monitoring(task: Task) -> Result { let mut child = Command::new("nu") @@ -332,7 +332,7 @@ pub async fn execute_with_monitoring(task: Task) -> Result { **Runtime Integration (Same as Monorepo):** -```text +```bash User installs both packages: provisioning-core-3.2.1 → /usr/local/lib/provisioning/ provisioning-platform-2.5.3 → /usr/local/bin/provisioning-orchestrator @@ -347,7 +347,7 @@ No code dependencies, just runtime coordination!
**Core Package (`provisioning-core`) config:** -```text +```toml # /usr/local/share/provisioning/config/config.defaults.toml [orchestrator] @@ -363,7 +363,7 @@ fallback_to_direct = true # Fall back if orchestrator down **Platform Package (`provisioning-platform`) config:** -```text +```toml # /usr/local/share/provisioning/platform/config.toml [orchestrator] @@ -382,7 +382,7 @@ task_timeout_seconds = 3600 **Compatibility Matrix (`provisioning-distribution/versions.toml`):** -```text +```toml [compatibility.platform."2.5.3"] core = "^3.2" # Platform 2.5.3 compatible with core 3.2.x min-core = "3.2.0" @@ -402,7 +402,7 @@ orchestrator-api = "v1" **No Orchestrator Needed:** -```text +```bash provisioning server list # Flow: @@ -414,7 +414,7 @@ CLI → servers/list.nu → Query state → Return results **Using Orchestrator:** -```text +```bash provisioning server create --orchestrated --infra wuji # Detailed Flow: @@ -466,7 +466,7 @@ provisioning server create --orchestrated --infra wuji **Complex Workflow:** -```text +```bash provisioning batch submit multi-cloud-deployment.ncl # Workflow contains: @@ -548,8 +548,8 @@ provisioning batch submit multi-cloud-deployment.ncl 1. **Reliable State Management** -```text - Orchestrator maintains: +``` + Orchestrator maintains: - Task queue (survives crashes) - Workflow checkpoints (resume on failure) - Progress tracking (real-time monitoring) @@ -558,8 +558,8 @@ provisioning batch submit multi-cloud-deployment.ncl 1. **Clean Separation** -```text - Orchestrator (Rust): Performance, concurrency, state +``` + Orchestrator (Rust): Performance, concurrency, state Business Logic (Nushell): Providers, taskservs, workflows Each does what it's best at!
@@ -594,7 +592,7 @@ provisioning batch submit multi-cloud-deployment.ncl **User installs bundle:** -```text +```bash curl -fsSL https://get.provisioning.io | sh # Installs: @@ -614,7 +612,7 @@ curl -fsSL https://get.provisioning.io | sh **Core package expects orchestrator:** -```text +```nushell # core/nulib/lib_provisioning/orchestrator/client.nu # Check if orchestrator is running @@ -644,7 +642,7 @@ export def ensure-orchestrator [] { **Platform package executes core scripts:** -```text +```rust // platform/orchestrator/src/executor/nushell.rs pub struct NushellExecutor { @@ -689,7 +687,7 @@ impl NushellExecutor { **`/usr/local/share/provisioning/config/config.defaults.toml`:** -```text +```toml [orchestrator] enabled = true endpoint = "http://localhost:9090" @@ -722,7 +720,7 @@ force_direct = [ **`/usr/local/share/provisioning/platform/config.toml`:** -```text +```toml [server] host = "127.0.0.1" port = 8080 @@ -780,7 +778,7 @@ env_vars = { NU_LIB_DIRS = "/usr/local/lib/provisioning" } The confusing example in the multi-repo doc was **oversimplified**. The real architecture is: -```text +```bash ✅ Orchestrator IS USED and IS ESSENTIAL ✅ Platform (Rust) coordinates Core (Nushell) execution ✅ Loose coupling via CLI + REST API (not code dependencies) diff --git a/docs/src/architecture/package-and-loader-system.md b/docs/src/architecture/package-and-loader-system.md index 22ecac7..3493fd2 100644 --- a/docs/src/architecture/package-and-loader-system.md +++ b/docs/src/architecture/package-and-loader-system.md @@ -41,7 +41,7 @@ Contains fundamental schemas for provisioning: #### Discovery Commands -```text +```bash # Discover available modules module-loader discover taskservs # List all taskservs module-loader discover providers --format yaml # List providers as YAML @@ -58,7 +58,7 @@ module-loader discover clusters redis # Search for redis clusters #### Loading Commands -```text +```bash # Load modules into workspace module-loader load taskservs .
[kubernetes, cilium, containerd] module-loader load providers . [upcloud] @@ -81,7 +81,7 @@ module-loader init workspace/infra/production ### New Workspace Layout -```text +```bash workspace/infra/my-project/ ├── kcl.mod # Package dependencies ├── servers.ncl # Main server configuration @@ -110,7 +110,7 @@ workspace/infra/my-project/ #### Before (Old System) -```text +```bash # Hardcoded relative paths import ../../../kcl/server as server import ../../../extensions/taskservs/kubernetes/kcl/kubernetes as k8s @@ -118,7 +118,7 @@ import ../../../extensions/taskservs/kubernetes/kcl/kubernetes as k8s #### After (New System) -```text +```bash # Package-based imports import provisioning.server as server @@ -130,7 +130,7 @@ import .taskservs.nclubernetes.kubernetes as k8s ### Building Core Package -```text +```bash # Build distributable package ./provisioning/tools/kcl-packager.nu build --version 1.0.0 @@ -145,21 +145,21 @@ import .taskservs.nclubernetes.kubernetes as k8s #### Method 1: Local Installation (Recommended for development) -```text +```toml [dependencies] provisioning = { path = "~/.kcl/packages/provisioning", version = "0.0.1" } ``` #### Method 2: Git Repository (For distributed teams) -```text +```toml [dependencies] provisioning = { git = "https://github.com/your-org/provisioning-kcl", version = "v0.0.1" } ``` #### Method 3: KCL Registry (When available) -```text +```toml [dependencies] provisioning = { version = "0.0.1" } ``` @@ -168,7 +168,7 @@ provisioning = { version = "0.0.1" } ### 1. New Project Setup -```text +```bash # Create workspace from template cp -r provisioning/templates/workspaces/kubernetes ./my-k8s-cluster cd my-k8s-cluster @@ -187,7 +187,7 @@ provisioning server create --infra . --check ### 2. Extension Development -```text +```bash # Create new taskserv mkdir -p extensions/taskservs/my-service/kcl cd extensions/taskservs/my-service/kcl @@ -202,7 +202,7 @@ module-loader discover taskservs # Should find your service ### 3. 
Workspace Migration -```text +```bash # Analyze existing workspace workspace-migrate.nu workspace/infra/old-project dry-run @@ -215,7 +215,7 @@ module-loader validate workspace/infra/old-project ### 4. Multi-Environment Management -```text +```bash # Development environment cd workspace/infra/dev module-loader load taskservs . [redis, postgres] @@ -231,7 +231,7 @@ module-loader load providers . [upcloud, aws] # Multi-cloud ### Listing and Validation -```text +```bash # List loaded modules module-loader list taskservs . module-loader list providers . @@ -246,7 +246,7 @@ workspace-init.nu . info ### Unloading Modules -```text +```bash # Remove specific modules module-loader unload taskservs . redis module-loader unload providers . aws @@ -256,7 +256,7 @@ module-loader unload providers . aws ### Module Information -```text +```bash # Get detailed module info module-loader info taskservs kubernetes module-loader info providers upcloud @@ -267,7 +267,7 @@ module-loader info clusters buildkit ### Pipeline Example -```text +```nushell #!/usr/bin/env nu # deploy-pipeline.nu @@ -292,13 +292,13 @@ provisioning server create --infra $env.WORKSPACE_PATH #### Module Import Errors -```text +```bash Error: module not found ``` **Solution**: Verify modules are loaded and regenerate imports -```text +```bash module-loader list taskservs . module-loader load taskservs . [kubernetes, cilium, containerd] ``` @@ -311,14 +311,14 @@ module-loader load taskservs . [kubernetes, cilium, containerd] **Solution**: Verify core package installation and kcl.mod configuration -```text +```bash kcl-packager.nu install --version latest kcl run --dry-run servers.ncl ``` ### Debug Commands -```text +```bash # Show workspace structure tree -a workspace/infra/my-project @@ -364,25 +364,25 @@ For existing workspaces, follow these steps: ### 1. Backup Current Workspace -```text +```bash cp -r workspace/infra/existing workspace/infra/existing-backup ``` ### 2.
Analyze Migration Requirements -```text +```bash workspace-migrate.nu workspace/infra/existing dry-run ``` ### 3. Perform Migration -```text +```bash workspace-migrate.nu workspace/infra/existing ``` ### 4. Load Required Modules -```text +```bash cd workspace/infra/existing module-loader load taskservs . [kubernetes, cilium] module-loader load providers . [upcloud] @@ -390,14 +390,14 @@ module-loader load providers . [upcloud] ### 5. Test and Validate -```text +```bash kcl run servers.ncl module-loader validate . ``` ### 6. Deploy -```text +```bash provisioning server create --infra . --check ``` diff --git a/docs/src/architecture/repo-dist-analysis.md b/docs/src/architecture/repo-dist-analysis.md index 0cf226c..4659405 100644 --- a/docs/src/architecture/repo-dist-analysis.md +++ b/docs/src/architecture/repo-dist-analysis.md @@ -70,7 +70,7 @@ workflow, and user-friendly distribution. ### 1. Monorepo Structure -```text +```bash project-provisioning/ │ ├── provisioning/ # CORE SYSTEM (distribution source) @@ -246,7 +246,7 @@ project-provisioning/ **Installation:** -```text +```bash /usr/local/ ├── bin/ │ └── provisioning @@ -275,7 +275,7 @@ project-provisioning/ **Installation:** -```text +```bash /usr/local/ ├── bin/ │ ├── provisioning-orchestrator @@ -297,7 +297,7 @@ project-provisioning/ **Installation:** -```text +```bash /usr/local/lib/provisioning/extensions/ ├── taskservs/ ├── clusters/ @@ -317,7 +317,7 @@ project-provisioning/ **Installation:** -```text +```bash ~/.config/nushell/plugins/ ``` @@ -325,7 +325,7 @@ project-provisioning/ #### System Installation (Root) -```text +```bash /usr/local/ ├── bin/ │ ├── provisioning # Main CLI @@ -351,7 +351,7 @@ project-provisioning/ #### User Configuration -```text +```bash ~/.provisioning/ ├── config/ │ └── config.user.toml # User overrides @@ -365,7 +365,7 @@ project-provisioning/ #### Project Workspace -```text +```bash ./workspace/ ├── infra/ # Infrastructure definitions │ ├── my-cluster/ @@ -384,7 +384,7 @@
project-provisioning/ ### Configuration Hierarchy -```text +```bash Priority (highest to lowest): 1. CLI flags --debug, --infra=my-cluster 2. Runtime overrides PROVISIONING_DEBUG=true @@ -401,7 +401,7 @@ Priority (highest to lowest): **`provisioning/tools/build/`:** -```text +```bash build/ ├── build-system.nu # Main build orchestrator ├── package-core.nu # Core packaging @@ -417,7 +417,7 @@ build/ **`provisioning/tools/build/build-system.nu`:** -```text +```nushell #!/usr/bin/env nu # Build system for provisioning project @@ -597,7 +597,7 @@ Total packages: (($packages | length))" **`Justfile`:** -```text +```bash # Provisioning Build System # Use 'just --list' to see all available commands @@ -729,7 +729,7 @@ audit: **`distribution/installers/install.nu`:** -```text +```nushell #!/usr/bin/env nu # Provisioning installation script @@ -986,7 +986,7 @@ export def "main upgrade" [ **`distribution/installers/install.sh`:** -```text +```bash #!/usr/bin/env bash # Provisioning installation script (Bash version) # This script installs Nushell first, then runs the Nushell installer @@ -1113,7 +1113,7 @@ main "$@" **Commands:** -```text +```bash # Backup current state cp -r /Users/Akasha/project-provisioning /Users/Akasha/project-provisioning.backup @@ -1138,7 +1138,7 @@ fd workspace -t d > workspace-dirs.txt **Commands:** -```text +```bash # Create distribution directory mkdir -p distribution/{packages,installers,registry} @@ -1412,7 +1412,7 @@ rm -rf NO/ wrks/ presentations/ #### Option 1: Clean Migration -```text +```bash # Backup current workspace cp -r workspace workspace.backup @@ -1425,7 +1425,7 @@ provisioning workspace migrate --from workspace.backup --to workspace/ #### Option 2: In-Place Migration -```text +```bash # Run migration script provisioning migrate --check # Dry run provisioning migrate # Execute migration @@ -1433,7 +1433,7 @@ provisioning migrate # Execute migration ### For Developers -```text +```bash # Pull latest changes git pull origin main @@
-1608,4 +1608,4 @@ enterprise deployments. - Rust cargo packaging conventions - npm/yarn package management patterns - Homebrew formula best practices -- KCL package management design +- KCL package management design \ No newline at end of file diff --git a/docs/src/architecture/system-overview.md b/docs/src/architecture/system-overview.md index 27bae54..82a482a 100644 --- a/docs/src/architecture/system-overview.md +++ b/docs/src/architecture/system-overview.md @@ -11,7 +11,7 @@ The system solves fundamental technical challenges through architectural innovat ### System Diagram -```text +```bash ┌─────────────────────────────────────────────────────────────────┐ │ User Interface Layer │ ├─────────────────┬─────────────────┬─────────────────────────────┤ @@ -149,7 +149,7 @@ The system solves fundamental technical challenges through architectural innovat **Nickel Workflow Definitions**: -```text +```nickel { batch_workflow = { name = "multi_cloud_deployment", @@ -247,14 +247,14 @@ The system solves fundamental technical challenges through architectural innovat ### Configuration Resolution Flow -```text +```bash 1. Workspace Discovery → 2. Configuration Loading → 3. Hierarchy Merge → 4. Variable Interpolation → 5. Schema Validation → 6. Runtime Application ``` ### Workflow Execution Flow -```text +```bash 1. Workflow Submission → 2. Dependency Analysis → 3. Task Scheduling → 4. Parallel Execution → 5. State Tracking → 6. Result Aggregation → 7. Error Handling → 8. Cleanup/Rollback @@ -262,7 +262,7 @@ The system solves fundamental technical challenges through architectural innovat ### Provider Integration Flow -```text +```bash 1. Provider Discovery → 2. Configuration Validation → 3. Authentication → 4. Resource Planning → 5. Operation Execution → 6. State Persistence → 7.
Result Reporting diff --git a/docs/src/architecture/typedialog-nickel-integration.md b/docs/src/architecture/typedialog-nickel-integration.md index cf47a64..0e8c0ee 100644 --- a/docs/src/architecture/typedialog-nickel-integration.md +++ b/docs/src/architecture/typedialog-nickel-integration.md @@ -11,7 +11,7 @@ TypeDialog generates **type-safe interactive forms** from configuration schemas with **bidirectional Nickel integration**. -```text +```nickel Nickel Schema ↓ TypeDialog Form (Auto-generated) @@ -27,7 +27,7 @@ Nickel output config (Type-safe) ### Three Layers -```text +```nickel CLI/TUI/Web Layer ↓ TypeDialog Form Engine @@ -39,7 +39,7 @@ Schema Contracts ### Data Flow -```text +```nickel Input (Nickel) ↓ Form Definition (TOML) @@ -59,7 +59,7 @@ Output (JSON/YAML/TOML/Nickel) ### Installation -```text +```bash # Clone TypeDialog git clone https://github.com/jesusperezlorenzo/typedialog.git cd typedialog @@ -73,7 +73,7 @@ cargo install --path ./crates/typedialog ### Verify Installation -```text +```bash typedialog --version typedialog --help ``` @@ -84,7 +84,7 @@ typedialog --help ### Step 1: Define Nickel Schema -```text +```nickel # server_config.ncl let contracts = import "./contracts.ncl" in let defaults = import "./defaults.ncl" in @@ -101,7 +101,7 @@ let defaults = import "./defaults.ncl" in ### Step 2: Define TypeDialog Form (TOML) -```text +```toml # server_form.toml [form] title = "Server Configuration" @@ -155,13 +155,13 @@ help = "Select applicable tags" ### Step 3: Render Form (CLI) -```text +```bash typedialog form --config server_form.toml --backend cli ``` **Output**: -```text +```nickel Server Configuration Create a new server configuration @@ -179,14 +179,14 @@ Create a new server configuration ### Step 4: Validate Against Nickel Schema -```text +```nickel # Validation happens automatically # If input matches Nickel contract, proceeds to output ``` ### Step 5: Output to Nickel -```text +```bash typedialog form --config server_form.toml
--output nickel @@ -195,7 +195,7 @@ typedialog form **Output file** (`server_config_output.ncl`): -```text +```nickel { server_name = "web-01", cpu_cores = 4, @@ -216,7 +216,7 @@ You want an interactive CLI wizard for infrastructure provisioning. ### Step 1: Define Nickel Schema for Infrastructure -```text +```nickel # infrastructure_schema.ncl { InfrastructureConfig = { @@ -245,7 +245,7 @@ You want an interactive CLI wizard for infrastructure provisioning. ### Step 2: Create Comprehensive Form -```text +```toml # infrastructure_wizard.toml [form] title = "Infrastructure Provisioning Wizard" @@ -334,7 +334,7 @@ placeholder = "admin@company.com" ### Step 3: Run Interactive Wizard -```text +```bash typedialog form --config infrastructure_wizard.toml --backend tui @@ -343,7 +343,7 @@ typedialog form **Output** (`infrastructure_config.ncl`): -```text +```nickel { workspace_name = "production-eu", deployment_mode = 'enterprise, @@ -358,7 +358,7 @@ typedialog form ### Step 4: Use Output in Infrastructure -```text +```nickel # main_infrastructure.ncl let config = import "./infrastructure_config.ncl" in let schemas = import "../../provisioning/schemas/main.ncl" in @@ -398,7 +398,7 @@ let schemas = import "../../provisioning/schemas/main.ncl" in ### Form Definition (Advanced) -```text +```toml # server_advanced_form.toml [form] title = "Server Configuration" @@ -532,7 +532,7 @@ options = ["production", "staging", "testing", "development"] ### Output Structure -```text +```nickel { # Basic server_name = "web-prod-01", @@ -562,7 +562,7 @@ options = ["production", "staging", "testing", "development"] ### TypeDialog REST Endpoints -```text +```bash # Start TypeDialog server typedialog server --port 8080 @@ -574,7 +574,7 @@ curl -X POST http://localhost:8080/forms ### Response Format -```text +```json { "form_id": "srv_abc123", "status": "rendered", @@ -592,7 +592,7 @@ curl -X POST http://localhost:8080/forms ### Submit Form -```text +```bash curl -X POST
http://localhost:8080/forms/srv_abc123/submit -H "Content-Type: application/json" -d '{ @@ -607,7 +607,7 @@ curl -X POST http://localhost:8080/forms/srv_abc123/submit ### Response -```text +```json { "status": "success", "validation": "passed", @@ -631,7 +631,7 @@ curl -X POST http://localhost:8080/forms/srv_abc123/submit TypeDialog validates user input against Nickel contracts: -```text +```nickel # Nickel contract ServerConfig = { cpu_cores | Number, # Must be number @@ -645,7 +645,7 @@ ServerConfig = { ### Validation Rules in Form -```text +```toml [[fields]] name = "cpu_cores" type = "number" @@ -661,7 +661,7 @@ help = "Must be 1-32 cores" ### Use Case: Infrastructure Initialization -```text +```bash # 1. User runs initialization provisioning init --wizard @@ -679,7 +679,7 @@ provisioning init --wizard ### Implementation in Nushell -```text +```nushell # provisioning/core/nulib/provisioning_init.nu def provisioning_init_wizard [] { @@ -714,7 +714,7 @@ def provisioning_init_wizard [] { Show/hide fields based on user selections: -```text +```toml [[fields]] name = "backup_retention" label = "Backup Retention (days)" @@ -726,7 +726,7 @@ visible_if = "enable_backup == true" # Only shown if backup enabled Set defaults based on other fields: -```text +```toml [[fields]] name = "deployment_mode" type = "select" @@ -741,7 +741,7 @@ default_from = "deployment_mode" # Can reference other fields ### Custom Validation -```text +```toml [[fields]] name = "memory_gb" type = "number" @@ -755,7 +755,7 @@ help = "Memory must be at least 2 GB per CPU core" TypeDialog can output to multiple formats: -```text +```bash # Output to Nickel (recommended for IaC) typedialog form --config form.toml --output nickel @@ -777,7 +777,7 @@ TypeDialog supports three rendering backends: ### 1. CLI (Command-line prompts) -```text +```bash typedialog form --config form.toml --backend cli ``` @@ -786,7 +786,7 @@ typedialog form --config form.toml --backend cli ### 2.
TUI (Terminal User Interface - Ratatui) -```text +```bash typedialog form --config form.toml --backend tui ``` @@ -795,7 +795,7 @@ typedialog form --config form.toml --backend tui ### 3. Web (HTTP Server - Axum) -```text +```bash typedialog form --config form.toml --backend web --port 3000 # Opens http://localhost:3000 ``` @@ -813,7 +813,7 @@ typedialog form --config form.toml --backend web --port 3000 **Solution**: Verify field definitions match Nickel schema: -```text +```toml # Form field [[fields]] name = "cpu_cores" # Must match Nickel field name @@ -826,7 +826,7 @@ type = "number" # Must match Nickel type **Solution**: Add help text and validation rules: -```text +```toml [[fields]] name = "cpu_cores" validation_pattern = "^[1-9][0-9]*$" @@ -839,7 +839,7 @@ help = "Must be positive integer" **Solution**: Ensure all required fields in form: -```text +```toml [[fields]] name = "required_field" required = true # User must provide value @@ -851,7 +851,7 @@ required = true # User must provide value ### Step 1: Define Nickel Schema -```text +```nickel # workspace_schema.ncl { workspace = { @@ -866,7 +866,7 @@ required = true # User must provide value ### Step 2: Define Form -```text +```toml # workspace_form.toml [[fields]] name = "name" @@ -895,14 +895,14 @@ required = true ### Step 3: User Interaction -```text +```bash $ typedialog form --config workspace_form.toml --backend tui # User fills form interactively ``` ### Step 4: Output -```text +```nickel { workspace = { name = "production", @@ -916,7 +916,7 @@ $ typedialog form --config workspace_form.toml ### Step 5: Use in Provisioning -```text +```nickel # main.ncl let config = import "./workspace.ncl" in let schemas = import "provisioning/schemas/main.ncl" in diff --git a/docs/src/configuration/config-validation.md b/docs/src/configuration/config-validation.md index cf4fbe2..d95ac6e 100644 --- a/docs/src/configuration/config-validation.md +++ b/docs/src/configuration/config-validation.md
@@ -10,7 +10,7 @@ The new configuration system includes comprehensive schema validation to catch e Ensures all required fields are present: -```text +```toml # Schema definition [required] fields = ["name", "version", "enabled"] @@ -30,7 +30,7 @@ version = "1.0.0" Validates field types: -```text +```toml # Schema [fields.port] type = "int" @@ -54,7 +54,7 @@ port = "8080" # Error: Expected int, got string Restricts values to predefined set: -```text +```toml # Schema [fields.environment] type = "string" @@ -71,7 +71,7 @@ environment = "production" # Error: Must be one of: dev, staging, prod Validates numeric ranges: -```text +```toml # Schema [fields.port] type = "int" @@ -92,7 +92,7 @@ port = 70000 # Error: Must be <= 65535 Validates string patterns using regex: -```text +```toml # Schema [fields.email] type = "string" @@ -109,7 +109,7 @@ email = "not-an-email" # Error: Does not match pattern Warns about deprecated configuration: -```text +```toml # Schema [deprecated] fields = ["old_field"] @@ -125,7 +125,7 @@ old_field = "value" # Warning: old_field is deprecated. Use new_field instead.
### Command Line -```text +```bash # Validate workspace config provisioning workspace config validate @@ -141,7 +141,7 @@ provisioning workspace config validate --verbose ### Programmatic Usage -```text +```nushell use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu * # Load config @@ -171,7 +171,7 @@ if ($result.warnings | length) > 0 { ### Pretty Print Results -```text +```nushell # Validate and print formatted results let result = (validate-workspace-config $config) print-validation-results $result @@ -183,7 +183,7 @@ print-validation-results $result File: `/Users/Akasha/project-provisioning/provisioning/config/workspace.schema.toml` -```text +```toml [required] fields = ["workspace", "paths"] @@ -222,7 +222,7 @@ enum = ["debug", "info", "warn", "error"] File: `/Users/Akasha/project-provisioning/provisioning/extensions/providers/aws/config.schema.toml` -```text +```toml [required] fields = ["provider", "credentials"] @@ -279,7 +279,7 @@ old_region_field = "provider.region" File: `/Users/Akasha/project-provisioning/provisioning/platform/orchestrator/config.schema.toml` -```text +```toml [required] fields = ["service", "server"] @@ -325,7 +325,7 @@ type = "string" File: `/Users/Akasha/project-provisioning/provisioning/core/services/kms/config.schema.toml` -```text +```toml [required] fields = ["kms", "encryption"] @@ -372,7 +372,7 @@ old_kms_type = "kms.provider" ### 1. Development -```text +```bash # Create new config vim ~/workspaces/dev/config/provisioning.yaml @@ -386,7 +386,7 @@ provisioning workspace config validate ### 2. CI/CD Pipeline -```text +```yaml # GitLab CI validate-config: stage: validate @@ -402,7 +402,7 @@ validate-config: ### 3.
Pre-Deployment -```text +```bash # Validate all configurations before deployment provisioning workspace config validate --verbose provisioning provider validate --all @@ -418,7 +418,7 @@ fi ### Clear Error Format -```text +```bash ❌ Validation failed Errors: @@ -445,7 +445,7 @@ Each error includes: ### Pattern 1: Hostname Validation -```text +```toml [fields.hostname] type = "string" pattern = "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$" @@ -453,7 +453,7 @@ pattern = "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$" ### Pattern 2: Email Validation -```text +```toml [fields.email] type = "string" pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$" @@ -461,7 +461,7 @@ pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$" ### Pattern 3: Semantic Version -```text +```toml [fields.version] type = "string" pattern = "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$" @@ -469,7 +469,7 @@ pattern = "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$" ### Pattern 4: URL Validation -```text +```toml [fields.url] type = "string" pattern = "^https?://[a-zA-Z0-9.-]+(:[0-9]+)?(/.*)?$" @@ -477,7 +477,7 @@ pattern = "^https?://[a-zA-Z0-9.-]+(:[0-9]+)?(/.*)?$" ### Pattern 5: IPv4 Address -```text +```toml [fields.ip_address] type = "string" pattern = "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$" @@ -485,7 +485,7 @@ pattern = "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$" ### Pattern 6: AWS Resource ID -```text +```toml [fields.instance_id] type = "string" pattern = "^i-[a-f0-9]{8,17}$" @@ -503,14 +503,14 @@ pattern = "^vpc-[a-f0-9]{8,17}$" ### Unit Tests -```text +```bash # Run validation test suite nu provisioning/tests/config_validation_tests.nu ``` ### Integration Tests -```text +```bash # Test with real configs provisioning test validate --workspace dev provisioning test validate --workspace staging @@ -519,7 +519,7 @@ provisioning test validate --workspace prod ### Custom Validation -```text +```nushell # Create custom validation function def validate-custom-config [config: record] { let result = (validate-workspace-config
$config) @@ -543,7 +543,7 @@ def validate-custom-config [config: record] { ### 1. Validate Early -```text +```bash # Validate during development provisioning workspace config validate @@ -552,7 +552,7 @@ provisioning workspace config validate ### 2. Use Strict Schemas -```text +```toml # Be explicit about types and constraints [fields.port] type = "int" @@ -564,7 +564,7 @@ max = 65535 ### 3. Document Patterns -```text +```toml # Include examples in schema [fields.email] type = "string" pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$" @@ -574,7 +574,7 @@ pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$" ### 4. Handle Deprecation -```text +```toml # Always provide replacement guidance [deprecated_replacements] old_field = "new_field" # Clear migration path @@ -582,7 +582,7 @@ old_field = "new_field" # Clear migration path ### 5. Test Schemas -```text +```toml # Include test cases in comments # Valid: "admin@example.com" # Invalid: "not-an-email" @@ -592,7 +592,7 @@ old_field = "new_field" # Clear migration path ### Schema File Not Found -```text +```bash # Error: Schema file not found: /path/to/schema.toml # Solution: Ensure schema exists @@ -601,7 +601,7 @@ ls -la /Users/Akasha/project-provisioning/provisioning/config/*.schema.toml ### Pattern Not Matching -```text +```bash # Error: Field hostname does not match pattern # Debug: Test pattern separately @@ -610,7 +610,7 @@ echo "my-hostname" | grep -E "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$" ### Type Mismatch -```text +```bash # Error: Expected int, got string # Check config diff --git a/docs/src/development/auth-metadata-guide.md b/docs/src/development/auth-metadata-guide.md index 04e0ab4..1070fce 100644 --- a/docs/src/development/auth-metadata-guide.md +++ b/docs/src/development/auth-metadata-guide.md @@ -28,7 +28,7 @@ This guide describes the metadata-driven authentication system implemented over ### System Components -```text +```bash ┌─────────────────────────────────────────────────────────────┐ │ User Command │
└────────────────────────────────┬──────────────────────────────┘ @@ -89,7 +89,7 @@ This guide describes the metadata-driven authentication system implemented over ### Installation Steps -```text +```bash # 1. Clone or update repository git clone https://github.com/your-org/project-provisioning.git cd project-provisioning @@ -113,7 +113,7 @@ nu tests/test-metadata-cache-benchmark.nu ### Basic Commands -```text +```bash # Initialize authentication provisioning login @@ -135,7 +135,7 @@ provisioning server create --name test --check ### Authentication Flow -```text +```bash # 1. Login (required for production operations) $ provisioning login Username: alice@example.com @@ -160,7 +160,7 @@ Auth check: Check auth for destructive operation ### Check Mode (Bypass Auth for Testing) -```text +```bash # Dry-run without auth checks provisioning server create --name test --check @@ -172,7 +172,7 @@ Dry-run mode - no changes will be made ### Non-Interactive CI/CD Mode -```text +```bash # Automated mode - skip confirmations provisioning server create --name web-01 --yes @@ -189,7 +189,7 @@ PROVISIONING_NON_INTERACTIVE=1 provisioning server create --name web-02 --yes **Old Pattern** (Before Fase 5): -```text +```nushell # Hardcoded auth check let response = (input "Delete server? (yes/no): ") if $response != "yes" { exit 1 } @@ -203,7 +203,7 @@ export def delete-server [name: string, --yes] { **New Pattern** (After Fase 5): -```text +```nushell # Metadata header # [command] # name = "server delete" @@ -226,7 +226,7 @@ export def delete-server [name: string, --yes] { 1. Add metadata header after shebang: -```text +```nushell #!/usr/bin/env nu # [command] # name = "server create" @@ -241,7 +241,7 @@ export def create-server [name: string] { 1. Register in `provisioning/schemas/main.ncl`: -```text +```nickel let server_create = { name = "server create", domain = "infrastructure", @@ -259,7 +259,7 @@ server_create 1.
Handler integration (happens in dispatcher): -```text +```bash # Dispatcher automatically: # 1. Loads metadata for "server create" # 2. Validates auth based on requirements @@ -269,7 +269,7 @@ server_create ### Phase 3: Validating Migration -```text +```bash # Validate metadata headers nu utils/validate-metadata-headers.nu @@ -292,7 +292,7 @@ nu utils/search-scripts.nu list **Step 1: Create metadata in main.ncl** -```text +```nickel let new_feature_command = { name = "feature command", domain = "infrastructure", @@ -310,7 +310,7 @@ new_feature_command **Step 2: Add metadata header to script** -```text +```nushell #!/usr/bin/env nu # [command] # name = "feature command" @@ -325,7 +325,7 @@ export def feature-command [param: string] { **Step 3: Implement handler function** -```text +```nushell # Handler registered in dispatcher export def handle-feature-command [ action: string @@ -342,7 +342,7 @@ export def handle-feature-command [ **Step 4: Test with check mode** -```text +```bash # Dry-run without auth provisioning feature command --check @@ -389,7 +389,7 @@ provisioning feature command --yes **Pattern 1: For Long Operations** -```text +```nushell # Use orchestrator for operations >2 seconds if (get-operation-duration "my-operation") > 2000 { submit-to-orchestrator $operation @@ -399,7 +399,7 @@ if (get-operation-duration "my-operation") > 2000 { **Pattern 2: For Batch Operations** -```text +```bash # Use batch workflows for multiple operations nu -c " use core/nulib/workflows/batch.nu * @@ -409,7 +409,7 @@ batch submit workflows/batch-deploy.ncl --parallel-limit 5 **Pattern 3: For Metadata Overhead** -```text +```bash # Cache hit rate optimization # Current: 40-100x faster with warm cache # Target: >95% cache hit rate @@ -420,7 +420,7 @@ batch submit workflows/batch-deploy.ncl --parallel-limit 5 ### Running Tests -```text +```bash # End-to-End Integration Tests nu tests/test-fase5-e2e.nu @@ -456,7 +456,7 @@ for test in tests/test-*.nu { nu $test } **Solution**:
Ensure metadata is registered in `main.ncl` -```text +```bash # Check if command is in metadata grep "command_name" provisioning/schemas/main.ncl ``` @@ -465,7 +465,7 @@ grep "command_name" provisioning/schemas/main.ncl **Solution**: Verify user has required permission level -```text +```bash # Check current user permissions provisioning auth whoami @@ -480,7 +480,7 @@ get-command-metadata 'server create' **Solution**: Check cache status -```text +```bash # Force cache reload rm ~/.cache/provisioning/command_metadata.json @@ -492,7 +492,7 @@ nu tests/test-metadata-cache-benchmark.nu **Solution**: Run compliance check -```text +```bash # Validate Nushell compliance nu --ide-check 100 @@ -514,7 +514,7 @@ grep "let mut" # Should be empty ### Real-World Impact -```text +```bash Scenario: 20 sequential commands Without cache: 20 × 200 ms = 4 seconds With cache: 1 × 200 ms + 19 × 5 ms = 295 ms diff --git a/docs/src/development/build-system.md b/docs/src/development/build-system.md index d1c489f..102b654 100644 --- a/docs/src/development/build-system.md +++ b/docs/src/development/build-system.md @@ -30,7 +30,7 @@ The build system is a comprehensive, Makefile-based solution that orchestrates: ## Quick Start -```text +```bash # Navigate to build system cd src/tools @@ -61,7 +61,7 @@ make status **Variables**: -```text +```bash # Project metadata PROJECT_NAME := provisioning VERSION := $(git describe --tags --always --dirty) @@ -95,7 +95,7 @@ PARALLEL := true **`make build-platform`** - Build platform binaries for all targets -```text +```bash make build-platform # Equivalent to: nu tools/build/compile-platform.nu @@ -107,7 +107,7 @@ nu tools/build/compile-platform.nu **`make build-core`** - Bundle core Nushell libraries -```text +```bash make build-core # Equivalent to: nu tools/build/bundle-core.nu @@ -119,7 +119,7 @@ nu tools/build/bundle-core.nu **`make validate-nickel`** - Validate and compile Nickel schemas -```text +```bash make validate-nickel # Equivalent to:
nu tools/build/validate-nickel.nu @@ -142,7 +142,7 @@ nu tools/build/validate-nickel.nu **`make dist-generate`** - Generate complete distributions -```text +```bash make dist-generate # Advanced usage: make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete @@ -176,7 +176,7 @@ make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete **`make release`** - Create a complete release (requires VERSION) -```text +```bash make release VERSION=2.1.0 ``` @@ -217,7 +217,7 @@ Features: **`make dev-build`** - Quick development build -```text +```bash make dev-build # Fast build with minimal validation ``` @@ -250,7 +250,7 @@ make dev-build **`make docs`** - Generate documentation -```text +```bash make docs # Generates API docs, user guides, and examples ``` @@ -265,7 +265,7 @@ make docs **`make clean`** - Clean all build artifacts -```text +```bash make clean # Removes all build, distribution, and package directories ``` @@ -290,7 +290,7 @@ make clean **`make status`** - Show build system status -```text +```bash make status # Output: # Build System Status @@ -345,21 +345,21 @@ make status **`make linux`** - Build for Linux only -```text +```bash make linux # Sets PLATFORMS=linux-amd64 ``` **`make macos`** - Build for macOS only -```text +```bash make macos # Sets PLATFORMS=macos-amd64 ``` **`make windows`** - Build for Windows only -```text +```bash make windows # Sets PLATFORMS=windows-amd64 ``` @@ -368,7 +368,7 @@ make windows **`make debug`** - Build with debug information -```text +```bash make debug # Sets BUILD_MODE=debug VERBOSE=true ``` @@ -398,7 +398,7 @@ All build tools are implemented as Nushell scripts with comprehensive parameter **Usage**: -```text +```nushell nu compile-platform.nu [options] Options: @@ -412,7 +412,7 @@ Options: **Example**: -```text +```nushell nu compile-platform.nu --target x86_64-apple-darwin --release @@ -435,7 +435,7 @@ nu compile-platform.nu **Usage**: -```text +```nushell nu bundle-core.nu [options] 
Options: @@ -468,7 +468,7 @@ Options: **Usage**: -```text +```nushell nu validate-nickel.nu [options] Options: @@ -490,7 +490,7 @@ Options: **Usage**: -```text +```nushell nu test-distribution.nu [options] Options: @@ -514,7 +514,7 @@ Options: **Usage**: -```text +```nushell nu clean-build.nu [options] Options: @@ -544,7 +544,7 @@ Options: **Usage**: -```text +```nushell nu generate-distribution.nu [command] [options] Commands: @@ -566,7 +566,7 @@ Options: **Advanced Examples**: -```text +```bash # Complete multi-platform release nu generate-distribution.nu --version 2.1.0 @@ -599,7 +599,7 @@ nu generate-distribution.nu status **Usage**: -```text +```nushell nu create-installer.nu DISTRIBUTION_DIR [options] Options: @@ -660,7 +660,7 @@ Options: **Usage**: -```text +```nushell nu create-release.nu [options] Options: @@ -694,7 +694,7 @@ Options: **Install Rust Targets**: -```text +```bash # Install additional targets rustup target add x86_64-apple-darwin rustup target add x86_64-pc-windows-gnu @@ -706,7 +706,7 @@ rustup target add aarch64-apple-darwin **macOS Cross-Compilation**: -```text +```bash # Install osxcross toolchain brew install FiloSottile/musl-cross/musl-cross brew install mingw-w64 @@ -714,7 +714,7 @@ brew install mingw-w64 **Windows Cross-Compilation**: -```text +```bash # Install Windows dependencies brew install mingw-w64 # or on Linux: @@ -725,7 +725,7 @@ sudo apt-get install gcc-mingw-w64 **Single Platform**: -```text +```bash # Build for macOS from Linux make build-platform RUST_TARGET=x86_64-apple-darwin @@ -735,7 +735,7 @@ make build-platform RUST_TARGET=x86_64-pc-windows-gnu **Multiple Platforms**: -```text +```bash # Build for all configured platforms make build-cross @@ -745,7 +745,7 @@ make build-cross PLATFORMS=linux-amd64,macos-amd64,windows-amd64 **Platform-Specific Targets**: -```text +```bash # Quick platform builds make linux # Linux AMD64 make macos # macOS AMD64 @@ -775,7 +775,7 @@ make windows # Windows AMD64 **Check Dependencies**:
-```text +```bash make info # Shows versions of all required tools @@ -789,7 +789,7 @@ make info **Install Missing Dependencies**: -```text +```bash # Install Nushell cargo install nu @@ -810,7 +810,7 @@ cargo install cross **Build Cache Management**: -```text +```bash # Clean Cargo cache cargo clean @@ -829,7 +829,7 @@ make clean SCOPE=cache **Error**: `linker 'cc' not found` -```text +```bash # Solution: Install build essentials sudo apt-get install build-essential # Linux xcode-select --install # macOS @@ -837,14 +837,14 @@ xcode-select --install # macOS **Error**: `target not found` -```text +```bash # Solution: Install target rustup target add x86_64-unknown-linux-gnu ``` **Error**: Cross-compilation linking errors -```text +```bash # Solution: Use cross instead of cargo cargo install cross make build-platform CROSS=true @@ -854,7 +854,7 @@ make build-platform CROSS=true **Error**: `command not found` -```text +```bash # Solution: Ensure Nushell is in PATH which nu export PATH="$HOME/.cargo/bin:$PATH" @@ -862,14 +862,14 @@ export PATH="$HOME/.cargo/bin:$PATH" **Error**: Permission denied -```text +```bash # Solution: Make scripts executable chmod +x src/tools/build/*.nu ``` **Error**: Module not found -```text +```bash # Solution: Check working directory cd src/tools nu build/compile-platform.nu --help @@ -879,7 +879,7 @@ nu build/compile-platform.nu --help **Error**: `nickel command not found` -```text +```bash # Solution: Install Nickel cargo install nickel # or @@ -888,7 +888,7 @@ brew install nickel **Error**: Schema validation failed -```text +```bash # Solution: Check Nickel syntax nickel fmt schemas/ nickel check schemas/ @@ -900,7 +900,7 @@ nickel check schemas/ **Optimizations**: -```text +```bash # Enable parallel builds make build-all PARALLEL=true @@ -913,7 +913,7 @@ export CARGO_BUILD_JOBS=8 **Cargo Configuration** (`~/.cargo/config.toml`): -```text +```toml [build] jobs = 8 @@ -925,7 +925,7 @@ linker = "lld" **Solutions**: -```text +```bash #
Reduce parallel jobs export CARGO_BUILD_JOBS=2 @@ -942,7 +942,7 @@ make clean-dist **Validation**: -```text +```bash # Test distribution make test-dist @@ -954,7 +954,7 @@ nu src/tools/package/validate-package.nu dist/ **Optimizations**: -```text +```bash # Strip binaries make package-binaries STRIP=true @@ -969,7 +969,7 @@ make dist-generate VARIANTS=minimal **Enable Debug Logging**: -```text +```bash # Set environment export PROVISIONING_DEBUG=true export RUST_LOG=debug @@ -983,7 +983,7 @@ make build-all VERBOSE=true **Debug Information**: -```text +```bash # Show debug information make debug-info @@ -1000,7 +1000,7 @@ make info **Example Workflow** (`.github/workflows/build.yml`): -```text +```yaml name: Build and Test on: [push, pull_request] @@ -1034,7 +1034,7 @@ jobs: **Release Workflow**: -```text +```yaml name: Release on: push: @@ -1061,7 +1061,7 @@ jobs: **Test CI Pipeline Locally**: -```text +```bash # Run CI build pipeline make ci-build @@ -1073,4 +1073,4 @@ make ci-release ``` This build system provides a comprehensive, maintainable foundation for the provisioning project's development lifecycle, from local development to -production releases. +production releases. \ No newline at end of file diff --git a/docs/src/development/command-handler-guide.md b/docs/src/development/command-handler-guide.md index c4f5ba8..e15f9a9 100644 --- a/docs/src/development/command-handler-guide.md +++ b/docs/src/development/command-handler-guide.md @@ -19,7 +19,7 @@ work with this architecture. ### Architecture Components -```text +```bash provisioning/core/nulib/ ├── provisioning (211 lines) - Main entry point ├── main_provisioning/ @@ -58,7 +58,7 @@ Commands are organized by domain.
Choose the appropriate handler: Edit `provisioning/core/nulib/main_provisioning/commands/infrastructure.nu`: -```text +```nushell # Add to the handle_infrastructure_command match statement export def handle_infrastructure_command [ command: string @@ -102,7 +102,7 @@ If you want shortcuts like `provisioning s status`: Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`: -```text +```nushell export def get_command_registry []: nothing -> record { { # Infrastructure commands @@ -127,7 +127,7 @@ Let's say you want to add better error handling to the taskserv command: **Before:** -```text +```nushell def handle_taskserv [ops: string, flags: record] { let args = build_module_args $flags $ops run_module $args "taskserv" --exec @@ -136,7 +136,7 @@ def handle_taskserv [ops: string, flags: record] { **After:** -```text +```nushell def handle_taskserv [ops: string, flags: record] { # Validate taskserv name if provided let first_arg = ($ops | split row " " | get -o 0) @@ -163,7 +163,7 @@ def handle_taskserv [ops: string, flags: record] { The `flags.nu` module provides centralized flag handling: -```text +```nushell # Parse all flags into normalized record let parsed_flags = (parse_common_flags { version: $version, v: $v, info: $info, @@ -210,7 +210,7 @@ If you need to add a new flag: **Example: Adding `--timeout` flag** -```text +```nushell # 1. In provisioning main file (parameter list) def main [ # ... existing parameters @@ -253,7 +253,7 @@ export def build_module_args [flags: record, extra: string = ""]: nothing -> str Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`: -```text +```nushell export def get_command_registry []: nothing -> record { { # ...
existing shortcuts @@ -273,7 +273,7 @@ export def get_command_registry []: nothing -> record { ### Running the Test Suite -```text +```bash # Run comprehensive test suite nu tests/test_provisioning_refactor.nu ``` @@ -293,7 +293,7 @@ The test suite validates: Edit `tests/test_provisioning_refactor.nu`: -```text +```nushell # Add your test function export def test_my_new_feature [] { print " @@ -318,7 +318,7 @@ export def main [] { ### Manual Testing -```text +```bash # Test command execution provisioning/core/cli/provisioning my-command test --check @@ -336,7 +336,7 @@ provisioning/core/cli/provisioning help my-command # Bi-directional **Use Case**: Command just needs to execute a module with standard flags -```text +```nushell def handle_simple_command [ops: string, flags: record] { let args = build_module_args $flags $ops run_module $args "module_name" --exec @@ -347,7 +347,7 @@ def handle_simple_command [ops: string, flags: record] { **Use Case**: Need to validate input before execution -```text +```nushell def handle_validated_command [ops: string, flags: record] { # Validate let first_arg = ($ops | split row " " | get -o 0) @@ -367,7 +367,7 @@ def handle_validated_command [ops: string, flags: record] { **Use Case**: Command has multiple subcommands (like `server create`, `server delete`) -```text +```nushell def handle_complex_command [ops: string, flags: record] { let subcommand = ($ops | split row " " | get -o 0) let rest_ops = ($ops | split row " " | skip 1 | str join " ") @@ -389,7 +389,7 @@ def handle_complex_command [ops: string, flags: record] { **Use Case**: Command behavior changes based on flags -```text +```nushell def handle_flag_routed_command [ops: string, flags: record] { if $flags.check_mode { # Dry-run mode @@ -415,7 +415,7 @@ Each handler should do **one thing well**: ### 2.
Use Descriptive Error Messages -```text +```nushell # ❌ Bad print "Error" @@ -434,7 +434,7 @@ print "Use 'provisioning taskserv list' to see all available taskservs" Don't repeat code - use centralized functions: -```text +```nushell # ❌ Bad: Repeating flag handling def handle_bad [ops: string, flags: record] { let use_check = if $flags.check_mode { "--check " } else { "" } @@ -479,7 +479,7 @@ Before committing: **Fix**: Use relative imports with `.nu` extension: -```text +```nushell # ✅ Correct use ../flags.nu * use ../../lib_provisioning * @@ -495,7 +495,7 @@ use lib_provisioning * **Fix**: Use proper Nushell 0.107 type signature: -```text +```nushell # ✅ Correct export def my_function [param: string]: nothing -> string { "result" @@ -513,7 +513,7 @@ export def my_function [param: string] -> string { **Fix**: Add to `dispatcher.nu:get_command_registry`: -```text +```nushell "myshortcut" => "domain command" ``` @@ -523,7 +523,7 @@ export def my_function [param: string] -> string { **Fix**: Use centralized flag builder: -```text +```nushell let args = build_module_args $flags $ops run_module $args "module" --exec ``` @@ -532,7 +532,7 @@ run_module $args "module" --exec ### File Locations -```text +```bash provisioning/core/nulib/ ├── provisioning - Main entry, flag definitions ├── main_provisioning/ @@ -551,7 +551,7 @@ docs/ ### Key Functions -```text +```nushell # In flags.nu parse_common_flags [flags: record]: nothing -> record build_module_args [flags: record, extra: string = ""]: nothing -> string @@ -575,7 +575,7 @@ handle_*_command [command: string, ops: string, flags: record] ### Testing Commands -```text +```bash # Run full test suite nu tests/test_provisioning_refactor.nu diff --git a/docs/src/development/command-reference.md b/docs/src/development/command-reference.md index 39b3ec8..e6e057c 100644 --- a/docs/src/development/command-reference.md +++ b/docs/src/development/command-reference.md @@ -19,7 +19,7 @@ This guide includes: ### Essential Commands -```text
+```bash # System status provisioning status provisioning health diff --git a/docs/src/development/ctrl-c-implementation-notes.md b/docs/src/development/ctrl-c-implementation-notes.md index 894acd8..038df27 100644 --- a/docs/src/development/ctrl-c-implementation-notes.md +++ b/docs/src/development/ctrl-c-implementation-notes.md @@ -44,7 +44,7 @@ to signal cancellation and let each layer of the call stack handle it gracefully ### 1. Helper Functions (ssh.nu:11-32) -```text +```nushell def check_sudo_cached []: nothing -> bool { let result = (do --ignore-errors { ^sudo -n true } | complete) $result.exit_code == 0 @@ -71,7 +71,7 @@ def run_sudo_with_interrupt_check [ ### 2. Pre-emptive Warning (ssh.nu:155-160) -```text +```nushell if $server.fix_local_hosts and not (check_sudo_cached) { print " ⚠ Sudo access required for --fix-local-hosts" @@ -87,7 +87,7 @@ if $server.fix_local_hosts and not (check_sudo_cached) { All sudo commands wrapped with detection: -```text +```nushell let result = (do --ignore-errors { ^sudo } | complete) if $result.exit_code == 1 and ($result.stderr | str contains "password is required") { print " @@ -102,7 +102,7 @@ if $result.exit_code == 1 and ($result.stderr | str contains "password is requir Using Nushell's `reduce` instead of mutable variables: -```text +```nushell let all_succeeded = ($settings.data.servers | reduce -f true { |server, acc| if $text_match == null or $server.hostname == $text_match { let result = (on_server_ssh $settings $server $ip_type $request_from $run) @@ -117,7 +117,7 @@ let all_succeeded = ($settings.data.servers | reduce -f true { |server, acc| ### 5. 
Caller Handling (create.nu:262-266, generate.nu:269-273) -```text +```nushell let ssh_result = (on_server_ssh $settings $server "pub" "create" false) if not $ssh_result { _print " @@ -130,7 +130,7 @@ if not $ssh_result { ## Error Flow Diagram -```text +```bash User presses CTRL-C during password prompt ↓ sudo exits with code 1, stderr: "password is required" @@ -162,7 +162,7 @@ Clean exit, no cryptic errors Captures both stdout, stderr, and exit code without throwing: -```text +```nushell let result = (do --ignore-errors { ^sudo command } | complete) # result = { stdout: "...", stderr: "...", exit_code: 1 } ``` @@ -171,7 +171,7 @@ let result = (do --ignore-errors { ^sudo command } | complete) Instead of mutable variables in loops: -```text +```nushell # ❌ BAD - mutable capture in closure mut all_succeeded = true $servers | each { |s| @@ -186,7 +186,7 @@ let all_succeeded = ($servers | reduce -f true { |s, acc| ### 3. Early Returns for Error Handling -```text +```nushell if not $condition { print "Error message" return false @@ -198,7 +198,7 @@ if not $condition { ### Scenario 1: CTRL-C During First Sudo Command -```text +```bash provisioning -c server create # Password: [CTRL-C] @@ -210,7 +210,7 @@ provisioning -c server create ### Scenario 2: Pre-cached Credentials -```text +```bash sudo -v provisioning -c server create @@ -219,7 +219,7 @@ provisioning -c server create ### Scenario 3: Wrong Password 3 Times -```text +```bash provisioning -c server create # Password: [wrong] # Password: [wrong] @@ -230,7 +230,7 @@ provisioning -c server create ### Scenario 4: Multiple Servers, Cancel on Second -```text +```bash # If creating multiple servers and CTRL-C on second: # - First server completes successfully # - Second server shows cancellation message @@ -250,7 +250,7 @@ When adding new sudo commands to the codebase: Example template: -```text +```nushell let result = (do --ignore-errors { ^sudo new-command } | complete) if $result.exit_code == 1 and ($result.stderr 
| str contains "password is required") { print " diff --git a/docs/src/development/dev-configuration.md b/docs/src/development/dev-configuration.md index 6c37972..3499402 100644 --- a/docs/src/development/dev-configuration.md +++ b/docs/src/development/dev-configuration.md @@ -42,7 +42,7 @@ hierarchical TOML configuration system with comprehensive validation and interpo The configuration system implements a clear precedence hierarchy (lowest to highest precedence): -```text +```bash Configuration Hierarchy (Low → High Precedence) ┌─────────────────────────────────────────────────┐ │ 1. config.defaults.toml │ ← System defaults @@ -69,7 +69,7 @@ Configuration Hierarchy (Low → High Precedence) **Configuration Accessor Functions**: -```text +```nushell # Core configuration access use core/nulib/lib_provisioning/config/accessor.nu @@ -93,7 +93,7 @@ let data_path = (get-config-interpolated "paths.data") # Resolves {{paths.base} **Before (ENV-based)**: -```text +```bash export PROVISIONING_UPCLOUD_API_KEY="your-key" export PROVISIONING_UPCLOUD_API_URL="https://api.upcloud.com" export PROVISIONING_LOG_LEVEL="debug" @@ -102,7 +102,7 @@ export PROVISIONING_BASE_PATH="/usr/local/provisioning" **After (Config-based)**: -```text +```toml # config.user.toml [providers.upcloud] api_key = "your-key" @@ -123,7 +123,7 @@ base = "/usr/local/provisioning" **Location**: Root of the repository **Modification**: Should only be modified by system maintainers -```text +```toml # System-wide defaults - DO NOT MODIFY in production # Copy values to config.user.toml for customization @@ -203,7 +203,7 @@ sample_rate = 0.1 **Location**: User's configuration directory **Modification**: Users should customize this file for their needs -```text +```toml # User configuration - customizations and personal preferences # This file overrides system defaults @@ -249,7 +249,7 @@ commit_prefix = "[{{env.USER}}]" **Location**: Project root directory **Version Control**: Should be committed to version 
control -```text +```toml # Project-specific configuration # Shared settings for this project/repository @@ -296,7 +296,7 @@ developers = ["dev-team@company.com"] **Location**: Infrastructure directory **Usage**: Overrides for specific infrastructure deployments -```text +```toml # Infrastructure-specific configuration # Overrides for this specific infrastructure deployment @@ -345,7 +345,7 @@ retention_days = 30 **Purpose**: Development-optimized settings **Features**: Enhanced debugging, local providers, relaxed validation -```text +```toml # Development environment configuration # Optimized for local development and testing @@ -404,7 +404,7 @@ mock_external_apis = true **Purpose**: Testing-specific configuration **Features**: Mock services, isolated environments, comprehensive logging -```text +```toml # Testing environment configuration # Optimized for automated testing and CI/CD @@ -453,7 +453,7 @@ fail_fast = true **Purpose**: Production-optimized settings **Features**: Performance optimization, security hardening, comprehensive monitoring -```text +```toml # Production environment configuration # Optimized for performance, reliability, and security @@ -513,7 +513,7 @@ connection_pooling = true **Creating User Configuration**: -```text +```bash # Create user config directory mkdir -p ~/.config/provisioning @@ -526,7 +526,7 @@ $EDITOR ~/.config/provisioning/config.toml **Common User Customizations**: -```text +```toml # Personal configuration customizations [paths] @@ -561,7 +561,7 @@ slack_webhook = "{{env.SLACK_WEBHOOK_URL}}" **Workspace Integration**: -```text +```toml # Workspace-aware configuration # workspace/config/developer.toml @@ -590,7 +590,7 @@ auto_create = true **Built-in Validation**: -```text +```bash # Validate current configuration provisioning validate config @@ -606,7 +606,7 @@ provisioning config debug **Validation Rules**: -```text +```nushell # Configuration validation in Nushell def validate_configuration [config: record] -> record { let 
errors = [] @@ -645,7 +645,7 @@ def validate_configuration [config: record] -> record { **Configuration-Driven Error Handling**: -```text +```nushell # Never patch with hardcoded fallbacks - use configuration def get_api_endpoint [provider: string] -> string { # Good: Configuration-driven with clear error @@ -675,7 +675,7 @@ def get_api_endpoint_bad [provider: string] -> string { **Comprehensive Error Context**: -```text +```nushell def load_provider_config [provider: string] -> record { let config_section = $"providers.($provider)" @@ -704,7 +704,7 @@ def load_provider_config [provider: string] -> record { **Supported Interpolation Variables**: -```text +```toml # Environment variables base_path = "{{env.HOME}}/provisioning" user_name = "{{env.USER}}" @@ -732,7 +732,7 @@ architecture = "{{system.arch}}" **Dynamic Path Resolution**: -```text +```toml [paths] base = "{{env.HOME}}/.local/share/provisioning" config = "{{paths.base}}/config" @@ -747,7 +747,7 @@ log_file = "{{paths.logs}}/upcloud-{{now.date}}.log" **Environment-Aware Configuration**: -```text +```toml [core] name = "provisioning-{{system.hostname}}-{{env.USER}}" version = "{{release.version}}+{{git.commit}}.{{now.timestamp}}" @@ -770,7 +770,7 @@ tags = { **Custom Interpolation Logic**: -```text +```nushell # Interpolation resolver def resolve_interpolation [template: string, context: record] -> string { let interpolations = ($template | parse --regex '\{\{([^}]+)\}\}') @@ -816,7 +816,7 @@ def resolve_interpolation_key [key_path: string, context: record] -> string { **Backward Compatibility**: -```text +```nushell # Configuration accessor with ENV fallback def get-config-with-env-fallback [ config_key: string, @@ -855,7 +855,7 @@ def get-config-with-env-fallback [ **Available Migration Scripts**: -```text +```bash # Migrate existing ENV-based setup to configuration nu src/tools/migration/env-to-config.nu --scan-environment --create-config @@ -874,7 +874,7 @@ nu src/tools/migration/generate-config.nu --output-file 
config.migrated.toml **Error**: `Configuration file not found` -```text +```bash # Solution: Check configuration file paths provisioning config paths @@ -889,7 +889,7 @@ provisioning config debug **Error**: `Invalid TOML syntax in configuration file` -```text +```bash # Solution: Validate TOML syntax nu -c "open config.user.toml | from toml" @@ -904,7 +904,7 @@ provisioning config check --verbose **Error**: `Failed to resolve interpolation: {{env.MISSING_VAR}}` -```text +```bash # Solution: Check available interpolation variables provisioning config interpolation --list-variables @@ -919,7 +919,7 @@ provisioning config debug --show-interpolation **Error**: `Provider 'upcloud' configuration invalid` -```text +```bash # Solution: Validate provider configuration provisioning validate config --section providers.upcloud @@ -934,7 +934,7 @@ provisioning providers upcloud test --dry-run **Configuration Debugging**: -```text +```bash # Show complete resolved configuration provisioning config show --resolved @@ -955,7 +955,7 @@ provisioning config interpolation --debug "{{paths.data}}/{{env.USER}}" **Configuration Caching**: -```text +```bash # Enable configuration caching export PROVISIONING_CONFIG_CACHE=true @@ -968,7 +968,7 @@ provisioning config cache --stats **Startup Optimization**: -```text +```toml # Optimize configuration loading [performance] lazy_loading = true diff --git a/docs/src/development/dev-workspace-management.md b/docs/src/development/dev-workspace-management.md index 56222cd..22e05b0 100644 --- a/docs/src/development/dev-workspace-management.md +++ b/docs/src/development/dev-workspace-management.md @@ -34,7 +34,7 @@ The workspace system provides isolated development environments for the provisio ### Directory Structure -```text +```bash workspace/ ├── config/ # Development configuration │ ├── dev-defaults.toml # Development environment defaults @@ -97,7 +97,7 @@ workspace/ ### Quick Start -```text +```bash # Navigate to workspace cd workspace/tools @@ 
-110,7 +110,7 @@ nu workspace.nu init --user-name developer --infra-name my-dev-infra ### Complete Initialization -```text +```bash # Full initialization with all options nu workspace.nu init --user-name developer @@ -134,7 +134,7 @@ nu workspace.nu init **Verify Installation**: -```text +```bash # Check workspace health nu workspace.nu health --detailed @@ -147,7 +147,7 @@ nu workspace.nu list **Configure Development Environment**: -```text +```bash # Create user-specific configuration cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml @@ -170,7 +170,7 @@ The workspace implements a sophisticated path resolution system that prioritizes ### Using Path Resolution -```text +```nushell # Import path resolver use workspace/lib/path-resolver.nu @@ -188,7 +188,7 @@ let new_path = (path-resolver resolve_path "infra" "my-infra" --create-missing) **Hierarchical Configuration Loading**: -```text +```nushell # Resolve configuration with full hierarchy let config = (path-resolver resolve_config "user" --workspace-user "developer") @@ -203,7 +203,7 @@ let merged = (path-resolver resolve_config "merged" --workspace-user "developer" **Automatic Extension Discovery**: -```text +```nushell # Find custom provider extension let provider = (path-resolver resolve_extension "providers" "my-aws-provider") @@ -218,7 +218,7 @@ let cluster = (path-resolver resolve_extension "clusters" "development-cluster") **Workspace Health Validation**: -```text +```nushell # Check workspace health with automatic fixes let health = (path-resolver check_workspace_health --workspace-user "developer" --fix-issues) @@ -244,7 +244,7 @@ let runtime_status = (path-resolver check_runtime_health --workspace-user "devel **Development Environment** (`workspace/config/dev-defaults.toml`): -```text +```toml [core] name = "provisioning-dev" version = "dev-${git.branch}" @@ -273,7 +273,7 @@ max_size = "10 MB" **Testing Environment** (`workspace/config/test-defaults.toml`): -```text +```toml [core] name = 
"provisioning-test" version = "test-${build.timestamp}" @@ -302,7 +302,7 @@ test_output = true **User-Specific Configuration** (`workspace/config/{user}.toml`): -```text +```toml [core] name = "provisioning-${workspace.user}" version = "1.0.0-dev" @@ -339,7 +339,7 @@ email = "developer@company.com" **Workspace Configuration Management**: -```text +```bash # Show current configuration nu workspace.nu config show @@ -370,7 +370,7 @@ The workspace provides templates and tools for developing three types of extensi **Create New Provider**: -```text +```bash # Copy template cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider @@ -381,7 +381,7 @@ nu init.nu --provider-name my-provider --author developer **Provider Structure**: -```text +```bash workspace/extensions/providers/my-provider/ ├── kcl/ │ ├── provider.ncl # Provider configuration schema @@ -402,7 +402,7 @@ workspace/extensions/providers/my-provider/ **Test Provider**: -```text +```bash # Run provider tests nu workspace/extensions/providers/my-provider/nulib/provider.nu test @@ -417,7 +417,7 @@ nu workspace/extensions/providers/my-provider/tests/integration/basic-test.nu **Create New Task Service**: -```text +```bash # Copy template cp -r workspace/extensions/taskservs/template workspace/extensions/taskservs/my-service @@ -428,7 +428,7 @@ nu init.nu --service-name my-service --service-type database **Task Service Structure**: -```text +```bash workspace/extensions/taskservs/my-service/ ├── kcl/ │ ├── taskserv.ncl # Service configuration schema @@ -452,7 +452,7 @@ workspace/extensions/taskservs/my-service/ **Create New Cluster**: -```text +```bash # Copy template cp -r workspace/extensions/clusters/template workspace/extensions/clusters/my-cluster @@ -463,7 +463,7 @@ nu init.nu --cluster-name my-cluster --cluster-type web-stack **Testing Extensions**: -```text +```bash # Test extension syntax nu workspace.nu tools validate-extension providers/my-provider @@ -480,7 +480,7 @@ nu 
workspace.nu tools deploy-test clusters/my-cluster --infra test-env **Per-User Isolation**: -```text +```bash runtime/ ├── workspaces/ │ ├── developer/ # Developer's workspace data @@ -516,7 +516,7 @@ runtime/ **Initialize Runtime Environment**: -```text +```bash # Initialize for current user nu workspace/tools/runtime-manager.nu init @@ -526,7 +526,7 @@ nu workspace/tools/runtime-manager.nu init --user-name developer **Runtime Cleanup**: -```text +```bash # Clean cache older than 30 days nu workspace/tools/runtime-manager.nu cleanup --type cache --age 30d @@ -539,7 +539,7 @@ nu workspace/tools/runtime-manager.nu cleanup --type temp --force **Log Management**: -```text +```bash # View recent logs nu workspace/tools/runtime-manager.nu logs --action tail --lines 100 @@ -555,7 +555,7 @@ nu workspace/tools/runtime-manager.nu logs --action archive --older-than 7d **Cache Management**: -```text +```bash # Show cache statistics nu workspace/tools/runtime-manager.nu cache --action stats @@ -571,7 +571,7 @@ nu workspace/tools/runtime-manager.nu cache --action refresh --selective **Monitoring**: -```text +```bash # Monitor runtime usage nu workspace/tools/runtime-manager.nu monitor --duration 5m --interval 30s @@ -601,7 +601,7 @@ The workspace provides comprehensive health monitoring with automatic repair cap **Basic Health Check**: -```text +```bash # Quick health check nu workspace.nu health @@ -617,7 +617,7 @@ nu workspace.nu health --report-format json > health-report.json **Component-Specific Health Checks**: -```text +```bash # Check directory structure nu workspace/tools/workspace-health.nu check-directories --workspace-user developer @@ -635,7 +635,7 @@ nu workspace/tools/workspace-health.nu check-extensions --workspace-user develop **Example Health Report**: -```text +```json { "workspace_health": { "user": "developer", @@ -704,7 +704,7 @@ nu workspace/tools/workspace-health.nu check-extensions --workspace-user develop **Create Backup**: -```text +```bash # Basic 
backup nu workspace.nu backup @@ -732,7 +732,7 @@ nu workspace.nu backup --components config,extensions --name my-backup **List Available Backups**: -```text +```bash # List all backups nu workspace.nu restore --list-backups @@ -745,7 +745,7 @@ nu workspace.nu restore --show-contents --backup-name workspace-developer-202509 **Restore Operations**: -```text +```bash # Restore latest backup nu workspace.nu restore --latest @@ -771,7 +771,7 @@ nu workspace.nu restore --backup-name my-backup --restore-to different-user **Workspace Reset**: -```text +```bash # Reset with backup nu workspace.nu reset --backup-first @@ -784,7 +784,7 @@ nu workspace.nu reset --force --no-backup **Cleanup Operations**: -```text +```bash # Clean old data with dry-run nu workspace.nu cleanup --type old --age 14d --dry-run @@ -803,7 +803,7 @@ nu workspace.nu cleanup --user-name old-user --type all **Error**: `Workspace for user 'developer' not found` -```text +```bash # Solution: Initialize workspace nu workspace.nu init --user-name developer ``` @@ -812,7 +812,7 @@ nu workspace.nu init --user-name developer **Error**: `Path resolution failed for config/user` -```text +```bash # Solution: Fix with health check nu workspace.nu health --fix-issues @@ -824,7 +824,7 @@ nu workspace/lib/path-resolver.nu resolve_path "config" "user" --create-missing **Error**: `Invalid configuration syntax in user.toml` -```text +```bash # Solution: Validate and fix configuration nu workspace.nu config validate --user-name developer @@ -836,7 +836,7 @@ cp workspace/config/local-overrides.toml.example workspace/config/developer.toml **Error**: `Runtime directory permissions error` -```text +```bash # Solution: Reinitialize runtime nu workspace/tools/runtime-manager.nu init --user-name developer --force @@ -848,7 +848,7 @@ chmod -R 755 workspace/runtime/workspaces/developer **Error**: `Extension 'my-provider' not found or invalid` -```text +```bash # Solution: Validate extension nu workspace.nu tools 
validate-extension providers/my-provider @@ -860,7 +860,7 @@ cp -r workspace/extensions/providers/template workspace/extensions/providers/my- **Enable Debug Logging**: -```text +```bash # Set debug environment export PROVISIONING_DEBUG=true export PROVISIONING_LOG_LEVEL=debug @@ -874,7 +874,7 @@ nu workspace.nu health --detailed **Slow Operations**: -```text +```bash # Check disk space df -h workspace/ @@ -890,7 +890,7 @@ nu workspace/tools/runtime-manager.nu cache --action optimize **Corrupted Workspace**: -```text +```bash # 1. Backup current state nu workspace.nu backup --name corrupted-backup --force diff --git a/docs/src/development/distribution-process.md b/docs/src/development/distribution-process.md index dc64418..77bc2b1 100644 --- a/docs/src/development/distribution-process.md +++ b/docs/src/development/distribution-process.md @@ -37,7 +37,7 @@ automated release management. ### Distribution Components -```text +```bash Distribution Ecosystem ├── Core Components │ ├── Platform Binaries # Rust-compiled binaries @@ -59,7 +59,7 @@ Distribution Ecosystem ### Build Pipeline -```text +```bash Build Pipeline Flow ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ Source Code │ -> │ Build Stage │ -> │ Package Stage │ @@ -116,7 +116,7 @@ Build Pipeline Flow **Pre-Release Checklist**: -```text +```bash # Update dependencies and security cargo update cargo audit @@ -133,7 +133,7 @@ make validate-all **Version Planning**: -```text +```bash # Check current version git describe --tags --always @@ -148,7 +148,7 @@ nu src/tools/release/create-release.nu --dry-run --version 2.1.0 **Complete Build**: -```text +```bash # Clean build environment make clean @@ -161,7 +161,7 @@ make test-dist **Build with Specific Parameters**: -```text +```bash # Build for specific platforms make all PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete @@ -176,7 +176,7 @@ make all PARALLEL=true **Create Distribution Packages**: -```text +```bash # Generate complete distributions make 
dist-generate @@ -192,7 +192,7 @@ make create-installers **Package Validation**: -```text +```bash # Validate packages make test-dist @@ -208,7 +208,7 @@ make uninstall **Automated Release**: -```text +```bash # Create complete release make release VERSION=2.1.0 @@ -235,7 +235,7 @@ nu src/tools/release/create-release.nu **Upload Artifacts**: -```text +```bash # Upload to GitHub Releases make upload-artifacts @@ -248,7 +248,7 @@ make notify-release **Registry Updates**: -```text +```bash # Update Homebrew formula nu src/tools/release/update-registry.nu --registries homebrew @@ -266,7 +266,7 @@ nu src/tools/release/update-registry.nu **Complete Automated Release**: -```text +```bash # Full release pipeline make cd-deploy VERSION=2.1.0 @@ -294,7 +294,7 @@ make notify-release **Create Binary Packages**: -```text +```bash # Standard binary packages make package-binaries @@ -320,7 +320,7 @@ nu src/tools/package/package-binaries.nu **Container Build Process**: -```text +```bash # Build container images make package-containers @@ -363,7 +363,7 @@ nu src/tools/package/build-containers.nu **Create Installers**: -```text +```bash # Generate all installer types make create-installers @@ -411,7 +411,7 @@ nu src/tools/distribution/create-installer.nu **Cross-Compilation Setup**: -```text +```bash # Install cross-compilation targets rustup target add aarch64-unknown-linux-gnu rustup target add x86_64-apple-darwin @@ -424,7 +424,7 @@ cargo install cross **Platform-Specific Builds**: -```text +```bash # Build for specific platform make build-platform RUST_TARGET=aarch64-apple-darwin @@ -441,7 +441,7 @@ make windows **Generated Distributions**: -```text +```bash Distribution Matrix: provisioning-{version}-{platform}-{variant}.{format} @@ -466,7 +466,7 @@ Examples: **Validation Pipeline**: -```text +```bash # Complete validation make test-dist @@ -497,7 +497,7 @@ nu src/tools/build/test-distribution.nu **Test Execution**: -```text +```bash # Run all tests make ci-test @@ -511,7 
+511,7 @@ nu src/tools/build/test-distribution.nu --test-types complete **Package Integrity**: -```text +```bash # Validate package structure nu src/tools/package/validate-package.nu dist/ @@ -524,7 +524,7 @@ gpg --verify packages/provisioning-2.1.0.tar.gz.sig **Installation Testing**: -```text +```bash # Test installation process ./packages/installers/install-provisioning-2.1.0.sh --dry-run @@ -541,7 +541,7 @@ docker run --rm provisioning:2.1.0 provisioning --version **GitHub Release Integration**: -```text +```bash # Create GitHub release nu src/tools/release/create-release.nu --version 2.1.0 @@ -568,7 +568,7 @@ nu src/tools/release/create-release.nu **Version Detection**: -```text +```bash # Auto-detect next version nu src/tools/release/create-release.nu --release-type minor @@ -591,7 +591,7 @@ nu src/tools/release/create-release.nu --version 2.1.0-rc.1 --pre-release **Upload and Distribution**: -```text +```bash # Upload to GitHub Releases make upload-artifacts @@ -618,7 +618,7 @@ make update-registry **Automated Rollback**: -```text +```bash # Rollback latest release nu src/tools/release/rollback-release.nu --version 2.1.0 @@ -632,7 +632,7 @@ nu src/tools/release/rollback-release.nu **Manual Rollback Steps**: -```text +```bash # 1. 
Identify target version git tag -l | grep -v 2.1.0 | tail -5 @@ -665,7 +665,7 @@ nu src/tools/release/notify-users.nu **Rollback Testing**: -```text +```bash # Test rollback in staging nu src/tools/release/rollback-release.nu --version 2.1.0 @@ -681,7 +681,7 @@ make test-dist DIST_VERSION=2.0.5 **Critical Security Rollback**: -```text +```bash # Emergency rollback (bypasses normal procedures) nu src/tools/release/rollback-release.nu --version 2.1.0 @@ -692,7 +692,7 @@ nu src/tools/release/rollback-release.nu **Infrastructure Failure Recovery**: -```text +```bash # Failover to backup infrastructure nu src/tools/release/rollback-release.nu --infrastructure-failover @@ -706,7 +706,7 @@ nu src/tools/release/rollback-release.nu **Build Workflow** (`.github/workflows/build.yml`): -```text +```yaml name: Build and Distribute on: push: @@ -745,7 +745,7 @@ jobs: **Release Workflow** (`.github/workflows/release.yml`): -```text +```yaml name: Release on: push: @@ -777,7 +777,7 @@ jobs: **GitLab CI Configuration** (`.gitlab-ci.yml`): -```text +```yaml stages: - build - package @@ -817,7 +817,7 @@ release: **Jenkinsfile**: -```text +```groovy pipeline { agent any @@ -860,7 +860,7 @@ pipeline { **Rust Compilation Errors**: -```text +```bash # Solution: Clean and rebuild make clean cargo clean @@ -873,7 +873,7 @@ rustup update **Cross-Compilation Issues**: -```text +```bash # Solution: Install missing targets rustup target list --installed rustup target add x86_64-apple-darwin @@ -887,7 +887,7 @@ make build-platform CROSS=true **Missing Dependencies**: -```text +```bash # Solution: Install build tools sudo apt-get install build-essential brew install gnu-tar @@ -898,7 +898,7 @@ make info **Permission Errors**: -```text +```bash # Solution: Fix permissions chmod +x src/tools/build/*.nu chmod +x src/tools/distribution/*.nu @@ -909,7 +909,7 @@ chmod +x src/tools/package/*.nu **Package Integrity Issues**: -```text +```bash # Solution: Regenerate packages make clean-dist make 
package-all @@ -920,7 +920,7 @@ sha256sum packages/*.tar.gz **Installation Test Failures**: -```text +```bash # Solution: Test in clean environment docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install.sh @@ -934,7 +934,7 @@ docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install. **Network Issues**: -```text +```bash # Solution: Retry with backoff nu src/tools/release/upload-artifacts.nu --retry-count 5 @@ -946,7 +946,7 @@ gh release upload v2.1.0 packages/*.tar.gz **Authentication Failures**: -```text +```bash # Solution: Refresh tokens gh auth refresh docker login ghcr.io @@ -960,7 +960,7 @@ docker system info **Homebrew Formula Issues**: -```text +```bash # Solution: Manual PR creation git clone https://github.com/Homebrew/homebrew-core cd homebrew-core @@ -973,7 +973,7 @@ git commit -m "provisioning 2.1.0" **Debug Mode**: -```text +```bash # Enable debug logging export PROVISIONING_DEBUG=true export RUST_LOG=debug @@ -989,7 +989,7 @@ nu src/tools/distribution/generate-distribution.nu **Monitoring Build Progress**: -```text +```bash # Monitor build logs tail -f src/tools/build.log @@ -1002,4 +1002,4 @@ df -h ``` This distribution process provides a robust, automated pipeline for creating, validating, and distributing provisioning across multiple platforms -while maintaining high quality and reliability standards. +while maintaining high quality and reliability standards. \ No newline at end of file diff --git a/docs/src/development/glossary.md b/docs/src/development/glossary.md index 6639d7f..5443fd7 100644 --- a/docs/src/development/glossary.md +++ b/docs/src/development/glossary.md @@ -137,7 +137,7 @@ orchestrator). 
**Commands**: -```text +```bash provisioning batch submit workflow.ncl provisioning batch list provisioning batch status @@ -161,7 +161,7 @@ provisioning batch status **Commands**: -```text +```bash provisioning break-glass request "reason" provisioning break-glass approve ``` @@ -220,7 +220,7 @@ provisioning break-glass approve **Examples**: -```text +```bash provisioning server create provisioning taskserv install kubernetes provisioning workspace switch prod @@ -249,7 +249,7 @@ provisioning workspace switch prod **Commands**: -```text +```bash provisioning cluster create provisioning cluster list provisioning cluster delete @@ -383,7 +383,7 @@ provisioning cluster delete **Commands**: -```text +```bash provisioning status provisioning diagnostics run ``` @@ -427,7 +427,7 @@ provisioning diagnostics run **Usage**: -```text +```bash PROVISIONING_ENV=prod provisioning server list ``` @@ -492,7 +492,7 @@ PROVISIONING_ENV=prod provisioning server list **Commands**: -```text +```bash provisioning compliance gdpr export provisioning compliance gdpr delete ``` @@ -529,7 +529,7 @@ provisioning compliance gdpr delete **Commands**: -```text +```bash provisioning guide from-scratch provisioning guide update provisioning guide customize @@ -555,7 +555,7 @@ provisioning guide customize **Example**: -```text +```nickel health_check = { endpoint = "http://localhost:6443/healthz" timeout = 30 @@ -602,7 +602,7 @@ health_check = { **Commands**: -```text +```bash provisioning infra list provisioning generate infra --new ``` @@ -719,7 +719,7 @@ provisioning generate infra --new **Commands**: -```text +```bash provisioning taskserv create kubernetes provisioning test quick kubernetes ``` @@ -778,7 +778,7 @@ provisioning test quick kubernetes **Commands**: -```text +```bash provisioning mfa totp enroll provisioning mfa webauthn enroll provisioning mfa verify @@ -818,7 +818,7 @@ provisioning mfa verify **Commands**: -```text +```bash provisioning module discover provider provisioning 
module load provider provisioning module list taskserv @@ -896,7 +896,7 @@ provisioning module list taskserv **Commands**: -```text +```bash cd provisioning/platform/orchestrator ./scripts/start-orchestrator.nu --background ``` @@ -953,7 +953,7 @@ cd provisioning/platform/orchestrator **Commands**: -```text +```bash provisioning plugin list provisioning plugin install ``` @@ -980,7 +980,7 @@ provisioning plugin install **Commands**: -```text +```bash provisioning module discover provider provisioning providers list ``` @@ -1005,7 +1005,7 @@ provisioning providers list **Commands**: -```text +```bash provisioning sc # Fastest provisioning guide quickstart ``` @@ -1080,7 +1080,7 @@ provisioning guide quickstart **Commands**: -```text +```bash provisioning batch rollback ``` @@ -1118,7 +1118,7 @@ provisioning batch rollback **Example**: -```text +```nickel let ServerConfig = { hostname | string, cores | number, @@ -1177,7 +1177,7 @@ ServerConfig **Commands**: -```text +```bash provisioning server create provisioning server list provisioning server ssh @@ -1241,7 +1241,7 @@ provisioning server ssh **Commands**: -```text +```bash provisioning sops edit ``` @@ -1261,7 +1261,7 @@ provisioning sops edit **Commands**: -```text +```bash provisioning server ssh provisioning ssh connect ``` @@ -1316,7 +1316,7 @@ provisioning ssh connect **Commands**: -```text +```bash provisioning taskserv create provisioning taskserv list provisioning test quick @@ -1356,7 +1356,7 @@ provisioning test quick **Commands**: -```text +```bash provisioning test quick provisioning test env single provisioning test env cluster @@ -1396,7 +1396,7 @@ provisioning test env cluster **Commands**: -```text +```bash provisioning mfa totp enroll provisioning mfa totp verify ``` @@ -1449,7 +1449,7 @@ provisioning mfa totp verify **Commands**: -```text +```bash provisioning version check provisioning version apply ``` @@ -1474,7 +1474,7 @@ provisioning version apply **Commands**: -```text +```bash 
provisioning validate config provisioning validate infrastructure ``` @@ -1497,7 +1497,7 @@ provisioning validate infrastructure **Commands**: -```text +```bash provisioning version provisioning version check provisioning taskserv check-updates @@ -1521,7 +1521,7 @@ provisioning taskserv check-updates **Commands**: -```text +```bash provisioning mfa webauthn enroll provisioning mfa webauthn verify ``` @@ -1542,7 +1542,7 @@ provisioning mfa webauthn verify **Commands**: -```text +```bash provisioning workflow list provisioning workflow status provisioning workflow monitor @@ -1568,7 +1568,7 @@ provisioning workflow monitor **Commands**: -```text +```bash provisioning workspace list provisioning workspace switch provisioning workspace create diff --git a/docs/src/development/implementation-guide.md b/docs/src/development/implementation-guide.md index f22c947..d905e7f 100644 --- a/docs/src/development/implementation-guide.md +++ b/docs/src/development/implementation-guide.md @@ -43,7 +43,7 @@ specific commands, validation steps, and rollback procedures. 
#### Step 1.1: Create Complete Backup -```text +```bash # Create timestamped backup BACKUP_DIR="/Users/Akasha/project-provisioning-backup-$(date +%Y%m%d)" cp -r /Users/Akasha/project-provisioning "$BACKUP_DIR" @@ -59,7 +59,7 @@ echo "✅ Backup created: $BACKUP_DIR" #### Step 1.2: Analyze Current State -```text +```bash cd /Users/Akasha/project-provisioning # Count workspace directories @@ -96,7 +96,7 @@ echo "✅ Analysis complete: docs/development/current-state-analysis.txt" #### Step 1.3: Identify Dependencies -```text +```bash # Find all hardcoded paths echo "=== Hardcoded Paths in Nushell Scripts ===" rg -t nu "workspace/|_workspace/|backup-workspace/" provisioning/core/nulib/ | tee hardcoded-paths.txt @@ -114,7 +114,7 @@ echo "✅ Dependencies mapped" #### Step 1.4: Create Implementation Branch -```text +```bash # Create and switch to implementation branch git checkout -b feat/repo-restructure @@ -138,7 +138,7 @@ echo "✅ Implementation branch created: feat/repo-restructure" #### Step 2.1: Create New Directory Structure -```text +```bash cd /Users/Akasha/project-provisioning # Create distribution directory structure @@ -156,7 +156,7 @@ tree -L 2 distribution/ workspace/ #### Step 2.2: Move Build Artifacts -```text +```bash # Move Rust build artifacts if [ -d "target" ]; then mv target distribution/target @@ -178,7 +178,7 @@ done #### Step 2.3: Consolidate Workspaces -```text +```bash # Identify active workspace echo "=== Current Workspace Status ===" ls -la workspace/ _workspace/ backup-workspace/ 2>/dev/null @@ -221,7 +221,7 @@ echo "✅ Workspaces consolidated" #### Step 2.4: Remove Obsolete Directories -```text +```bash # Remove build artifacts (already moved) rm -rf wrks/ echo "✅ Removed wrks/" @@ -248,7 +248,7 @@ echo "✅ Cleanup complete" #### Step 2.5: Update .gitignore -```text +```bash # Backup existing .gitignore cp .gitignore .gitignore.backup @@ -318,7 +318,7 @@ echo "✅ Updated .gitignore" #### Step 2.6: Commit Restructuring -```text +```bash # Stage 
changes git add -A @@ -355,7 +355,7 @@ echo "✅ Restructuring committed" #### Step 3.1: Create Path Update Script -```text +```bash # Create migration script cat > provisioning/tools/migration/update-paths.nu << 'EOF' #!/usr/bin/env nu @@ -407,7 +407,7 @@ chmod +x provisioning/tools/migration/update-paths.nu #### Step 3.2: Run Path Updates -```text +```bash # Create backup before updates git stash git checkout -b feat/path-updates @@ -424,7 +424,7 @@ nu -c "use provisioning/core/nulib/servers/create.nu; print 'OK'" #### Step 3.3: Update CLAUDE.md -```text +```bash # Update CLAUDE.md with new paths cat > CLAUDE.md.new << 'EOF' # CLAUDE.md @@ -461,7 +461,7 @@ mv CLAUDE.md.new CLAUDE.md #### Step 3.4: Update Documentation -```text +```bash # Find all documentation files fd -e md . docs/ @@ -478,7 +478,7 @@ echo "Files listed in: docs-to-update.txt" #### Step 3.5: Commit Path Updates -```text +```bash git add -A git commit -m "refactor: update all path references for new structure @@ -505,7 +505,7 @@ echo "✅ Path updates committed" #### Step 4.1: Automated Validation -```text +```bash # Create validation script cat > provisioning/tools/validation/validate-structure.nu << 'EOF' #!/usr/bin/env nu @@ -592,7 +592,7 @@ nu provisioning/tools/validation/validate-structure.nu #### Step 4.2: Functional Testing -```text +```bash # Test core commands echo "=== Testing Core Commands ===" @@ -621,7 +621,7 @@ echo "✅ Functional tests passed" #### Step 4.3: Integration Testing -```text +```bash # Test workflow system echo "=== Testing Workflow System ===" @@ -641,7 +641,7 @@ echo "✅ Integration tests passed" #### Step 4.4: Create Test Report -```text +```bash { echo "# Repository Restructuring - Validation Report" echo "Date: $(date)" @@ -669,7 +669,7 @@ echo "✅ Test report created: docs/development/phase1-validation-report.md" #### Step 4.5: Update README -```text +```bash # Update main README with new structure # This is manual - review and update README.md @@ -681,7 +681,7 @@ echo
" - Update quick start guide" #### Step 4.6: Finalize Phase 1 -```text +```bash # Commit validation and reports git add -A git commit -m "test: add validation for repository restructuring @@ -718,7 +718,7 @@ echo "✅ Phase 1 complete and merged" #### Step 5.1: Create Build Tools Directory -```text +```bash mkdir -p provisioning/tools/build cd provisioning/tools/build @@ -730,7 +730,7 @@ echo "✅ Build tools directory created" #### Step 5.2: Implement Core Build System -```text +```bash # Create main build orchestrator # See full implementation in repo-dist-analysis.md # Copy build-system.nu from the analysis document @@ -741,7 +741,7 @@ nu build-system.nu status #### Step 5.3: Implement Core Packaging -```text +```bash # Create package-core.nu # This packages Nushell libraries, KCL schemas, templates @@ -751,7 +751,7 @@ nu build-system.nu build-core --version dev #### Step 5.4: Create Justfile -```text +```bash # Create Justfile in project root # See full Justfile in repo-dist-analysis.md @@ -779,7 +779,7 @@ just status #### Step 9.1: Create install.nu -```text +```nushell mkdir -p distribution/installers # Create install.nu @@ -788,7 +788,7 @@ mkdir -p distribution/installers #### Step 9.2: Test Installation -```text +```bash # Test installation to /tmp nu distribution/installers/install.nu --prefix /tmp/provisioning-test @@ -812,7 +812,7 @@ nu distribution/installers/install.nu uninstall --prefix /tmp/provisioning-test ### If Phase 1 Fails -```text +```bash # Restore from backup rm -rf /Users/Akasha/project-provisioning cp -r "$BACKUP_DIR" /Users/Akasha/project-provisioning @@ -825,7 +825,7 @@ git branch -D feat/repo-restructure ### If Build System Fails -```text +```bash # Revert build system commits git checkout feat/repo-restructure git revert @@ -833,7 +833,7 @@ git revert ### If Installation Fails -```text +```bash # Clean up test installation rm -rf /tmp/provisioning-test sudo rm -rf /usr/local/lib/provisioning diff --git 
a/docs/src/development/infrastructure-specific-extensions.md b/docs/src/development/infrastructure-specific-extensions.md index 56727a0..dbb3c49 100644 --- a/docs/src/development/infrastructure-specific-extensions.md +++ b/docs/src/development/infrastructure-specific-extensions.md @@ -31,7 +31,7 @@ Before creating custom extensions, assess your infrastructure requirements: #### 1. Application Inventory -```text +```bash # Document existing applications cat > infrastructure-assessment.yaml << EOF applications: @@ -75,7 +75,7 @@ EOF #### 2. Gap Analysis -```text +```bash # Analyze what standard modules don't cover ./provisioning/core/cli/module-loader discover taskservs > available-modules.txt @@ -107,7 +107,7 @@ EOF #### Business Requirements Template -```text +```kcl """ Business Requirements Schema for Custom Extensions Use this template to document requirements before development @@ -179,7 +179,7 @@ schema Integration: #### Example: Legacy ERP System Integration -```text +```bash # Create company-specific taskserv mkdir -p extensions/taskservs/company-specific/legacy-erp/nickel cd extensions/taskservs/company-specific/legacy-erp/nickel @@ -187,7 +187,7 @@ cd extensions/taskservs/company-specific/legacy-erp/nickel Create `legacy-erp.ncl`: -```text +```nickel """ Legacy ERP System Taskserv Handles deployment and management of company's legacy ERP system @@ -437,7 +437,7 @@ legacy_erp_default: LegacyERPTaskserv = { Create `compliance-monitor.ncl`: -```text +```nickel """ Compliance Monitoring Taskserv Automated compliance checking and reporting for regulated environments @@ -607,7 +607,7 @@ compliance_monitor_default: ComplianceMonitorTaskserv = { When working with specialized or private cloud providers: -```text +```bash # Create custom provider extension mkdir -p extensions/providers/company-private-cloud/nickel cd extensions/providers/company-private-cloud/nickel @@ -615,7 +615,7 @@ cd extensions/providers/company-private-cloud/nickel Create
`provision_company-private-cloud.ncl`: -```text +```nickel """ Company Private Cloud Provider Integration with company's private cloud infrastructure @@ -762,7 +762,7 @@ company_private_cloud_defaults: defaults.ServerDefaults = { Create environment-specific extensions that handle different deployment patterns: -```text +```bash # Create environment management extension mkdir -p extensions/clusters/company-environments/nickel cd extensions/clusters/company-environments/nickel @@ -770,7 +770,7 @@ cd extensions/clusters/company-environments/nickel Create `company-environments.ncl`: -```text +```nickel """ Company Environment Management Standardized environment configurations for different deployment stages @@ -950,7 +950,7 @@ environment_templates = { Create integration patterns for common legacy system scenarios: -```text +```bash # Create integration patterns mkdir -p extensions/taskservs/integrations/legacy-bridge/nickel cd extensions/taskservs/integrations/legacy-bridge/nickel @@ -958,7 +958,7 @@ cd extensions/taskservs/integrations/legacy-bridge/nickel Create `legacy-bridge.ncl`: -```text +```nickel """ Legacy System Integration Bridge Provides standardized integration patterns for legacy systems @@ -1161,21 +1161,21 @@ legacy_bridge_dependencies: deps.TaskservDependencies = { ### Example 1: Financial Services Company -```text +```bash # Financial services specific extensions mkdir -p extensions/taskservs/financial-services/{trading-system,risk-engine,compliance-reporter}/nickel ``` ### Example 2: Healthcare Organization -```text +```bash # Healthcare specific extensions mkdir -p extensions/taskservs/healthcare/{hl7-processor,dicom-storage,hipaa-audit}/nickel ``` ### Example 3: Manufacturing Company -```text +```bash # Manufacturing specific extensions mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-system}/nickel ``` @@ -1184,7 +1184,7 @@ mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-sy #### Loading 
Infrastructure-Specific Extensions -```text +```bash # Load company-specific extensions cd workspace/infra/production module-loader load taskservs . [legacy-erp, compliance-monitor, legacy-bridge] @@ -1198,7 +1198,7 @@ module-loader validate . #### Using in Server Configuration -```text +```nickel # Import loaded extensions import .taskservs.legacy-erp.legacy-erp as erp import .taskservs.compliance-monitor.compliance-monitor as compliance diff --git a/docs/src/development/integration.md b/docs/src/development/integration.md index edab5b8..65b001a 100644 --- a/docs/src/development/integration.md +++ b/docs/src/development/integration.md @@ -30,7 +30,7 @@ existing production systems while providing clear migration pathways. **Integration Architecture**: -```text +```bash Integration Ecosystem ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ Legacy Core │ ←→ │ Bridge Layer │ ←→ │ New Systems │ @@ -48,7 +48,7 @@ Integration Ecosystem **Seamless CLI Compatibility**: -```text +```bash # All existing commands continue to work unchanged ./core/nulib/provisioning server create web-01 2xCPU-4 GB ./core/nulib/provisioning taskserv install kubernetes @@ -61,7 +61,7 @@ nu workspace/tools/workspace.nu health --detailed **Path Resolution Integration**: -```text +```nushell # Automatic path resolution between systems use workspace/lib/path-resolver.nu @@ -76,7 +76,7 @@ let provider_path = (path-resolver resolve_extension "providers" "upcloud") **Dual Configuration Support**: -```text +```nushell # Configuration bridge supports both ENV and TOML def get-config-value-bridge [key: string, default: string = ""] -> string { # Try new TOML configuration first @@ -113,7 +113,7 @@ def get-config-value-bridge [key: string, default: string = ""] -> string { **Shared Data Access**: -```text +```nushell # Unified data access across old and new systems def get-server-info [server_name: string] -> record { # Try new orchestrator data store first @@ -142,7 +142,7 @@ def get-server-info
[server_name: string] -> record { **Hybrid Process Management**: -```text +```bash # Orchestrator-aware process management def create-server-integrated [ name: string, @@ -179,7 +179,7 @@ def check-orchestrator-available [] -> bool { **Version Header Support**: -```text +```bash # API calls with version specification curl -H "API-Version: v1" http://localhost:9090/servers curl -H "API-Version: v2" http://localhost:9090/workflows/servers/create @@ -190,7 +190,7 @@ curl -H "API-Version: v3" http://localhost:9090/workflows/batch/submit **Backward Compatible Endpoints**: -```text +```bash // Rust API compatibility layer #[derive(Debug, Serialize, Deserialize)] struct ApiRequest { @@ -233,7 +233,7 @@ async fn handle_v1_request(payload: serde_json::Value) -> Result record { # Initialize SurrealDB connection let db = (connect-surrealdb) @@ -420,7 +420,7 @@ def migrate_filesystem_to_surrealdb [] -> record { **Migration Verification**: -```text +```python def verify-migration [from: string, to: string] -> record { print "Verifying data integrity..." 
@@ -466,7 +466,7 @@ def verify-migration [from: string, to: string] -> record { **Hybrid Deployment Model**: -```text +```bash Deployment Architecture ┌─────────────────────────────────────────────────────────────────┐ │ Load Balancer / Reverse Proxy │ @@ -488,7 +488,7 @@ Deployment Architecture **Blue-Green Deployment**: -```text +```bash # Blue-Green deployment with integration bridge # Phase 1: Deploy new system alongside existing (Green environment) cd src/tools @@ -519,7 +519,7 @@ nginx-traffic-split --new-backend 100% **Rolling Update**: -```text +```nushell def rolling-deployment [ --target-version: string, --batch-size: int = 3, @@ -576,7 +576,7 @@ def rolling-deployment [ **Environment-Specific Deployment**: -```text +```bash # Development deployment PROVISIONING_ENV=dev ./deploy.sh --config-source config.dev.toml @@ -602,7 +602,7 @@ PROVISIONING_ENV=prod ./deploy.sh **Docker Deployment with Bridge**: -```text +```dockerfile # Multi-stage Docker build supporting both systems FROM rust:1.70 as builder WORKDIR /app @@ -630,7 +630,7 @@ CMD ["/app/bin/bridge-start.sh"] **Kubernetes Integration**: -```text +```yaml # Kubernetes deployment with bridge sidecar apiVersion: apps/v1 kind: Deployment @@ -678,7 +678,7 @@ spec: **Monitoring Stack Integration**: -```text +```bash Observability Architecture ┌─────────────────────────────────────────────────────────────────┐ │ Monitoring Dashboard │ @@ -714,7 +714,7 @@ Observability Architecture **Unified Metrics Collection**: -```text +```nushell # Metrics bridge for legacy and new systems def collect-system-metrics [] -> record { let legacy_metrics = collect-legacy-metrics @@ -770,7 +770,7 @@ def collect-new-metrics [] -> record { **Unified Logging Strategy**: -```text +```nushell # Structured logging bridge def log-integrated [ level: string, @@ -805,7 +805,7 @@ def log-integrated [ **Comprehensive Health Monitoring**: -```text +```nushell def health-check-integrated [] -> record { let health_checks = [ {name: "legacy-system", check:
(check-legacy-health)}, @@ -844,7 +844,7 @@ def health-check-integrated [] -> record { **Bridge Component Design**: -```text +```nushell # Legacy system bridge module export module bridge { # Bridge state management @@ -905,7 +905,7 @@ export module bridge { **Compatibility Mode**: -```text +```nushell # Full compatibility with legacy system def run-compatibility-mode [] { print "Starting bridge in compatibility mode..." @@ -931,7 +931,7 @@ def run-compatibility-mode [] { **Migration Mode**: -```text +```nushell # Gradual migration with traffic splitting def run-migration-mode [ --new-system-percentage: int = 50 @@ -986,7 +986,7 @@ def run-migration-mode [ **Automated Migration Orchestration**: -```text +```nushell def execute-migration-plan [ migration_plan: string, --dry-run: bool = false, @@ -1041,7 +1041,7 @@ def execute-migration-plan [ **Migration Validation**: -```text +```nushell def validate-migration-readiness [] -> record { let checks = [ {name: "backup-available", check: (check-backup-exists)}, @@ -1079,7 +1079,7 @@ def validate-migration-readiness [] -> record { **Problem**: Version mismatch between client and server -```text +```bash # Diagnosis curl -H "API-Version: v1" http://localhost:9090/health curl -H "API-Version: v2" http://localhost:9090/health @@ -1095,7 +1095,7 @@ export PROVISIONING_API_VERSION=v2 **Problem**: Configuration not found in either system -```text +```nushell # Diagnosis def diagnose-config-issue [key: string] -> record { let toml_result = try { @@ -1131,7 +1131,7 @@ def migrate-single-config [key: string] { **Problem**: Data inconsistency between systems -```text +```nushell # Diagnosis and repair def repair-data-consistency [] -> record { let legacy_data = (read-legacy-data) @@ -1166,7 +1166,7 @@ def repair-data-consistency [] -> record { **Integration Debug Mode**: -```text +```bash # Enable comprehensive debugging export PROVISIONING_DEBUG=true export PROVISIONING_LOG_LEVEL=debug @@ -1179,7 +1179,7 @@ provisioning server create test-server
2xCPU-4 GB --debug-integration **Health Check Debugging**: -```text +```nushell def debug-integration-health [] -> record { print "=== Integration Health Debug ===" diff --git a/docs/src/development/kms-simplification.md b/docs/src/development/kms-simplification.md index 0290050..91d6f7f 100644 --- a/docs/src/development/kms-simplification.md +++ b/docs/src/development/kms-simplification.md @@ -66,7 +66,7 @@ If you were using **Vault** or **AWS KMS** for development: #### Step 1: Install Age -```text +```bash # macOS brew install age @@ -79,7 +79,7 @@ go install filippo.io/age/cmd/...@latest #### Step 2: Generate Age Keys -```text +```bash mkdir -p ~/.config/provisioning/age age-keygen -o ~/.config/provisioning/age/private_key.txt age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt @@ -91,7 +91,7 @@ Replace your old Vault/AWS config: **Old (Vault)**: -```text +```toml [kms] type = "vault" address = "http://localhost:8200" @@ -101,7 +101,7 @@ mount_point = "transit" **New (Age)**: -```text +```toml [kms] environment = "dev" @@ -112,7 +112,7 @@ private_key_path = "~/.config/provisioning/age/private_key.txt" #### Step 4: Re-encrypt Development Secrets -```text +```bash # Export old secrets (if using Vault) vault kv get -format=json secret/dev > dev-secrets.json @@ -133,7 +133,7 @@ Choose one of these options: **Option A: Cosmian Cloud (Managed)** -```text +```bash # Sign up at https://cosmian.com # Get API credentials export COSMIAN_KMS_URL=https://kms.cosmian.cloud @@ -142,7 +142,7 @@ export COSMIAN_API_KEY=your-api-key **Option B: Self-Hosted Cosmian KMS** -```text +```bash # Deploy Cosmian KMS server # See: https://docs.cosmian.com/kms/deployment/ @@ -153,7 +153,7 @@ export COSMIAN_API_KEY=your-api-key #### Step 2: Create Master Key in Cosmian -```text +```bash # Using Cosmian CLI cosmian-kms create-key --algorithm AES @@ -175,7 +175,7 @@ curl -X POST $COSMIAN_KMS_URL/api/v1/keys **From Vault to Cosmian**: -```text
+```bash # Export secrets from Vault vault kv get -format=json secret/prod > prod-secrets.json @@ -197,7 +197,7 @@ cat prod-secrets.enc | **From AWS KMS to Cosmian**: -```text +```bash # Decrypt with AWS KMS aws kms decrypt --ciphertext-blob fileb://encrypted-data @@ -216,7 +216,7 @@ curl -X POST $COSMIAN_KMS_URL/api/v1/encrypt **Old (AWS KMS)**: -```text +```toml [kms] type = "aws-kms" region = "us-east-1" @@ -225,7 +225,7 @@ key_id = "arn:aws:kms:us-east-1:123456789012:key/..." **New (Cosmian)**: -```text +```toml [kms] environment = "prod" @@ -239,7 +239,7 @@ use_confidential_computing = false # Enable if using SGX/SEV #### Step 5: Test Production Setup -```text +```bash # Set environment export PROVISIONING_ENV=prod export COSMIAN_KMS_URL=https://kms.example.com @@ -263,7 +263,7 @@ curl -X POST http://localhost:8082/api/v1/kms/decrypt ### Before (4 Backends) -```text +```toml # Development could use any backend [kms] type = "vault" # or "aws-kms" @@ -279,7 +279,7 @@ key_id = "arn:aws:kms:..." ### After (2 Backends) -```text +```toml # Clear environment-based selection [kms] dev_backend = "age" @@ -314,14 +314,14 @@ tls_verify = true **Before**: -```text +```rust KmsError::VaultError(String) KmsError::AwsKmsError(String) ``` **After**: -```text +```rust KmsError::AgeError(String) KmsError::CosmianError(String) ``` @@ -330,7 +330,7 @@ KmsError::CosmianError(String) **Before**: -```text +```rust enum KmsBackendConfig { Vault { address, token, mount_point, ...
}, AwsKms { region, key_id, assume_role }, @@ -339,7 +339,7 @@ enum KmsBackendConfig { **After**: -```text +```rust enum KmsBackendConfig { Age { public_key_path, private_key_path }, Cosmian { server_url, api_key, default_key_id, tls_verify }, @@ -352,7 +352,7 @@ enum KmsBackendConfig { **Before (AWS KMS)**: -```text +```rust use kms_service::{KmsService, KmsBackendConfig}; let config = KmsBackendConfig::AwsKms { @@ -366,7 +366,7 @@ let kms = KmsService::new(config).await?; **After (Cosmian)**: -```text +```rust use kms_service::{KmsService, KmsBackendConfig}; let config = KmsBackendConfig::Cosmian { @@ -383,7 +383,7 @@ let kms = KmsService::new(config).await?; **Before (Vault)**: -```text +```nushell # Set Vault environment $env.VAULT_ADDR = "http://localhost:8200" $env.VAULT_TOKEN = "root" @@ -394,7 +394,7 @@ kms encrypt "secret-data" **After (Age for dev)**: -```text +```nushell # Set environment $env.PROVISIONING_ENV = "dev" @@ -406,7 +406,7 @@ kms encrypt "secret-data" If you need to rollback to Vault/AWS KMS: -```text +```bash # Checkout previous version git checkout tags/v0.1.0 @@ -423,7 +423,7 @@ cp provisioning/config/kms.toml.backup provisioning/config/kms.toml ### Development Testing -```text +```bash # 1. Generate Age keys age-keygen -o /tmp/test_private.txt age-keygen -y /tmp/test_private.txt > /tmp/test_public.txt @@ -442,7 +442,7 @@ cargo run --bin kms-service ### Production Testing -```text +```bash # 1.
Set up test Cosmian instance export COSMIAN_KMS_URL=https://kms-staging.example.com export COSMIAN_API_KEY=test-api-key @@ -464,7 +464,7 @@ cargo run --bin kms-service ### Age Keys Not Found -```text +```bash # Check keys exist ls -la ~/.config/provisioning/age/ @@ -475,7 +475,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisionin ### Cosmian Connection Failed -```text +```bash # Check network connectivity curl -v $COSMIAN_KMS_URL/api/v1/health @@ -489,7 +489,7 @@ openssl s_client -connect kms.example.com:443 ### Compilation Errors -```text +```bash # Clean and rebuild cd provisioning/platform/kms-service cargo clean diff --git a/docs/src/development/mcp-server.md b/docs/src/development/mcp-server.md index aa2b07f..7122c4a 100644 --- a/docs/src/development/mcp-server.md +++ b/docs/src/development/mcp-server.md @@ -11,7 +11,7 @@ Replaces the Python implementation with significant performance improvements whi ## Performance Results -```text +```bash 🚀 Rust MCP Server Performance Analysis ================================================== @@ -35,7 +35,7 @@ Replaces the Python implementation with significant performance improvements whi ## Architecture -```text +```bash src/ ├── simple_main.rs # Lightweight MCP server entry point ├── main.rs # Full MCP server (with SDK integration) @@ -67,7 +67,7 @@ src/ ## Usage -```text +```bash # Build and run cargo run --bin provisioning-mcp-server --release @@ -85,7 +85,7 @@ cargo run --bin provisioning-mcp-server --release Set via environment variables: -```text +```bash export PROVISIONING_PATH=/path/to/provisioning export PROVISIONING_AI_PROVIDER=openai export OPENAI_API_KEY=your-key diff --git a/docs/src/development/project-structure.md b/docs/src/development/project-structure.md index bce52a8..028a9c2 100644 --- a/docs/src/development/project-structure.md +++ b/docs/src/development/project-structure.md @@ -27,7 +27,7 @@ This reorganization enables efficient development workflows while
maintaining fu ### New Development Structure (`/src/`) -```text +```bash src/ ├── config/ # System configuration ├── control-center/ # Control center application @@ -47,7 +47,7 @@ src/ ### Legacy Structure (Preserved) -```text +```bash repo-cnz/ ├── cluster/ # Cluster configurations (preserved) ├── core/ # Core system (preserved) @@ -62,7 +62,7 @@ repo-cnz/ ### Development Workspace (`/workspace/`) -```text +```bash workspace/ ├── config/ # Development configuration ├── extensions/ # Extension development @@ -92,7 +92,7 @@ workspace/ **Key Components**: -```text +```bash tools/ ├── build/ # Build tools │ ├── compile-platform.nu # Platform-specific compilation @@ -163,20 +163,20 @@ The workspace provides a sophisticated development environment: **Initialization**: -```text +```bash cd workspace/tools nu workspace.nu init --user-name developer --infra-name my-infra ``` **Health Monitoring**: -```text +```nushell nu workspace.nu health --detailed --fix-issues ``` **Path Resolution**: -```text +```nushell use lib/path-resolver.nu let config = (path-resolver resolve_config "user" --workspace-user "john") ``` @@ -232,7 +232,7 @@ The workspace implements a sophisticated configuration cascade: **Core System Entry Points**: -```text +```bash # Main CLI (development version) /src/core/nulib/provisioning @@ -245,7 +245,7 @@ The workspace implements a sophisticated configuration cascade: **Build System**: -```text +```bash # Main build system cd /src/tools && make help @@ -258,7 +258,7 @@ make all **Configuration Files**: -```text +```toml # System defaults /config.defaults.toml @@ -271,7 +271,7 @@ make all **Extension Development**: -```text +```bash # Provider template /workspace/extensions/providers/template/ @@ -286,7 +286,7 @@ make all **1. Development Setup**: -```text +```bash # Initialize workspace cd workspace/tools nu workspace.nu init --user-name $USER @@ -297,7 +297,7 @@ nu workspace.nu health --detailed **2.
Building Distribution**: -```text +```bash # Complete build cd src/tools make all @@ -310,7 +310,7 @@ make windows **3. Extension Development**: -```text +```bash # Create new provider cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider @@ -322,7 +322,7 @@ nu workspace/extensions/providers/my-provider/nulib/provider.nu test **Existing Commands Still Work**: -```text +```bash # All existing commands preserved ./core/nulib/provisioning server create ./core/nulib/provisioning taskserv install kubernetes diff --git a/docs/src/development/providers/provider-agnostic-architecture.md b/docs/src/development/providers/provider-agnostic-architecture.md index c5b6273..8c991e6 100644 --- a/docs/src/development/providers/provider-agnostic-architecture.md +++ b/docs/src/development/providers/provider-agnostic-architecture.md @@ -15,7 +15,7 @@ backup) Defines the contract that all providers must implement: -```text +```bash # Standard interface functions - query_servers - server_info @@ -38,7 +38,7 @@ Defines the contract that all providers must implement: Manages provider discovery and registration: -```text +```bash # Initialize registry init-provider-registry @@ -60,7 +60,7 @@ is-provider-available "aws" Handles dynamic provider loading and validation: -```text +```bash # Load provider dynamically load-provider "aws" @@ -82,7 +82,7 @@ call-provider-function "aws" "query_servers" $find $cols Each provider implements a standard adapter: -```text +```bash provisioning/extensions/providers/ ├── aws/provider.nu # AWS adapter ├── upcloud/provider.nu # UpCloud adapter @@ -92,7 +92,7 @@ provisioning/extensions/providers/ **Adapter Structure:** -```text +```nushell # AWS Provider Adapter export def query_servers [find?: string, cols?: string] { aws_query_servers $find $cols @@ -107,7 +107,7 @@ export def create_server [settings: record, server: record, check: bool, wait: b The new middleware that uses dynamic dispatch: -```text +```nushell # No hardcoded
imports! export def mw_query_servers [settings: record, find?: string, cols?: string] { $settings.data.servers | each { |server| @@ -121,7 +121,7 @@ export def mw_query_servers [settings: record, find?: string, cols?: string] { ### Example: Mixed Provider Infrastructure -```text +```nickel let servers = [ { hostname = "compute-01", @@ -144,7 +144,7 @@ servers ### Multi-Provider Deployment -```text +```nushell # Deploy across multiple providers automatically mw_deploy_multi_provider_infra $settings $deployment_plan @@ -160,7 +160,7 @@ mw_suggest_deployment_strategy { Providers declare their capabilities: -```text +```bash capabilities: { server_management: true network_management: true @@ -177,7 +177,7 @@ capabilities: { **Before (hardcoded):** -```text +```nushell # middleware.nu use ../aws/nulib/aws/servers.nu * use ../upcloud/nulib/upcloud/servers.nu * @@ -190,7 +190,7 @@ match $server.provider { **After (provider-agnostic):** -```text +```nushell # middleware_provider_agnostic.nu # No hardcoded imports! @@ -224,7 +224,7 @@ dispatch_provider_function $server.provider "query_servers" $find $cols Create `provisioning/extensions/providers/{name}/provider.nu`: -```text +```nushell # Digital Ocean Provider Example export def get-provider-metadata [] { { @@ -255,7 +255,7 @@ The registry will automatically discover the new provider on next initialization ### 3.
Test New Provider -```text +```bash # Check if discovered is-provider-available "digitalocean" @@ -283,7 +283,7 @@ check-provider-health "digitalocean" ### Profile-Based Security -```text +```bash # Environment profiles can restrict providers PROVISIONING_PROFILE=production # Only allows certified providers PROVISIONING_PROFILE=development # Allows all providers including local @@ -310,7 +310,7 @@ PROVISIONING_PROFILE=development # Allows all providers including local ### Debug Commands -```text +```bash # Registry diagnostics get-provider-stats list-providers --verbose @@ -341,7 +341,7 @@ get-loader-stats See the interface specification for complete function documentation: -```text +```bash get-provider-interface-docs | table ``` diff --git a/docs/src/development/providers/provider-comparison.md b/docs/src/development/providers/provider-comparison.md index 99d02de..8866caf 100644 --- a/docs/src/development/providers/provider-comparison.md +++ b/docs/src/development/providers/provider-comparison.md @@ -374,7 +374,7 @@ Outbound data transfer (per GB): Use this matrix to quickly select a provider: -```text +```bash If you need: Then use: ───────────────────────────────────────────────────────────── Lowest cost compute Hetzner diff --git a/docs/src/development/providers/provider-development-guide.md b/docs/src/development/providers/provider-development-guide.md index 28febc0..9842afc 100644 --- a/docs/src/development/providers/provider-development-guide.md +++ b/docs/src/development/providers/provider-development-guide.md @@ -19,7 +19,7 @@ A cloud provider is **production-ready** when it completes all 4 tasks: ### Execution Sequence -```text +```bash Tarea 4 (5 min) ──────┐ Tarea 1 (main) ───┐ ├──> Tarea 2 (tests) Tarea 3 (parallel)┘ │ @@ -33,19 +33,19 @@ Tarea 3 (parallel)┘ │ These rules are **mandatory** for all provider Nushell code: ### Rule 1: Module System & Imports -```text +```nushell use mod.nu use api.nu use servers.nu ``` ### Rule 2: Function Signatures 
-```text +```nushell def function_name [param: type, optional: type = default] { } ``` ### Rule 3: Return Early, Fail Fast -```text +```nushell def operation [resource: record] { if ($resource | get -o id | is-empty) { error make {msg: "Resource ID required"} @@ -56,7 +56,7 @@ def operation [resource: record] { ### Rule 4: Modern Error Handling (CRITICAL) **❌ FORBIDDEN** - Deprecated try-catch: -```text +```nushell try { ^external_command } catch {|err| @@ -65,7 +65,7 @@ try { ``` **✅ REQUIRED** - Modern do/complete pattern: -```text +```nushell let result = (do { ^external_command } | complete) if $result.exit_code != 0 { @@ -79,7 +79,7 @@ $result.stdout All operations must fully succeed or fully fail. No partial state changes. ### Rule 12: Structured Error Returns -```text +```nushell error make { msg: "Human-readable message", label: {text: "Error context", span: (metadata error).span} @@ -103,7 +103,7 @@ All Nickel schemas follow this pattern: ### contracts.ncl: Type Definitions -```text +```nickel { Server = { id | String, @@ -123,7 +123,7 @@ All Nickel schemas follow this pattern: ### defaults.ncl: Default Values -```text +```nickel { Server = { instance_type = "t3.micro", @@ -139,7 +139,7 @@ All Nickel schemas follow this pattern: ### main.ncl: Public API -```text +```nickel let contracts = import "contracts.ncl" in let defaults = import "defaults.ncl" in @@ -151,7 +151,7 @@ let defaults = import "defaults.ncl" in ### version.ncl: Version Tracking -```text +```nickel { provider_version = "1.0.0", cli_tools = { @@ -162,7 +162,7 @@ let defaults = import "defaults.ncl" in ``` **Validation**: -```text +```bash nickel typecheck nickel/contracts.ncl nickel typecheck nickel/defaults.ncl nickel typecheck nickel/main.ncl @@ -176,7 +176,7 @@ nickel export nickel/main.ncl ### Identify Violations -```text +```bash cd provisioning/extensions/providers/{PROVIDER} grep -r "try {" nulib/ --include="*.nu" | wc -l @@ -188,7 +188,7 @@ All three commands should return `0`.
### Fix Mutable Loops: Accumulation Pattern -```text +```nushell def retry_with_backoff [ closure: closure, max_attempts: int @@ -226,7 +226,7 @@ def retry_with_backoff [ ### Fix Mutable Loops: Recursive Pattern -```text +```nushell def _wait_for_state [ resource_id: string, target_state: string, @@ -252,7 +252,7 @@ def _wait_for_state [ ### Fix Error Handling -```text +```nushell def create_server [config: record] { if ($config | get -o name | is-empty) { error make {msg: "Server name required"} @@ -280,7 +280,7 @@ def create_server [config: record] { ### Validation -```text +```bash cd provisioning/extensions/providers/{PROVIDER} for file in nulib/*/\*.nu; do @@ -298,7 +298,7 @@ echo "✅ Nushell compliance complete" ### Directory Structure -```text +```bash tests/ ├── mocks/ │ └── mock_api_responses.json @@ -313,7 +313,7 @@ tests/ ### Mock API Responses -```text +```json { "list_servers": { "servers": [ @@ -335,7 +335,7 @@ tests/ ### Unit Tests: 14 Tests -```text +```nushell def test-result [name: string, result: bool] { if $result { print $"✓ ($name)" @@ -529,7 +529,7 @@ main ### Test Orchestrator -```text +```nushell def main [] { print "=== Provider Test Suite ===" @@ -567,7 +567,7 @@ exit (if $result.success {0} else {1}) ### Validation -```text +```bash cd provisioning/extensions/providers/{PROVIDER} nu tests/run_{provider}_tests.nu ``` @@ -580,7 +580,7 @@ Expected: 51 tests passing, exit code 0 ### Directory Structure -```text +```bash templates/ ├── {provider}_servers.j2 ├── {provider}_networks.j2 @@ -589,7 +589,8 @@ templates/ ### Template Example -```jinja2 +```jinja2 + #!/bin/bash # {{ provider_name }} Server Provisioning set -e @@ -627,7 +628,7 @@ echo "Server provisioning complete" ### Validation -```text +```bash cd provisioning/extensions/providers/{PROVIDER} for template in templates/*.j2; do @@ -641,7 +642,7 @@ echo "✅ Templates valid" ## Tarea 4: Nickel Schema Validation -```text +```bash cd provisioning/extensions/providers/{PROVIDER} nickel typecheck
nickel/contracts.ncl || exit 1 @@ -658,7 +659,7 @@ echo "✅ Nickel schemas validated" ## Complete Validation Script -```text +```bash #!/bin/bash set -e @@ -705,7 +706,7 @@ Use these as templates for new providers. ## Quick Start -```text +```bash cd provisioning/extensions/providers/{PROVIDER} # Validate completeness diff --git a/docs/src/development/providers/provider-distribution-guide.md b/docs/src/development/providers/provider-distribution-guide.md index 452e416..3164523 100644 --- a/docs/src/development/providers/provider-distribution-guide.md +++ b/docs/src/development/providers/provider-distribution-guide.md @@ -38,7 +38,7 @@ Fast, local development with direct access to provider source code. ### How It Works -```text +```bash # Install provider for infrastructure (creates symlinks) provisioning providers install upcloud wuji @@ -67,7 +67,7 @@ provisioning providers install upcloud wuji ### Example Workflow -```text +```bash # 1. List available providers provisioning providers list @@ -90,7 +90,7 @@ provisioning providers remove upcloud wuji ### File Structure -```text +```bash extensions/providers/upcloud/ ├── nickel/ │ ├── manifest.toml @@ -117,7 +117,7 @@ Create versioned, distributable artifacts for production deployments and team co ### How It Works -```text +```bash # Package providers into distributable artifacts export PROVISIONING=/Users/Akasha/project-provisioning/provisioning ./provisioning/core/cli/pack providers @@ -148,7 +148,7 @@ export PROVISIONING=/Users/Akasha/project-provisioning/provisioning ### Example Workflow -```text +```bash # Set environment variable export PROVISIONING=/Users/Akasha/project-provisioning/provisioning @@ -176,7 +176,7 @@ export PROVISIONING=/Users/Akasha/project-provisioning/provisioning ### File Structure -```text +```bash provisioning/ ├── distribution/ │ ├── packages/ @@ -194,7 +194,7 @@ provisioning/ ### Package Metadata Example -```text +```json { "name": "upcloud_prov", "version": "0.0.1", @@ -232,7 +232,7 @@ 
provisioning/ ### Development Phase -```text +```bash # 1. Start with module-loader for development provisioning providers list provisioning providers install upcloud wuji @@ -248,7 +248,7 @@ nickel export workspace/infra/wuji/main.ncl ### Release Phase -```text +```bash # 4. Create release packages export PROVISIONING=/Users/Akasha/project-provisioning/provisioning ./provisioning/core/cli/pack providers @@ -266,7 +266,7 @@ rsync distribution/packages/*.tar user@repo.jesusperez.pro:/registry/v0.0.2/ ### Production Deployment -```text +```bash # 8. Download specific version from registry wget https://repo.jesusperez.pro/registry/v0.0.2/upcloud_prov_0.0.2.tar @@ -283,7 +283,7 @@ tar -xf upcloud_prov_0.0.2.tar -C infrastructure/providers/ ### Module-Loader Commands -```text +```bash # List all available providers provisioning providers list [--kcl] [--format table|json|yaml] @@ -308,7 +308,7 @@ provisioning providers validate ### Provider Pack Commands -```text +```bash # Set environment variable (required) export PROVISIONING=/path/to/provisioning @@ -338,7 +338,7 @@ export PROVISIONING=/path/to/provisioning **Recommendation**: Module-Loader only -```text +```bash # Simple and fast providers install upcloud homelab providers install aws cloud-backup @@ -355,7 +355,7 @@ providers install aws cloud-backup **Recommendation**: Module-Loader + Git -```text +```bash # Each developer git clone repo providers install upcloud project-x @@ -377,7 +377,7 @@ git pull **Recommendation**: Hybrid (Module-Loader dev + Provider Packs releases) -```text +```bash # Development (team member) providers install upcloud staging-env # Make changes... 
@@ -402,7 +402,7 @@ git tag v0.2.0 **Recommendation**: Provider Packs only -```text +```bash # CI/CD Pipeline pack providers # Build artifacts # Run tests on packages @@ -426,7 +426,7 @@ pack providers # Build artifacts **Recommendation**: Provider Packs + Registry -```text +```bash # Maintainer pack providers # Create release on GitHub @@ -521,7 +521,7 @@ wget https://github.com/project/releases/v1.0.0/upcloud_prov_1.0.0.tar When you're ready to move to production: -```text +```bash # 1. Clean up development setup providers remove upcloud wuji @@ -544,7 +544,7 @@ nickel eval defs/servers.ncl When you need to debug or develop: -```text +```bash # 1. Remove vendored version rm -rf workspace/infra/wuji/vendor/upcloud_prov @@ -564,7 +564,7 @@ nickel eval defs/servers.ncl ### Environment Variables -```text +```bash # Required for pack commands export PROVISIONING=/path/to/provisioning @@ -576,7 +576,7 @@ export PROVISIONING_CONFIG=/path/to/provisioning Distribution settings in `provisioning/config/config.defaults.toml`: -```text +```toml [distribution] pack_path = "{{paths.base}}/distribution/packages" registry_path = "{{paths.base}}/distribution/registry" @@ -605,7 +605,7 @@ modules_dir = ".kcl-modules" **Problem**: Provider not found after install -```text +```bash # Check provider exists providers list | grep upcloud @@ -618,7 +618,7 @@ ls -la workspace/infra/wuji/.kcl-modules/ **Problem**: Changes not reflected -```text +```bash # Verify symlink is correct readlink workspace/infra/wuji/.kcl-modules/upcloud_prov @@ -629,7 +629,7 @@ readlink workspace/infra/wuji/.kcl-modules/upcloud_prov **Problem**: No .tar file created -```text +```bash # Check KCL version (need 0.11.3+) kcl version @@ -639,7 +639,7 @@ ls extensions/providers/upcloud/kcl/kcl.mod **Problem**: PROVISIONING environment variable not set -```text +```bash # Set it export PROVISIONING=/Users/Akasha/project-provisioning/provisioning @@ -678,4 +678,4 @@ echo 'export PROVISIONING=/path/to/provisioning' >> 
~/.zshrc **Document Version**: 1.0.0 **Last Updated**: 2025-09-29 -**Maintained by**: JesusPerezLorenzo +**Maintained by**: JesusPerezLorenzo \ No newline at end of file diff --git a/docs/src/development/providers/quick-provider-guide.md b/docs/src/development/providers/quick-provider-guide.md index ffe3192..decddb0 100644 --- a/docs/src/development/providers/quick-provider-guide.md +++ b/docs/src/development/providers/quick-provider-guide.md @@ -12,14 +12,14 @@ This guide shows how to quickly add a new provider to the provider-agnostic infr ### Step 1: Create Provider Directory -```text +```bash mkdir -p provisioning/extensions/providers/{provider_name} mkdir -p provisioning/extensions/providers/{provider_name}/nulib/{provider_name} ``` ### Step 2: Copy Template and Customize -```text +```bash # Copy the local provider as a template cp provisioning/extensions/providers/local/provider.nu provisioning/extensions/providers/{provider_name}/provider.nu @@ -29,7 +29,7 @@ cp provisioning/extensions/providers/local/provider.nu Edit `provisioning/extensions/providers/{provider_name}/provider.nu`: -```text +```nushell export def get-provider-metadata []: nothing -> record { { name: "your_provider_name" @@ -51,7 +51,7 @@ export def get-provider-metadata []: nothing -> record { The provider interface requires these essential functions: -```text +```nushell # Required: Server operations export def query_servers [find?: string, cols?: string]: nothing -> list { # Call your provider's server listing API @@ -87,7 +87,7 @@ export def server_state [server: record, new_state: string, error_exit: bool, wa Create `provisioning/extensions/providers/{provider_name}/nulib/{provider_name}/servers.nu`: -```text +```nushell # Example: DigitalOcean provider functions export def digitalocean_query_servers [find?: string, cols?: string]: nothing -> list { # Use DigitalOcean API to list droplets @@ -122,7 +122,7 @@ export def digitalocean_create_server [settings: record, server: record, check:
### Step 6: Test Your Provider -```text +```bash # Test provider discovery nu -c "use provisioning/core/nulib/lib_provisioning/providers/registry.nu *; init-provider-registry; list-providers" @@ -137,7 +137,7 @@ nu -c "use provisioning/extensions/providers/your_provider_name/provider.nu *; q Add to your Nickel configuration: -```text +```nickel # workspace/infra/example/servers.ncl let servers = [ { @@ -156,7 +156,7 @@ servers For cloud providers (AWS, GCP, Azure, etc.): -```text +```nushell # Use HTTP calls to cloud APIs export def cloud_query_servers [find?: string, cols?: string]: nothing -> list { let auth_header = { Authorization: $"Bearer ($env.PROVIDER_TOKEN)" } @@ -170,7 +170,7 @@ export def cloud_query_servers [find?: string, cols?: string]: nothing -> list { For container platforms (Docker, Podman, etc.): -```text +```nushell # Use CLI commands for container platforms export def container_query_servers [find?: string, cols?: string]: nothing -> list { let containers = (docker ps --format json | from json) @@ -183,7 +183,7 @@ export def container_query_servers [find?: string, cols?: string]: nothing -> li For bare metal or existing servers: -```text +```nushell # Use SSH or local commands export def baremetal_query_servers [find?: string, cols?: string]: nothing -> list { # Read from inventory file or ping servers @@ -197,7 +197,7 @@ export def baremetal_query_servers [find?: string, cols?: string]: nothing -> li ### 1. Error Handling -```text +```nushell export def provider_operation []: nothing -> any { try { # Your provider operation @@ -212,7 +212,7 @@ export def provider_operation []: nothing -> any { ### 2. Authentication -```text +```nushell # Check for required environment variables def check_auth []: nothing -> bool { if ($env | get -o PROVIDER_TOKEN) == null { @@ -225,7 +225,7 @@ def check_auth []: nothing -> bool { ### 3.
Rate Limiting -```text +```nushell # Add delays for API rate limits def api_call_with_retry [url: string]: nothing -> any { mut attempts = 0 @@ -248,7 +248,7 @@ def api_call_with_retry [url: string]: nothing -> any { Set capabilities accurately: -```text +```nushell capabilities: { server_management: true # Can create/delete servers network_management: true # Can manage networks/VPCs @@ -281,7 +281,7 @@ capabilities: { ### Provider Not Found -```text +```bash # Check provider directory structure ls -la provisioning/extensions/providers/your_provider_name/ @@ -291,14 +291,14 @@ grep "get-provider-metadata" provisioning/extensions/providers/your_provider_nam ### Interface Validation Failed -```text +```bash # Check which functions are missing nu -c "use provisioning/core/nulib/lib_provisioning/providers/interface.nu *; validate-provider-interface 'your_provider_name'" ``` ### Authentication Errors -```text +```bash # Check environment variables env | grep PROVIDER diff --git a/docs/src/development/taskservs/taskserv-quick-guide.md b/docs/src/development/taskservs/taskserv-quick-guide.md index 14977a2..6510946 100644 --- a/docs/src/development/taskservs/taskserv-quick-guide.md +++ b/docs/src/development/taskservs/taskserv-quick-guide.md @@ -4,13 +4,13 @@ ### Create a New Taskserv (Interactive) -```text +```nushell nu provisioning/tools/create-taskserv-helper.nu interactive ``` ### Create a New Taskserv (Direct) -```text +```nushell nu provisioning/tools/create-taskserv-helper.nu create my-api --category development --port 8080 @@ -27,7 +27,7 @@ nu provisioning/tools/create-taskserv-helper.nu create my-api ### 2.
Basic Structure -```text +```bash my-service/ ├── nickel/ │ ├── manifest.toml # Package definition @@ -43,7 +43,7 @@ my-service/ **manifest.toml** (package definition): -```text +```toml [package] name = "my-service" version = "1.0.0" @@ -55,7 +55,7 @@ k8s = { oci = "oci://ghcr.io/kcl-lang/k8s", tag = "1.30" } **my-service.ncl** (main schema): -```text +```nickel let MyService = { name | String, version | String, @@ -75,7 +75,7 @@ let MyService = { ### 4. Test Your Taskserv -```text +```bash # Discover your taskserv nu -c "use provisioning/core/nulib/taskservs/discover.nu *; get-taskserv-info my-service" @@ -90,7 +90,7 @@ provisioning/core/cli/provisioning taskserv create my-service --infra wuji --che ### Web Service -```text +```nickel let WebService = { name | String, version | String | default = "latest", @@ -111,7 +111,7 @@ WebService ### Database Service -```text +```nickel let DatabaseService = { name | String, version | String | default = "latest", @@ -132,7 +132,7 @@ DatabaseService ### Background Worker -```text +```nickel let BackgroundWorker = { name | String, version | String | default = "latest", @@ -154,7 +154,7 @@ BackgroundWorker ### Discovery -```text +```bash # List all taskservs nu -c "use provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | select name group" @@ -167,7 +167,7 @@ nu -c "use provisioning/workspace/tools/layer-utils.nu *; show_layer_stats" ### Development -```text +```bash # Check Nickel syntax nickel typecheck provisioning/extensions/taskservs/{category}/{name}/schemas/{name}.ncl @@ -181,7 +181,7 @@ provisioning/core/cli/provisioning taskserv check-updates ### Testing -```text +```bash # Dry run deployment provisioning/core/cli/provisioning taskserv create {name} --infra {infra} --check @@ -205,7 +205,7 @@ nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution ### Taskserv Not Found -```text +```bash # Check if discovered nu -c "use
provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | where name == my-service" @@ -215,7 +215,7 @@ ls provisioning/extensions/taskservs/{category}/my-service/kcl/kcl.mod ### Layer Resolution Issues -```text +```bash # Debug resolution nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution my-service wuji upcloud" @@ -225,7 +225,7 @@ ls provisioning/workspace/templates/taskservs/{category}/my-service.ncl ### Nickel Syntax Errors -```text +```bash # Check syntax nickel typecheck provisioning/extensions/taskservs/{category}/my-service/schemas/my-service.ncl diff --git a/docs/src/development/typedialog-platform-config-guide.md b/docs/src/development/typedialog-platform-config-guide.md index e30f993..1e87a6a 100644 --- a/docs/src/development/typedialog-platform-config-guide.md +++ b/docs/src/development/typedialog-platform-config-guide.md @@ -27,7 +27,7 @@ files, you answer questions in an interactive form, and TypeDialog generates val ### 1. Configure a Platform Service (5 minutes) -```text +```bash # Launch interactive form for orchestrator provisioning config platform orchestrator @@ -51,14 +51,14 @@ This opens an interactive form with sections for: After completing the form, TypeDialog generates `config.ncl`: -```text +```bash # View what was generated cat workspace_librecloud/config/config.ncl ``` ### 3.
Validate Configuration -```text +```bash # Check Nickel syntax is valid nickel typecheck workspace_librecloud/config/config.ncl @@ -70,7 +70,7 @@ provisioning config export Platform services automatically load the exported TOML: -```text +```bash # Orchestrator reads config/generated/platform/orchestrator.toml provisioning start orchestrator @@ -108,7 +108,7 @@ cat workspace_librecloud/config/generated/platform/orchestrator.toml All configuration lives in one Nickel file with three sections: -```text +```nickel # workspace_librecloud/config/config.ncl { # SECTION 1: Workspace metadata @@ -186,7 +186,7 @@ All configuration lives in one Nickel file with three sections: **Example**: -```text +```nickel platform = { orchestrator = { enabled = true, @@ -223,7 +223,7 @@ platform = { **Example**: -```text +```nickel platform = { kms = { enabled = true, @@ -246,7 +246,7 @@ platform = { **Example**: -```text +```nickel platform = { control_center = { enabled = true, @@ -271,7 +271,7 @@ All platform services support four deployment modes, each with different resourc **Mode-based Configuration Loading**: -```text +```bash # Load a specific mode's configuration export VAULT_MODE=enterprise export REGISTRY_MODE=multiuser @@ -308,7 +308,7 @@ export RAG_MODE=cicd **Environment Variable Overrides**: -```text +```bash VAULT_CONFIG=/path/to/vault.toml # Explicit config path VAULT_MODE=enterprise # Mode-specific config VAULT_SERVER_URL=http://localhost:8200 # Server URL @@ -319,7 +319,7 @@ VAULT_TLS_VERIFY=true # TLS verification **Example Configuration**: -```text +```nickel platform = { vault_service = { enabled = true, @@ -366,7 +366,7 @@ platform = { **Environment Variable Overrides**: -```text +```bash REGISTRY_CONFIG=/path/to/registry.toml # Explicit config path REGISTRY_MODE=multiuser # Mode-specific config REGISTRY_SERVER_HOST=0.0.0.0 # Server host @@ -380,7 +380,7 @@ REGISTRY_OCI_NAMESPACE=provisioning # OCI namespace **Example Configuration**: -```text +```nickel platform = {
extension_registry = { enabled = true, @@ -428,7 +428,7 @@ platform = { **Environment Variable Overrides**: -```text +```bash RAG_CONFIG=/path/to/rag.toml # Explicit config path RAG_MODE=multiuser # Mode-specific config RAG_ENABLED=true # Enable/disable RAG @@ -442,7 +442,7 @@ RAG_VECTOR_DB_TYPE=surrealdb # Vector DB type **Example Configuration**: -```text +```nickel platform = { rag = { enabled = true, @@ -489,7 +489,7 @@ platform = { **Environment Variable Overrides**: -```text +```bash AI_SERVICE_CONFIG=/path/to/ai.toml # Explicit config path AI_SERVICE_MODE=enterprise # Mode-specific config AI_SERVICE_SERVER_PORT=8082 # Server port @@ -501,7 +501,7 @@ AI_SERVICE_DAG_MAX_CONCURRENT_TASKS=50 # Max concurrent tasks **Example Configuration**: -```text +```nickel platform = { ai_service = { enabled = true, @@ -550,7 +550,7 @@ platform = { **Environment Variable Overrides**: -```text +```bash DAEMON_CONFIG=/path/to/daemon.toml # Explicit config path DAEMON_MODE=enterprise # Mode-specific config DAEMON_POLL_INTERVAL=30 # Polling interval (seconds) @@ -562,7 +562,7 @@ DAEMON_AUTO_UPDATE=true # Enable auto updates **Example Configuration**: -```text +```nickel platform = { provisioning_daemon = { enabled = true, @@ -607,21 +607,21 @@ platform = { **Environment Variables**: -```text +```nickel api_user = "{{env.UPCLOUD_USER}}" api_password = "{{env.UPCLOUD_PASSWORD}}" ``` **Workspace Paths**: -```text +```nickel data_dir = "{{workspace.path}}/.orchestrator/data" logs_dir = "{{workspace.path}}/.orchestrator/logs" ``` **KMS Decryption**: -```text +```nickel api_password = "{{kms.decrypt('upcloud_pass')}}" ``` @@ -629,7 +629,7 @@ api_password = "{{kms.decrypt('upcloud_pass')}}" ### Validating Configuration -```text +```bash # Check Nickel syntax nickel typecheck workspace_librecloud/config/config.ncl @@ -642,7 +642,7 @@ provisioning config export ### Exporting to Service Formats -```text +```bash # One-time export provisioning config export
provisioning/schemas/platform/ All 5 new services come with pre-built TOML configs for each deployment mode: -```text +```bash # View available schemas for vault service ls -la provisioning/schemas/platform/schemas/vault-service.ncl ls -la provisioning/schemas/platform/defaults/vault-service-defaults.ncl @@ -725,7 +725,7 @@ export DAEMON_MODE=multiuser If you prefer interactive updating: -```text +```bash # Re-run TypeDialog form (overwrites config.ncl) provisioning config platform orchestrator @@ -741,7 +741,7 @@ typedialog form .typedialog/provisioning/platform/orchestrator/form.toml **Solution**: Check form.toml syntax and verify required fields are present (name, description, locales_path, templates_path) -```text +```bash head -10 .typedialog/provisioning/platform/orchestrator/form.toml ``` @@ -751,7 +751,7 @@ head -10 .typedialog/provisioning/platform/orchestrator/form.toml **Solution**: Check for syntax errors and correct field names -```text +```bash nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less ``` @@ -763,7 +763,7 @@ Common issues: Missing closing braces, incorrect field names, wrong data types **Solution**: Verify config.ncl exports to JSON and check all required sections exist -```text +```bash nickel export --format json workspace_librecloud/config/config.ncl | head -20 ``` @@ -781,7 +781,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20 ### Development Setup -```text +```nickel { workspace = { name = "dev", @@ -815,7 +815,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20 ### Production Setup -```text +```nickel { workspace = { name = "prod", @@ -859,7 +859,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20 ### Multi-Provider Setup -```text +```nickel { workspace = { name = "multi", @@ -904,7 +904,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20 Start with TypeDialog forms for the best experience:
-```text +```bash provisioning config platform orchestrator ``` @@ -920,7 +920,7 @@ Only edit the source `.ncl` file, not the generated TOML files. Always validate before deploying changes: -```text +```bash nickel typecheck workspace_librecloud/config/config.ncl provisioning config export ``` @@ -973,14 +973,14 @@ Add comments explaining custom settings in the Nickel file. Get detailed error messages and check available fields: -```text +```bash nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less grep "prompt =" .typedialog/provisioning/platform/orchestrator/form.toml ``` ### Configuration Questions -```text +```bash # Show all available config commands provisioning config --help @@ -994,7 +994,7 @@ provisioning config services list ### Test Configuration -```text +```bash # Validate without deploying nickel typecheck workspace_librecloud/config/config.ncl diff --git a/docs/src/development/workflow.md b/docs/src/development/workflow.md index 831c40f..178c6f8 100644 --- a/docs/src/development/workflow.md +++ b/docs/src/development/workflow.md @@ -42,7 +42,7 @@ quality, and efficiency. **1. Clone and Navigate**: -```text +```bash # Clone repository git clone https://github.com/company/provisioning-system.git cd provisioning-system @@ -53,7 +53,7 @@ cd workspace/tools **2. Initialize Workspace**: -```text +```bash # Initialize development workspace nu workspace.nu init --user-name $USER --infra-name dev-env @@ -63,7 +63,7 @@ nu workspace.nu health --detailed --fix-issues **3. Configure Development Environment**: -```text +```bash # Create user configuration cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml @@ -73,7 +73,7 @@ $EDITOR workspace/config/$USER.toml **4.
Set Up Build System**: -```text +```bash # Navigate to build tools cd src/tools @@ -88,7 +88,7 @@ make dev-build **Required Tools**: -```text +```bash # Install Nushell cargo install nu @@ -103,7 +103,7 @@ cargo install cargo-watch # File watching **Optional Development Tools**: -```text +```bash # Install development enhancers cargo install nu_plugin_tera # Template plugin cargo install sops # Secrets management @@ -114,7 +114,7 @@ brew install k9s # Kubernetes management **VS Code Setup** (`.vscode/settings.json`): -```text +```json { "files.associations": { "*.nu": "shellscript", @@ -143,7 +143,7 @@ brew install k9s # Kubernetes management **1. Sync and Update**: -```text +```bash # Sync with upstream git pull origin main @@ -157,7 +157,7 @@ nu workspace.nu status --detailed **2. Review Current State**: -```text +```bash # Check current infrastructure provisioning show servers provisioning show settings @@ -170,7 +170,7 @@ nu workspace.nu status **1. Feature Development**: -```text +```bash # Create feature branch git checkout -b feature/new-provider-support @@ -184,7 +184,7 @@ $EDITOR workspace/extensions/providers/new-provider/nulib/provider.nu **2. Incremental Testing**: -```text +```bash # Test syntax during development nu --check workspace/extensions/providers/new-provider/nulib/provider.nu @@ -197,7 +197,7 @@ nu workspace.nu tools test-extension providers/new-provider **3. 
Build and Validate**: -```text +```bash # Quick development build cd src/tools make dev-build @@ -213,7 +213,7 @@ make test-dist **Unit Testing**: -```text +```nushell # Add test examples to functions def create-server [name: string] -> record { # @test: "test-server" -> {name: "test-server", status: "created"} @@ -223,7 +223,7 @@ def create-server [name: string] -> record { **Integration Testing**: -```text +```bash # Test with real infrastructure nu workspace/extensions/providers/new-provider/nulib/provider.nu create-server test-server --dry-run @@ -236,7 +236,7 @@ PROVISIONING_WORKSPACE_USER=$USER provisioning server create test-server --check **1. Commit Progress**: -```text +```bash # Stage changes git add . @@ -254,7 +254,7 @@ git push origin feature/new-provider-support **2. Workspace Maintenance**: -```text +```bash # Clean up development data nu workspace.nu cleanup --type cache --age 1d @@ -271,7 +271,7 @@ nu workspace.nu health **File Organization**: -```text +```bash Extension Structure: ├── nulib/ │ ├── main.nu # Main entry point @@ -293,7 +293,7 @@ Extension Structure: **Function Naming Conventions**: -```text +```nushell # Use kebab-case for commands def create-server [name: string] -> record { ... } def validate-config [config: record] -> bool { ... } @@ -310,7 +310,7 @@ def list-available-zones [] -> list { ...
} **Error Handling Pattern**: -```text +```nushell def create-server [ name: string --dry-run: bool = false @@ -347,7 +347,7 @@ def create-server [ **Project Organization**: -```text +```bash src/ ├── lib.rs # Library root ├── main.rs # Binary entry point @@ -367,7 +367,7 @@ src/ **Error Handling**: -```text +```rust use anyhow::{Context, Result}; use thiserror::Error; @@ -404,7 +404,7 @@ pub fn create_server(name: &str) -> Result { **Schema Structure**: -```text +```nickel # Base schema definitions let ServerConfig = { name | string, @@ -446,7 +446,7 @@ InfrastructureConfig **Unit Test Pattern**: -```text +```nushell # Function with embedded test def validate-server-name [name: string] -> bool { # @test: "valid-name" -> true @@ -482,7 +482,7 @@ def test_validate_server_name [] { **Integration Test Pattern**: -```text +```nushell # tests/integration/server-lifecycle-test.nu def test_complete_server_lifecycle [] { # Setup @@ -509,7 +509,7 @@ def test_complete_server_lifecycle [] { **Unit Testing**: -```text +```rust #[cfg(test)] mod tests { use super::*; @@ -540,7 +540,7 @@ mod tests { **Integration Testing**: -```text +```rust #[cfg(test)] mod integration_tests { use super::*; @@ -570,7 +570,7 @@ mod integration_tests { **Schema Validation Testing**: -```text +```bash # Test Nickel schemas nickel check schemas/ @@ -585,7 +585,7 @@ nickel eval schemas/server.ncl **Continuous Testing**: -```text +```bash # Watch for changes and run tests cargo watch -x test -x check @@ -602,7 +602,7 @@ nu workspace.nu tools test-all --watch **Enable Debug Mode**: -```text +```bash # Environment variables export PROVISIONING_DEBUG=true export PROVISIONING_LOG_LEVEL=debug @@ -617,7 +617,7 @@ export PROVISIONING_WORKSPACE_USER=$USER **Debug Techniques**: -```text +```nushell # Debug prints def debug-server-creation [name: string] { print $"🐛 Creating server: ($name)" @@ -658,7 +658,7 @@ def debug-interactive [] { **Error Investigation**: -```text +```nushell # Comprehensive error handling def
safe-server-creation [name: string] { try { @@ -691,7 +691,7 @@ def safe-server-creation [name: string] { **Debug Logging**: -```text +```rust use tracing::{debug, info, warn, error, instrument}; #[instrument] @@ -720,7 +720,7 @@ pub async fn create_server(name: &str) -> Result { **Interactive Debugging**: -```text +```rust // Use debugger breakpoints #[cfg(debug_assertions)] { @@ -734,7 +734,7 @@ pub async fn create_server(name: &str) -> Result { **Log Monitoring**: -```text +```bash # Follow all logs tail -f workspace/runtime/logs/$USER/*.log @@ -750,7 +750,7 @@ jq '.level == "ERROR"' workspace/runtime/logs/$USER/structured.jsonl **Debug Log Levels**: -```text +```bash # Different verbosity levels PROVISIONING_LOG_LEVEL=trace provisioning server create test PROVISIONING_LOG_LEVEL=debug provisioning server create test @@ -763,7 +763,7 @@ PROVISIONING_LOG_LEVEL=info provisioning server create test **Working with Legacy Components**: -```text +```bash # Test integration with existing system provisioning --version # Legacy system src/core/nulib/provisioning --version # New system @@ -780,7 +780,7 @@ nu workspace.nu config validate **REST API Testing**: -```text +```bash # Test orchestrator API curl -X GET http://localhost:9090/health curl -X GET http://localhost:9090/tasks @@ -798,7 +798,7 @@ curl -X GET http://localhost:9090/workflows/batch/status/workflow-id **SurrealDB Integration**: -```text +```nushell # Test database connectivity use core/nulib/lib_provisioning/database/surreal.nu let db = (connect-database) @@ -814,7 +814,7 @@ assert ($status.status == "pending") **Container Integration**: -```text +```bash # Test with Docker docker run --rm -v $(pwd):/work provisioning:dev provisioning --version @@ -841,7 +841,7 @@ make test-dist PLATFORM=kubernetes **Workflow**: -```text +```bash # Start new feature git checkout main git pull origin main @@ -869,7 +869,7 @@ gh pr create --title "Add new provider support" --body "..."
**Review Commands**: -```text +```bash # Test PR locally gh pr checkout 123 cd src/tools && make ci-test @@ -886,7 +886,7 @@ nu --check $(find . -name "*.nu") **Code Documentation**: -```text +```nushell # Function documentation def create-server [ name: string # Server name (must be unique) @@ -925,7 +925,7 @@ def create-server [ **Automated Quality Gates**: -```text +```bash # Pre-commit hooks pre-commit install @@ -949,7 +949,7 @@ cargo audit **Performance Testing**: -```text +```bash # Benchmark builds make benchmark @@ -962,7 +962,7 @@ ab -n 1000 -c 10 http://localhost:9090/health **Resource Monitoring**: -```text +```bash # Monitor during development nu workspace/tools/runtime-manager.nu monitor --duration 5m @@ -977,7 +977,7 @@ df -h **Never Hardcode**: -```text +```nushell # Bad def get-api-url [] { "https://api.upcloud.com" } @@ -991,7 +991,7 @@ def get-api-url [] { **Comprehensive Error Context**: -```text +```nushell def create-server [name: string] { try { validate-server-name $name @@ -1017,7 +1017,7 @@ def create-server [name: string] { **Clean Up Resources**: -```text +```nushell def with-temporary-server [name: string, action: closure] { let server = (create-server $name) @@ -1038,7 +1038,7 @@ def with-temporary-server [name: string, action: closure] { **Test Isolation**: -```text +```nushell def test-with-isolation [test_name: string, test_action: closure] { let test_workspace = $"test-($test_name)-(date now | format date '%Y%m%d%H%M%S')" diff --git a/docs/src/getting-started/01-prerequisites.md b/docs/src/getting-started/01-prerequisites.md index 52c5edd..6c6a7b2 100644 --- a/docs/src/getting-started/01-prerequisites.md +++ b/docs/src/getting-started/01-prerequisites.md @@ -76,7 +76,7 @@ Before proceeding, verify your system has the core dependencies installed: ### Nushell -```text +```bash # Check Nushell version nu --version @@ -85,7 +85,7 @@ nu --version ### Nickel -```text +```bash # Check Nickel version nickel --version @@ -94,7 +94,7 @@ nickel 
--version ### Docker -```text +```bash # Check Docker version docker --version @@ -106,7 +106,7 @@ docker ps ### SOPS -```text +```bash # Check SOPS version sops --version @@ -115,7 +115,7 @@ sops --version ### Age -```text +```bash # Check Age version age --version @@ -126,7 +126,7 @@ age --version ### macOS (using Homebrew) -```text +```bash # Install Homebrew if not already installed /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" @@ -151,7 +151,7 @@ brew install k9s glow bat ### Ubuntu/Debian -```text +```bash # Update package list sudo apt update @@ -184,7 +184,7 @@ sudo apt install -y age ### Fedora/RHEL -```text +```bash # Install Nushell sudo dnf install -y nushell diff --git a/docs/src/getting-started/02-installation.md b/docs/src/getting-started/02-installation.md index bab7ad0..5474066 100644 --- a/docs/src/getting-started/02-installation.md +++ b/docs/src/getting-started/02-installation.md @@ -15,7 +15,7 @@ Estimated time: 15-20 minutes ## Step 1: Clone the Repository -```text +```bash # Clone the repository git clone https://github.com/provisioning/provisioning-platform.git cd provisioning-platform @@ -30,7 +30,7 @@ The platform uses multiple Nushell plugins for enhanced functionality. 
### Install nu_plugin_tera (Template Rendering) -```text +```bash # Install from crates.io cargo install nu_plugin_tera @@ -40,7 +40,7 @@ nu -c "plugin add ~/.cargo/bin/nu_plugin_tera; plugin use tera" ### Verify Plugin Installation -```text +```bash # Start Nushell nu @@ -55,7 +55,7 @@ plugin list Make the `provisioning` command available globally: -```text +```bash # Option 1: Symlink to /usr/local/bin (recommended) sudo ln -s "$(pwd)/provisioning/core/cli/provisioning" /usr/local/bin/provisioning @@ -71,7 +71,7 @@ provisioning --version Generate keys for encrypting sensitive configuration: -```text +```bash # Create Age key directory mkdir -p ~/.config/provisioning/age @@ -90,7 +90,7 @@ chmod 644 ~/.config/provisioning/age/public_key.txt Set up basic environment variables: -```text +```bash # Create environment file cat > ~/.provisioning/env << 'ENVEOF' # Provisioning Environment Configuration @@ -110,7 +110,7 @@ echo 'source ~/.provisioning/env' >> ~/.bashrc # or ~/.zshrc Create your first workspace: -```text +```bash # Initialize a new workspace provisioning workspace init my-first-workspace @@ -127,7 +127,7 @@ provisioning workspace list Run the installation verification: -```text +```bash # Check system configuration provisioning validate config @@ -149,7 +149,7 @@ Expected output should show: If you plan to use platform services (orchestrator, control center, etc.): -```text +```bash # Build platform services cd provisioning/platform @@ -176,7 +176,7 @@ ls */target/release/ Use the interactive installer for a guided setup: -```text +```bash # Build the installer cd provisioning/platform/installer cargo build --release @@ -194,7 +194,7 @@ cargo build --release If plugins aren't recognized: -```text +```bash # Rebuild plugin registry nu -c "plugin list; plugin use tera" ``` @@ -203,7 +203,7 @@ nu -c "plugin list; plugin use tera" If you encounter permission errors: -```text +```bash # Ensure proper ownership sudo chown -R $USER:$USER ~/.config/provisioning @@ 
-215,7 +215,7 @@ echo $PATH | grep provisioning If encryption fails: -```text +```bash # Verify keys exist ls -la ~/.config/provisioning/age/ diff --git a/docs/src/getting-started/03-first-deployment.md b/docs/src/getting-started/03-first-deployment.md index 5c9278d..4584a2e 100644 --- a/docs/src/getting-started/03-first-deployment.md +++ b/docs/src/getting-started/03-first-deployment.md @@ -17,7 +17,7 @@ Estimated time: 10-15 minutes Create a basic infrastructure configuration: -```text +```bash # Generate infrastructure template provisioning generate infra --new my-infra @@ -30,14 +30,14 @@ provisioning generate infra --new my-infra Edit the generated configuration: -```text +```bash # Edit with your preferred editor $EDITOR workspace/infra/my-infra/settings.ncl ``` Example configuration: -```text +```nickel import provisioning.settings as cfg # Infrastructure settings @@ -62,7 +62,7 @@ servers = [ First, run in check mode to see what would happen: -```text +```bash # Check mode - no actual changes provisioning server create --infra my-infra --check @@ -78,7 +78,7 @@ provisioning server create --infra my-infra --check If check mode looks good, create the server: -```text +```bash # Create server provisioning server create --infra my-infra @@ -93,7 +93,7 @@ provisioning server create --infra my-infra Check server status: -```text +```bash # List all servers provisioning server list @@ -108,7 +108,7 @@ provisioning server ssh dev-server-01 Install a task service on the server: -```text +```bash # Check mode first provisioning taskserv create kubernetes --infra my-infra --check @@ -126,7 +126,7 @@ provisioning taskserv create kubernetes --infra my-infra --check Proceed with installation: -```text +```bash # Install Kubernetes provisioning taskserv create kubernetes --infra my-infra --wait @@ -145,7 +145,7 @@ provisioning workflow monitor Check that Kubernetes is running: -```text +```bash # List installed task services provisioning taskserv list --infra my-infra @@ 
-164,7 +164,7 @@ provisioning server exec dev-server-01 -- kubectl get nodes Create multiple servers at once: -```text +```nickel servers = [ {hostname = "web-01", cores = 2, memory = 4096}, {hostname = "web-02", cores = 2, memory = 4096}, @@ -172,7 +172,7 @@ servers = [ ] ``` -```text +```bash provisioning server create --infra my-infra --servers web-01,web-02,db-01 ``` @@ -180,7 +180,7 @@ provisioning server create --infra my-infra --servers web-01,web-02,db-01 Install multiple services on one server: -```text +```bash provisioning taskserv create kubernetes,cilium,postgres --infra my-infra --servers web-01 ``` @@ -188,7 +188,7 @@ provisioning taskserv create kubernetes,cilium,postgres --infra my-infra --serve Deploy a complete cluster configuration: -```text +```bash provisioning cluster create buildkit --infra my-infra ``` @@ -196,7 +196,7 @@ provisioning cluster create buildkit --infra my-infra The typical deployment workflow: -```text +```bash # 1. Initialize workspace provisioning workspace init production @@ -230,7 +230,7 @@ provisioning taskserv list ### Server Creation Fails -```text +```bash # Check logs provisioning server logs dev-server-01 @@ -240,7 +240,7 @@ provisioning --debug server create --infra my-infra ### Task Service Installation Fails -```text +```bash # Check task service logs provisioning taskserv logs kubernetes @@ -250,7 +250,7 @@ provisioning taskserv create kubernetes --infra my-infra --force ### SSH Connection Issues -```text +```bash # Verify SSH key ls -la ~/.ssh/ diff --git a/docs/src/getting-started/04-verification.md b/docs/src/getting-started/04-verification.md index 38ef6f3..1148195 100644 --- a/docs/src/getting-started/04-verification.md +++ b/docs/src/getting-started/04-verification.md @@ -15,7 +15,7 @@ After completing your first deployment, verify: Check that all configuration is valid: -```text +```bash # Validate all configuration provisioning validate config @@ -25,7 +25,7 @@ provisioning validate config # ✓ All required 
fields present ``` -```text +```bash # Check environment variables provisioning env @@ -37,7 +37,7 @@ provisioning allenv Check that servers are accessible and healthy: -```text +```bash # List all servers provisioning server list @@ -49,7 +49,7 @@ provisioning server list # └───────────────┴──────────┴───────┴────────┴──────────────┴──────────┘ ``` -```text +```bash # Check server details provisioning server info dev-server-01 @@ -61,7 +61,7 @@ provisioning server ssh dev-server-01 -- echo "SSH working" Check installed task services: -```text +```bash # List task services provisioning taskserv list @@ -75,7 +75,7 @@ provisioning taskserv list # └────────────┴─────────┴────────────────┴──────────┘ ``` -```text +```bash # Check specific task service provisioning taskserv status kubernetes @@ -87,7 +87,7 @@ provisioning taskserv logs kubernetes --tail 50 If you installed Kubernetes, verify it's working: -```text +```bash # Check Kubernetes nodes provisioning server ssh dev-server-01 -- kubectl get nodes @@ -96,7 +96,7 @@ provisioning server ssh dev-server-01 -- kubectl get nodes # dev-server-01 Ready control-plane 10m v1.28.0 ``` -```text +```bash # Check Kubernetes pods provisioning server ssh dev-server-01 -- kubectl get pods -A @@ -109,7 +109,7 @@ If you installed platform services: ### Orchestrator -```text +```bash # Check orchestrator health curl http://localhost:8080/health @@ -117,14 +117,14 @@ curl http://localhost:8080/health # {"status":"healthy","version":"0.1.0"} ``` -```text +```bash # List tasks curl http://localhost:8080/tasks ``` ### Control Center -```text +```bash # Check control center health curl http://localhost:9090/health @@ -136,7 +136,7 @@ curl -X POST http://localhost:9090/policies/evaluate ### KMS Service -```text +```bash # Check KMS health curl http://localhost:8082/api/v1/kms/health @@ -148,7 +148,7 @@ echo "test" | provisioning kms encrypt Run comprehensive health checks: -```text +```bash # Check all components provisioning health 
check @@ -165,7 +165,7 @@ provisioning health check If you used workflows: -```text +```bash # List all workflows provisioning workflow list @@ -180,7 +180,7 @@ provisioning workflow stats ### DNS Resolution (If CoreDNS Installed) -```text +```bash # Test DNS resolution dig @localhost test.provisioning.local @@ -190,7 +190,7 @@ provisioning server ssh dev-server-01 -- systemctl status coredns ### Network Connectivity -```text +```bash # Test server-to-server connectivity provisioning server ssh dev-server-01 -- ping -c 3 dev-server-02 @@ -200,7 +200,7 @@ provisioning server ssh dev-server-01 -- sudo iptables -L ### Storage and Resources -```text +```bash # Check disk usage provisioning server ssh dev-server-01 -- df -h @@ -215,7 +215,7 @@ provisioning server ssh dev-server-01 -- top -bn1 | head -20 ### Configuration Validation Failed -```text +```bash # View detailed error provisioning validate config --verbose @@ -225,7 +225,7 @@ provisioning validate config --infra my-infra ### Server Unreachable -```text +```bash # Check server logs provisioning server logs dev-server-01 @@ -235,7 +235,7 @@ provisioning --debug server ssh dev-server-01 ### Task Service Not Running -```text +```bash # Check service logs provisioning taskserv logs kubernetes @@ -245,7 +245,7 @@ provisioning taskserv restart kubernetes --infra my-infra ### Platform Service Down -```text +```bash # Check service status provisioning platform status orchestrator @@ -260,7 +260,7 @@ provisioning platform restart orchestrator ### Response Time Tests -```text +```bash # Measure server response time time provisioning server info dev-server-01 @@ -273,7 +273,7 @@ time provisioning workflow submit test-workflow.ncl ### Resource Usage -```text +```bash # Check platform resource usage docker stats # If using Docker @@ -285,7 +285,7 @@ provisioning system resources ### Encryption -```text +```bash # Verify encryption keys ls -la ~/.config/provisioning/age/ @@ -295,7 +295,7 @@ echo "test" | provisioning kms 
encrypt | provisioning kms decrypt ### Authentication (If Enabled) -```text +```bash # Test login provisioning login --username admin diff --git a/docs/src/getting-started/05-platform-configuration.md b/docs/src/getting-started/05-platform-configuration.md index e9f7035..6af33b2 100644 --- a/docs/src/getting-started/05-platform-configuration.md +++ b/docs/src/getting-started/05-platform-configuration.md @@ -51,7 +51,7 @@ Choose a deployment mode based on your needs: The configuration system is managed by a standalone script that doesn't require the main installer: -```text +```bash # Navigate to the provisioning directory cd /path/to/project-provisioning @@ -70,7 +70,7 @@ TypeDialog provides an interactive form-based configuration interface available #### Quick Interactive Setup (All Services at Once) -```text +```bash # Run interactive setup - prompts for choices ./provisioning/scripts/setup-platform-config.sh @@ -83,7 +83,7 @@ TypeDialog provides an interactive form-based configuration interface available #### Configure Specific Service with TypeDialog -```text +```bash # Configure orchestrator in solo mode with web UI ./provisioning/scripts/setup-platform-config.sh --service orchestrator @@ -103,7 +103,7 @@ TypeDialog provides an interactive form-based configuration interface available Quick mode automatically creates all service configurations from defaults overlaid with mode-specific tuning. -```text +```bash # Quick setup for solo development mode ./provisioning/scripts/setup-platform-config.sh --quick-mode --mode solo @@ -123,7 +123,7 @@ Quick mode automatically creates all service configurations from defaults overla For advanced users who prefer editing configuration files directly: -```text +```bash # View schema definition cat provisioning/schemas/platform/schemas/orchestrator.ncl @@ -153,7 +153,7 @@ nickel typecheck provisioning/config/runtime/orchestrator.solo.ncl The configuration system uses layered composition: -```text +```text 1. 
Schema (Type contract) ↓ Defines valid fields and constraints @@ -179,7 +179,7 @@ All layers are automatically composed and validated. After running the setup script, verify the configuration was created: -```text +```bash # List generated runtime configurations ls -la provisioning/config/runtime/ @@ -198,7 +198,7 @@ After successful configuration, services can be started: ### Running a Single Service -```text +```bash # Set deployment mode export ORCHESTRATOR_MODE=solo @@ -209,7 +209,7 @@ cargo run -p orchestrator ### Running Multiple Services -```text +```bash # Terminal 1: Vault Service (secrets management) export VAULT_MODE=solo cargo run -p vault-service @@ -227,7 +227,7 @@ cargo run -p control-center ### Docker-Based Deployment -```text +```bash # Start all services in Docker (requires docker-compose.yml) cd provisioning/platform/infrastructure/docker docker-compose -f docker-compose.solo.yml up @@ -238,7 +238,7 @@ docker-compose -f docker-compose.enterprise.yml up ## Step 6: Verify Services Are Running -```text +```bash # Check orchestrator status curl http://localhost:9000/health @@ -256,7 +256,7 @@ cargo run -p orchestrator -- --log-level debug If you need to switch from solo to multiuser mode: -```text +```bash # Option 1: Re-run setup with new mode ./provisioning/scripts/setup-platform-config.sh --quick-mode --mode multiuser @@ -271,7 +271,7 @@ If you need to switch from solo to multiuser mode: If you need fine-grained control: -```text +```bash # 1. 
Edit the Nickel configuration directly vim provisioning/config/runtime/orchestrator.solo.ncl @@ -296,7 +296,7 @@ cargo run -p orchestrator For workspace-specific customization: -```text +```bash # Create workspace override file mkdir -p workspace_myworkspace/config cat > workspace_myworkspace/config/platform-overrides.ncl <<'EOF' @@ -321,7 +321,7 @@ EOF ## Available Configuration Commands -```text +```bash # List all available modes ./provisioning/scripts/setup-platform-config.sh --list-modes # Output: solo, multiuser, cicd, enterprise @@ -344,7 +344,7 @@ EOF ### Public Definitions (Part of repository) -```text +```bash provisioning/schemas/platform/ ├── schemas/ # Type contracts (Nickel) ├── defaults/ # Base configuration values @@ -356,7 +356,7 @@ provisioning/schemas/platform/ ### Private Runtime Configs (Gitignored) -```text +```bash provisioning/config/runtime/ # User-specific deployments ├── orchestrator.solo.ncl # Editable config ├── orchestrator.multiuser.ncl @@ -367,7 +367,7 @@ provisioning/config/runtime/ # User-specific deployments ### Examples (Reference) -```text +```bash provisioning/config/examples/ ├── orchestrator.solo.example.ncl # Solo mode reference └── orchestrator.enterprise.example.ncl # Enterprise mode reference @@ -377,7 +377,7 @@ provisioning/config/examples/ ### Issue: Script Fails with "Nickel not found" -```text +```bash # Install Nickel # macOS brew install nickel @@ -392,7 +392,7 @@ nickel --version ### Issue: Configuration Won't Generate TOML -```text +```bash # Check Nickel syntax nickel typecheck provisioning/config/runtime/orchestrator.solo.ncl @@ -405,7 +405,7 @@ nickel export --format toml provisioning/config/runtime/orchestrator.solo.ncl ### Issue: Service Can't Read Configuration -```text +```bash # Verify TOML file exists ls -la provisioning/config/runtime/generated/orchestrator.solo.toml @@ -422,7 +422,7 @@ cargo run -p orchestrator --verbose ### Issue: Services Won't Start After Config Change -```text +```bash # If you 
edited .ncl file manually, TOML must be regenerated ./provisioning/scripts/setup-platform-config.sh --generate-toml @@ -454,7 +454,7 @@ Files in `provisioning/schemas/platform/` are **version-controlled** because: The setup script is safe to run multiple times: -```text +```bash # Safe: Updates only what's needed ./provisioning/scripts/setup-platform-config.sh --quick-mode --mode enterprise diff --git a/docs/src/getting-started/getting-started.md b/docs/src/getting-started/getting-started.md index ceb3779..a704183 100644 --- a/docs/src/getting-started/getting-started.md +++ b/docs/src/getting-started/getting-started.md @@ -26,7 +26,7 @@ Before starting this guide, ensure you have: Provisioning uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in configuration files, and the system makes it happen. -```text +```text You describe → System creates → Infrastructure exists ``` @@ -51,7 +51,7 @@ You describe → System creates → Infrastructure exists Create your personal configuration: -```text +```bash # Initialize user configuration provisioning init config @@ -60,7 +60,7 @@ provisioning init config ### Step 2: Verify Your Environment -```text +```bash # Check your environment setup provisioning env @@ -70,7 +70,7 @@ provisioning allenv You should see output like: -```text +```bash ✅ Configuration loaded successfully ✅ All required tools available 📁 Base path: /usr/local/provisioning @@ -79,7 +79,7 @@ You should see output like: ### Step 3: Explore Available Resources -```text +```bash # List available providers provisioning list providers @@ -96,7 +96,7 @@ Let's create a simple local infrastructure to learn the basics. 
### Step 1: Create a Workspace -```text +```bash # Create a new workspace directory mkdir ~/my-first-infrastructure cd ~/my-first-infrastructure @@ -107,7 +107,7 @@ provisioning generate infra --new local-demo This creates: -```text +```bash local-demo/ ├── config/ │ └── config.ncl # Master Nickel configuration @@ -120,14 +120,14 @@ local-demo/ ### Step 2: Examine the Configuration -```text +```bash # View the generated configuration provisioning show settings --infra local-demo ``` ### Step 3: Validate the Configuration -```text +```bash # Validate syntax and structure provisioning validate config --infra local-demo @@ -136,7 +136,7 @@ provisioning validate config --infra local-demo ### Step 4: Deploy Infrastructure (Check Mode) -```text +```bash # Dry run - see what would be created provisioning server create --infra local-demo --check @@ -145,7 +145,7 @@ provisioning server create --infra local-demo --check ### Step 5: Create Your Infrastructure -```text +```bash # Create the actual infrastructure provisioning server create --infra local-demo @@ -159,7 +159,7 @@ provisioning server list --infra local-demo Let's install a containerized service: -```text +```bash # Install Docker/containerd provisioning taskserv create containerd --infra local-demo @@ -171,7 +171,7 @@ provisioning taskserv list --infra local-demo For container orchestration: -```text +```bash # Install Kubernetes provisioning taskserv create kubernetes --infra local-demo @@ -180,7 +180,7 @@ provisioning taskserv create kubernetes --infra local-demo ### Checking Service Status -```text +```bash # Show all services on your infrastructure provisioning show servers --infra local-demo @@ -194,7 +194,7 @@ provisioning show servers web-01 taskserv kubernetes --infra local-demo All commands follow this pattern: -```text +```bash provisioning [global-options] [command-options] [arguments] ``` @@ -229,7 +229,7 @@ The system supports multiple environments: ### Switching Environments -```text +```bash # Set 
environment for this session export PROVISIONING_ENV=dev provisioning env @@ -242,7 +242,7 @@ provisioning --environment dev server create Create environment configs: -```text +```bash # Development environment provisioning init config dev @@ -254,7 +254,7 @@ provisioning init config prod ### Workflow 1: Development Environment -```text +```bash # 1. Create development workspace mkdir ~/dev-environment cd ~/dev-environment @@ -276,7 +276,7 @@ provisioning taskserv create containerd --infra dev-setup ### Workflow 2: Service Updates -```text +```bash # Check for service updates provisioning taskserv check-updates @@ -289,7 +289,7 @@ provisioning taskserv versions kubernetes ### Workflow 3: Infrastructure Scaling -```text +```bash # Add servers to existing infrastructure # Edit settings.ncl to add more servers @@ -304,14 +304,14 @@ provisioning taskserv create containerd --infra dev-setup ### Starting Interactive Shell -```text +```bash # Start Nushell with provisioning loaded provisioning nu ``` In the interactive shell, you have access to all provisioning functions: -```text +```nushell # Inside Nushell session use lib_provisioning * @@ -324,7 +324,7 @@ help commands | where name =~ "provision" ### Useful Interactive Commands -```text +```nushell # Show detailed server information find_servers "web-*" | table @@ -346,7 +346,7 @@ taskservs_list | where status == "running" ### Configuration Hierarchy -```text +```text Infrastructure settings.ncl ↓ (overrides) Environment config.{env}.toml @@ -358,7 +358,7 @@ System config.defaults.toml ### Customizing Your Configuration -```text +```bash # Edit user configuration provisioning sops ~/.provisioning/config.user.toml @@ -368,7 +368,7 @@ nano ~/.provisioning/config.user.toml Example customizations: -```text +```toml [debug] enabled = true # Enable debug mode by default log_level = "debug" # Verbose logging @@ -384,7 +384,7 @@ format = "json" # Prefer JSON output ### Checking System Status -```text +```bash # Overall system 
health provisioning env @@ -397,7 +397,7 @@ provisioning taskserv list --infra dev-setup ### Logging and Debugging -```text +```bash # Enable debug mode for troubleshooting provisioning --debug server create --infra dev-setup --check @@ -407,7 +407,7 @@ provisioning show logs --infra dev-setup ### Cost Monitoring -```text +```bash # Show cost estimates provisioning show cost --infra dev-setup @@ -440,7 +440,7 @@ provisioning server price --infra dev-setup ### 4. Development Workflow -```text +```bash # 1. Always validate before applying provisioning validate config --infra my-infra @@ -458,7 +458,7 @@ provisioning show servers --infra my-infra ### Built-in Help System -```text +```bash # General help provisioning help @@ -485,7 +485,7 @@ Let's walk through a complete example of setting up a web application infrastruc ### Step 1: Plan Your Infrastructure -```text +```bash # Create project workspace mkdir ~/webapp-infrastructure cd ~/webapp-infrastructure @@ -504,7 +504,7 @@ Edit `webapp/settings.ncl` to define: ### Step 3: Deploy Base Infrastructure -```text +```bash # Validate configuration provisioning validate config --infra webapp @@ -517,7 +517,7 @@ provisioning server create --infra webapp ### Step 4: Install Services -```text +```bash # Install container runtime on all servers provisioning taskserv create containerd --infra webapp @@ -530,7 +530,7 @@ provisioning taskserv create postgresql --infra webapp ### Step 5: Deploy Application -```text +```bash # Create application cluster provisioning cluster create webapp --infra webapp diff --git a/docs/src/getting-started/installation-guide.md b/docs/src/getting-started/installation-guide.md index b992736..4705f33 100644 --- a/docs/src/getting-started/installation-guide.md +++ b/docs/src/getting-started/installation-guide.md @@ -42,7 +42,7 @@ Before installation, ensure you have: ### Pre-installation Checklist -```text +```bash # Check your system uname -a # View system information df -h # Check available disk 
space @@ -57,7 +57,7 @@ This is the easiest method for most users. #### Step 1: Download the Package -```text +```bash # Download the latest release package wget https://releases.example.com/provisioning-latest.tar.gz @@ -67,7 +67,7 @@ curl -LO https://releases.example.com/provisioning-latest.tar.gz #### Step 2: Extract and Install -```text +```bash # Extract the package tar xzf provisioning-latest.tar.gz @@ -91,7 +91,7 @@ For containerized environments or testing. #### Using Docker -```text +```bash # Pull the provisioning container docker pull provisioning:latest @@ -108,7 +108,7 @@ sudo ln -sf /usr/local/provisioning/bin/provisioning /usr/local/bin/provisioning #### Using Podman -```text +```bash # Similar to Docker but with Podman podman pull provisioning:latest podman run -it --name provisioning-setup @@ -127,7 +127,7 @@ For developers or custom installations. #### Installation Steps -```text +```bash # Clone the repository git clone https://github.com/your-org/provisioning.git cd provisioning @@ -143,7 +143,7 @@ cd provisioning For advanced users who want complete control. -```text +```bash # Create installation directory sudo mkdir -p /usr/local/provisioning @@ -165,7 +165,7 @@ The installation process sets up: #### 1. 
Core System Files -```text +```bash /usr/local/provisioning/ ├── core/ # Core provisioning logic ├── providers/ # Cloud provider integrations @@ -200,7 +200,7 @@ The installation process sets up: ### Basic Verification -```text +```bash # Check if provisioning command is available provisioning --version @@ -213,7 +213,7 @@ provisioning allenv Expected output should show: -```text +```bash ✅ Provisioning v1.0.0 installed ✅ All dependencies available ✅ Configuration loaded successfully @@ -221,7 +221,7 @@ Expected output should show: ### Tool Verification -```text +```bash # Check individual tools nu --version # Should show Nushell 0.109.0+ nickel version # Should show Nickel 1.5+ @@ -232,7 +232,7 @@ k9s version # Should show K9s 0.50.6 ### Plugin Verification -```text +```bash # Start Nushell and check plugins nu -c "version | get installed_plugins" @@ -242,7 +242,7 @@ nu -c "version | get installed_plugins" ### Configuration Verification -```text +```bash # Validate configuration provisioning validate config @@ -256,7 +256,7 @@ provisioning validate config Add to your shell profile (`~/.bashrc`, `~/.zshrc`, or `~/.profile`): -```text +```bash # Add provisioning to PATH export PATH="/usr/local/bin:$PATH" @@ -266,7 +266,7 @@ export PROVISIONING="/usr/local/provisioning" ### Configuration Initialization -```text +```bash # Initialize user configuration provisioning init config @@ -275,7 +275,7 @@ provisioning init config ### First-Time Setup -```text +```bash # Set up your first workspace mkdir -p ~/provisioning-workspace cd ~/provisioning-workspace @@ -291,7 +291,7 @@ provisioning env ### Linux (Ubuntu/Debian) -```text +```bash # Install system dependencies sudo apt update sudo apt install -y curl wget tar @@ -305,7 +305,7 @@ sudo ./install-provisioning ### Linux (RHEL/CentOS/Fedora) -```text +```bash # Install system dependencies sudo dnf install -y curl wget tar # or for older versions: sudo yum install -y curl wget tar @@ -315,7 +315,7 @@ sudo dnf install -y curl 
wget tar ### macOS -```text +```bash # Using Homebrew (if available) brew install curl wget @@ -328,7 +328,7 @@ sudo ./install-provisioning ### Windows (WSL2) -```text +```bash # In WSL2 terminal sudo apt update sudo apt install -y curl wget tar @@ -344,7 +344,7 @@ wget https://releases.example.com/provisioning-latest.tar.gz Create `~/.provisioning/config.user.toml`: -```text +```toml [core] name = "my-provisioning" @@ -367,7 +367,7 @@ format = "yaml" For developers, use enhanced debugging: -```text +```toml [debug] enabled = true log_level = "debug" @@ -381,7 +381,7 @@ enabled = false # Disable caching during development ### Upgrading from Previous Version -```text +```bash # Backup current installation sudo cp -r /usr/local/provisioning /usr/local/provisioning.backup @@ -399,7 +399,7 @@ provisioning --version ### Migrating Configuration -```text +```bash # Backup your configuration cp -r ~/.provisioning ~/.provisioning.backup @@ -415,7 +415,7 @@ provisioning init config #### Permission Denied Errors -```text +```bash # Problem: Cannot write to /usr/local # Solution: Use sudo sudo ./install-provisioning @@ -427,7 +427,7 @@ export PATH="$HOME/provisioning/bin:$PATH" #### Missing Dependencies -```text +```bash # Problem: curl/wget not found # Ubuntu/Debian solution: sudo apt install -y curl wget tar @@ -438,7 +438,7 @@ sudo dnf install -y curl wget tar #### Download Failures -```text +```bash # Problem: Cannot download package # Solution: Check internet connection and try alternative ping google.com @@ -452,7 +452,7 @@ wget --tries=3 https://releases.example.com/provisioning-latest.tar.gz #### Extraction Failures -```text +```bash # Problem: Archive corrupted # Solution: Verify and re-download sha256sum provisioning-latest.tar.gz # Check against published hash @@ -464,7 +464,7 @@ wget https://releases.example.com/provisioning-latest.tar.gz #### Tool Installation Failures -```text +```bash # Problem: Nushell installation fails # Solution: Check architecture and OS 
compatibility uname -m # Should show x86_64 or arm64 @@ -478,7 +478,7 @@ uname -s # Should show Linux, Darwin, etc. #### Command Not Found -```text +```bash # Problem: 'provisioning' command not found # Check installation path ls -la /usr/local/bin/provisioning @@ -493,7 +493,7 @@ echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bashrc #### Plugin Errors -```text +```bash # Problem: Plugin command not found # Solution: Ensure plugin is properly registered @@ -506,7 +506,7 @@ exec nu #### Configuration Errors -```text +```bash # Problem: Configuration validation fails # Solution: Initialize with template provisioning init config diff --git a/docs/src/getting-started/installation-validation-guide.md b/docs/src/getting-started/installation-validation-guide.md index 97dbaf1..4928e26 100644 --- a/docs/src/getting-started/installation-validation-guide.md +++ b/docs/src/getting-started/installation-validation-guide.md @@ -16,7 +16,7 @@ Before running the bootstrap script, verify that your system has all required de Run these commands to verify your system meets minimum requirements: -```text +```bash # Check OS uname -s # Expected: Darwin (macOS), Linux, or WSL2 @@ -48,7 +48,7 @@ df -h | grep -E '^/dev|^Filesystem' Nushell is required for bootstrap and CLI operations: -```text +```bash command -v nu # Expected output: /path/to/nu @@ -58,7 +58,7 @@ nu --version **If Nushell is not installed:** -```text +```bash # macOS (using Homebrew) brew install nushell @@ -75,7 +75,7 @@ sudo yum install nushell Nickel is required for configuration validation: -```text +```bash command -v nickel # Expected output: /path/to/nickel @@ -85,7 +85,7 @@ nickel --version **If Nickel is not installed:** -```text +```bash # Install via Cargo (requires Rust) cargo install nickel-lang-cli @@ -96,7 +96,7 @@ cargo install nickel-lang-cli Docker is required for running containerized services: -```text +```bash command -v docker # Expected output: /path/to/docker @@ -112,7 +112,7 @@ Visit 
[Docker installation guide](https://docs.docker.com/get-docker/) and insta Verify the provisioning CLI binary exists: -```text +```bash ls -la /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning # Expected: -rwxr-xr-x (executable) @@ -122,13 +122,13 @@ file /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning **If binary is not executable:** -```text +```bash chmod +x /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning ``` ### Prerequisites Checklist -```text +```bash [ ] OS is macOS, Linux, or WSL2 [ ] CPU: 2+ cores available [ ] RAM: 2 GB minimum installed @@ -147,13 +147,13 @@ The bootstrap script automates 7 stages of installation and initialization. Run ### Step 2.1: Navigate to Project Root -```text +```bash cd /Users/Akasha/project-provisioning ``` ### Step 2.2: Run Bootstrap Script -```text +```bash ./provisioning/bootstrap/install.sh ``` @@ -161,7 +161,7 @@ cd /Users/Akasha/project-provisioning You should see output similar to this: -```text +```bash ╔════════════════════════════════════════════════════════════════╗ ║ PROVISIONING BOOTSTRAP (Bash) ║ ╚════════════════════════════════════════════════════════════════╝ @@ -241,7 +241,7 @@ After bootstrap completes, verify that all components are working correctly. Bootstrap should have created workspace directories. Verify they exist: -```text +```bash cd /Users/Akasha/project-provisioning # Check all required directories @@ -253,7 +253,7 @@ ls -la workspaces/workspace_librecloud/.clusters/ ``` **Expected Output**: -```text +```bash total 0 drwxr-xr-x 2 user group 64 Jan 7 10:30 . @@ -264,7 +264,7 @@ drwxr-xr-x 2 user group 64 Jan 7 10:30 . 
Bootstrap should have exported Nickel configuration to TOML format: -```text +```bash # Check generated files exist ls -la workspaces/workspace_librecloud/config/generated/ @@ -279,7 +279,7 @@ cat workspaces/workspace_librecloud/config/generated/platform/orchestrator.toml ``` **Expected Output**: -```text +```bash config/ ├── generated/ │ ├── workspace.toml @@ -293,7 +293,7 @@ config/ Verify Nickel configuration files have valid syntax: -```text +```bash cd /Users/Akasha/project-provisioning/workspaces/workspace_librecloud # Type-check main workspace config @@ -313,7 +313,7 @@ nu workspace.nu typecheck ``` **Expected Output**: -```text +```bash ✓ All files validated successfully ✓ infra/wuji/main.ncl ✓ infra/sgoyol/main.ncl @@ -323,7 +323,7 @@ nu workspace.nu typecheck The orchestrator service manages workflows and deployments: -```text +```bash # Check if orchestrator is running (health check) curl http://localhost:9090/health # Expected: {"status": "healthy"} or similar response @@ -337,7 +337,7 @@ ps aux | grep orchestrator ``` **Expected Output**: -```text +```json { "status": "healthy", "uptime": "0:05:23" @@ -348,7 +348,7 @@ ps aux | grep orchestrator Check logs and restart manually: -```text +```bash cd /Users/Akasha/project-provisioning/provisioning/platform/orchestrator # Check log file @@ -365,7 +365,7 @@ curl http://localhost:9090/health You can install the provisioning CLI globally for easier access: -```text +```bash # Option A: System-wide installation (requires sudo) cd /Users/Akasha/project-provisioning sudo ./scripts/install-provisioning.sh @@ -382,7 +382,7 @@ provisioning --version ``` **Expected Output**: -```text +```bash provisioning version 1.0.0 Usage: provisioning [OPTIONS] COMMAND @@ -396,7 +396,7 @@ Commands: ### Installation Validation Checklist -```text +```bash [ ] Workspace directories created (.orchestrator, .kms, .providers, .taskservs, .clusters) [ ] Generated TOML files exist in config/generated/ [ ] Nickel type-checking passes
(no errors) @@ -415,7 +415,7 @@ This section covers common issues and solutions. ### Issue: "Nushell not found" **Symptoms**: -```text +```bash ./provisioning/bootstrap/install.sh: line X: nu: command not found ``` @@ -427,7 +427,7 @@ This section covers common issues and solutions. ### Issue: "Nickel configuration validation failed" **Symptoms**: -```text +```bash ⚙️ Stage 4: Validating Configuration Error: Nickel configuration validation failed ``` @@ -441,7 +441,7 @@ Error: Nickel configuration validation failed ### Issue: "Docker not installed" **Symptoms**: -```text +```bash ❌ Docker is required but not installed ``` @@ -453,7 +453,7 @@ Error: Nickel configuration validation failed ### Issue: "Configuration export failed" **Symptoms**: -```text +```bash ⚠️ Configuration export encountered issues (may continue) ``` @@ -472,7 +472,7 @@ Error: Nickel configuration validation failed ### Issue: "Orchestrator didn't start" **Symptoms**: -```text +```bash 🚀 Stage 6: Initializing Orchestrator Service ⚠️ Orchestrator may not have started (check logs) @@ -492,7 +492,7 @@ curl http://localhost:9090/health ### Issue: "Sudo password prompt during bootstrap" **Symptoms**: -```text +```bash Stage 3: Creating Directory Structure [sudo] password for user: @@ -505,12 +505,12 @@ Stage 3: Creating Directory Structure ### Issue: "Permission denied" on binary **Symptoms**: -```text +```bash bash: ./provisioning/bootstrap/install.sh: Permission denied ``` **Solution**: -```text +```bash # Make script executable chmod +x /Users/Akasha/project-provisioning/provisioning/bootstrap/install.sh @@ -528,7 +528,7 @@ After successful installation validation, you can: To deploy infrastructure to UpCloud: -```text +```bash # Read workspace deployment guide cat workspaces/workspace_librecloud/docs/deployment-guide.md @@ -541,7 +541,7 @@ cat docs/deployment-guide.md To create a new workspace for different infrastructure: -```text +```bash provisioning workspace init my_workspace
--template minimal ``` @@ -549,7 +549,7 @@ provisioning workspace init my_workspace --template minimal Discover what's available to deploy: -```text +```bash # List available task services provisioning mod discover taskservs @@ -566,7 +566,7 @@ provisioning mod discover clusters After completing all steps, verify with this final checklist: -```text +```bash Prerequisites Verified: [ ] OS is macOS, Linux, or WSL2 [ ] CPU: 2+ cores diff --git a/docs/src/getting-started/quickstart-cheatsheet.md b/docs/src/getting-started/quickstart-cheatsheet.md index cd578d9..049b7a7 100644 --- a/docs/src/getting-started/quickstart-cheatsheet.md +++ b/docs/src/getting-started/quickstart-cheatsheet.md @@ -26,7 +26,7 @@ Native Nushell plugins for high-performance operations. **10-50x faster than HTT ### Authentication Plugin (nu_plugin_auth) -```text +```bash # Login (password prompted securely) auth login admin @@ -54,7 +54,7 @@ auth mfa verify --code ABCD-EFGH-IJKL # Backup code **Installation:** -```text +```bash cd provisioning/core/plugins/nushell-plugins cargo build --release -p nu_plugin_auth plugin add target/release/nu_plugin_auth @@ -64,7 +64,7 @@ plugin add target/release/nu_plugin_auth **Performance**: 10x faster encryption (~5 ms vs ~50 ms HTTP) -```text +```bash # Encrypt with auto-detected backend kms encrypt "secret data" # vault:v1:abc123... 
@@ -102,7 +102,7 @@ kms status **Installation:** -```text +```bash cargo build --release -p nu_plugin_kms plugin add target/release/nu_plugin_kms @@ -115,7 +115,7 @@ export RUSTYVAULT_TOKEN="hvs.xxxxx" **Performance**: 30-50x faster queries (~1 ms vs ~30-50 ms HTTP) -```text +```bash # Get orchestrator status (direct file access, ~1 ms) orch status # { active_tasks: 5, completed_tasks: 120, health: "healthy" } @@ -132,7 +132,7 @@ orch tasks --status failed --limit 10 **Installation:** -```text +```bash cargo build --release -p nu_plugin_orchestrator plugin add target/release/nu_plugin_orchestrator ``` @@ -154,7 +154,7 @@ plugin add target/release/nu_plugin_orchestrator ### Infrastructure Shortcuts -```text +```bash # Server shortcuts provisioning s # server (same as 'provisioning server') provisioning s create # Create servers @@ -186,7 +186,7 @@ provisioning i validate ### Orchestration Shortcuts -```text +```bash # Workflow shortcuts provisioning wf # workflow (same as 'provisioning workflow') provisioning flow # workflow (alias) @@ -217,7 +217,7 @@ provisioning orch logs ### Development Shortcuts -```text +```bash # Module shortcuts provisioning mod # module (same as 'provisioning module') provisioning mod discover taskserv @@ -251,7 +251,7 @@ provisioning pack clean ### Workspace Shortcuts -```text +```bash # Workspace shortcuts provisioning ws # workspace (same as 'provisioning workspace') provisioning ws init @@ -275,7 +275,7 @@ provisioning tpl validate ### Configuration Shortcuts -```text +```bash # Environment shortcuts provisioning e # env (same as 'provisioning env') provisioning val # validate (same as 'provisioning validate') @@ -296,7 +296,7 @@ provisioning allenv # Show all config and environment ### Utility Shortcuts -```text +```bash # List shortcuts provisioning l # list (same as 'provisioning list') provisioning ls # list (alias) @@ -334,7 +334,7 @@ provisioning plugin test nu_plugin_kms ### Generation Shortcuts -```text +```bash # Generate
shortcuts provisioning g # generate (same as 'provisioning generate') provisioning gen # generate (alias) @@ -347,7 +347,7 @@ provisioning g new ### Action Shortcuts -```text +```bash # Common actions provisioning c # create (same as 'provisioning create') provisioning d # delete (same as 'provisioning delete') @@ -369,7 +369,7 @@ provisioning csts # create-server-task (alias) ### Server Management -```text +```bash # Create servers provisioning server create provisioning server create --check # Dry-run mode @@ -396,7 +396,7 @@ provisioning server price --provider upcloud ### Taskserv Management -```text +```bash # Create taskserv provisioning taskserv create kubernetes provisioning taskserv create kubernetes --check @@ -421,7 +421,7 @@ provisioning taskserv check-updates --taskserv kubernetes ### Cluster Management -```text +```bash # Create cluster provisioning cluster create buildkit provisioning cluster create buildkit --check @@ -442,7 +442,7 @@ provisioning cluster list --infra wuji ### Workflow Management -```text +```bash # Submit server creation workflow nu -c "use core/nulib/workflows/server_create.nu *; server_create_workflow 'wuji' '' [] --check" @@ -475,7 +475,7 @@ nu -c "use core/nulib/workflows/management.nu *; workflow status " ### Batch Operations -```text +```bash # Submit batch workflow from Nickel provisioning batch submit workflows/example_batch.ncl nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.ncl" @@ -507,7 +507,7 @@ nu -c "use core/nulib/workflows/batch.nu *; batch stats" ### Orchestrator Management -```text +```bash # Start orchestrator in background cd provisioning/platform/orchestrator ./scripts/start-orchestrator.nu --background @@ -531,7 +531,7 @@ provisioning orchestrator logs ### Environment and Validation -```text +```bash # Show environment variables provisioning env @@ -548,7 +548,7 @@ provisioning setup ### Configuration Files -```text +```bash # System defaults less
provisioning/config/config.defaults.toml @@ -566,7 +566,7 @@ vim workspace/infra//config.toml ### HTTP Configuration -```text +```toml # Configure HTTP client behavior # In workspace/config/local-overrides.toml: [http] @@ -579,7 +579,7 @@ use_curl = true # Use curl instead of ureq ### Workspace Management -```text +```bash # List all workspaces provisioning workspace list @@ -617,7 +617,7 @@ provisioning workspace migrate ### User Preferences -```text +```bash # View user preferences provisioning workspace preferences @@ -642,7 +642,7 @@ provisioning workspace get-preference editor ### Authentication (via CLI) -```text +```bash # Login provisioning login admin @@ -658,7 +658,7 @@ provisioning auth sessions ### Multi-Factor Authentication (MFA) -```text +```bash # Enroll in TOTP (Google Authenticator, Authy) provisioning mfa totp enroll @@ -675,7 +675,7 @@ provisioning mfa devices ### Secrets Management -```text +```bash # Generate AWS STS credentials (15 min-12h TTL) provisioning secrets generate aws --ttl 1hr @@ -694,7 +694,7 @@ provisioning secrets cleanup ### SSH Temporal Keys -```text +```bash # Connect to server with temporal key provisioning ssh connect server01 --ttl 1hr @@ -710,7 +710,7 @@ provisioning ssh revoke ### KMS Operations (via CLI) -```text +```bash # Encrypt configuration file provisioning kms encrypt secure.yaml @@ -726,7 +726,7 @@ provisioning config decrypt workspace/infra/production/ ### Break-Glass Emergency Access -```text +```bash # Request emergency access provisioning break-glass request "Production database outage" @@ -742,7 +742,7 @@ provisioning break-glass revoke ### Compliance and Audit -```text +```bash # Generate compliance report provisioning compliance report provisioning compliance report --standard gdpr @@ -770,7 +770,7 @@ provisioning audit export --format json --output audit-logs.json ### Complete Deployment from Scratch -```text +```bash # 1. 
Initialize workspace provisioning workspace init --name production @@ -804,7 +804,7 @@ provisioning server ssh k8s-master-01 ### Multi-Environment Deployment -```text +```bash # Deploy to dev provisioning server create --infra dev --check provisioning server create --infra dev @@ -823,7 +823,7 @@ provisioning taskserv create kubernetes --infra production ### Update Infrastructure -```text +```bash # 1. Check for updates provisioning taskserv check-updates @@ -839,7 +839,7 @@ provisioning taskserv list --infra production | where name == kubernetes ### Encrypted Secrets Deployment -```text +```bash # 1. Authenticate auth login admin auth mfa verify --code 123456 @@ -862,7 +862,7 @@ orch tasks --status completed Enable verbose logging with `--debug` or `-x` flag: -```text +```bash # Server creation with debug output provisioning server create --debug provisioning server create -x @@ -878,7 +878,7 @@ provisioning --debug taskserv create kubernetes Preview changes without applying them with `--check` or `-c` flag: -```text +```bash # Check what servers would be created provisioning server create --check provisioning server create -c @@ -897,7 +897,7 @@ provisioning server create --check --debug Skip confirmation prompts with `--yes` or `-y` flag: -```text +```bash # Auto-confirm server creation provisioning server create --yes provisioning server create -y @@ -910,7 +910,7 @@ provisioning server delete --yes Wait for operations to complete with `--wait` or `-w` flag: -```text +```bash # Wait for server creation to complete provisioning server create --wait @@ -922,7 +922,7 @@ provisioning taskserv create kubernetes --wait Specify target infrastructure with `--infra` or `-i` flag: -```text +```bash # Create servers in specific infrastructure provisioning server create --infra production provisioning server create -i production @@ -937,7 +937,7 @@ provisioning server list --infra production ### JSON Output -```text +```bash # Output as JSON provisioning server list --out 
json provisioning taskserv list --out json @@ -948,7 +948,7 @@ provisioning server list --out json | jq '.[] | select(.status == "running")' ### YAML Output -```text +```bash # Output as YAML provisioning server list --out yaml provisioning taskserv list --out yaml @@ -959,7 +959,7 @@ provisioning server list --out yaml | yq '.[] | select(.status == "running")' ### Table Output (Default) -```text +```bash # Output as table (default) provisioning server list provisioning server list --out table @@ -970,7 +970,7 @@ provisioning server list | table ### Text Output -```text +```bash # Output as plain text provisioning server list --out text ``` @@ -981,7 +981,7 @@ provisioning server list --out text ### Use Plugins for Frequent Operations -```text +```bash # ❌ Slow: HTTP API (50 ms per call) for i in 1..100 { http post http://localhost:9998/encrypt { data: "secret" } } @@ -991,14 +991,14 @@ for i in 1..100 { kms encrypt "secret" } ### Batch Operations -```text +```bash # Use batch workflows for multiple operations provisioning batch submit workflows/multi-cloud-deploy.ncl ``` ### Check Mode for Testing -```text +```bash # Always test with --check first provisioning server create --check provisioning server create # Only after verification @@ -1010,7 +1010,7 @@ provisioning server create # Only after verification ### Command-Specific Help -```text +```bash # Show help for specific command provisioning help server provisioning help taskserv @@ -1028,7 +1028,7 @@ provisioning help config ### Bi-Directional Help -```text +```bash # All these work identically: provisioning help workspace provisioning workspace help @@ -1038,7 +1038,7 @@ provisioning help ws ### General Help -```text +```bash # Show all commands provisioning help provisioning --help @@ -1065,7 +1065,7 @@ provisioning --version ## Plugin Installation Quick Reference -```text +```bash # Build all plugins (one-time setup) cd provisioning/core/plugins/nushell-plugins cargo build --release --all diff --git
a/docs/src/getting-started/quickstart.md b/docs/src/getting-started/quickstart.md index d1f4164..c250579 100644 --- a/docs/src/getting-started/quickstart.md +++ b/docs/src/getting-started/quickstart.md @@ -13,7 +13,7 @@ Please see the complete quick start guide here: ## Quick Commands -```text +```bash # Check system status provisioning status diff --git a/docs/src/getting-started/setup-profiles.md b/docs/src/getting-started/setup-profiles.md index 0182c0d..7aaafbd 100644 --- a/docs/src/getting-started/setup-profiles.md +++ b/docs/src/getting-started/setup-profiles.md @@ -70,12 +70,12 @@ This guide provides detailed information about each setup profile and when to us #### Step 1: Run Setup -```text +```bash provisioning setup profile --profile developer ``` Output: -```text +```bash ╔═══════════════════════════════════════════════════════╗ ║ PROVISIONING SYSTEM SETUP - DEVELOPER PROFILE ║ ╚═══════════════════════════════════════════════════════╝ @@ -108,7 +108,7 @@ System automatically detects: Creates three Nickel configs: **system.ncl** - System info (read-only): -```text +```nickel { version = "1.0.0", config_base_path = "/Users/user/Library/Application Support/provisioning", @@ -124,7 +124,7 @@ Creates three Nickel configs: ``` **platform/deployment.ncl** - Deployment config (can edit): -```text +```nickel { deployment = { mode = 'docker_compose, @@ -149,7 +149,7 @@ Creates three Nickel configs: ``` **user_preferences.ncl** - User settings (can edit): -```text +```nickel { output_format = 'yaml, use_colors = true, @@ -163,7 +163,7 @@ Creates three Nickel configs: #### Step 4: Validation Each config is validated: -```text +```bash ✓ Validating system.ncl ✓ Validating platform/deployment.ncl ✓ Validating user_preferences.ncl @@ -173,7 +173,7 @@ Each config is validated: #### Step 5: Service Startup Docker Compose starts: -```text +```bash ✓ Starting Docker Compose services... ✓ Starting orchestrator... [port 9090] ✓ Starting control-center...
[port 3000] @@ -183,7 +183,7 @@ Docker Compose starts: #### Step 6: Verification Health checks verify services: -```text +```bash ✓ Orchestrator health: HEALTHY ✓ Control Center health: HEALTHY ✓ KMS health: HEALTHY @@ -194,32 +194,32 @@ Setup complete in 3 minutes 47 seconds! ### After Setup: Common Tasks **Verify everything works**: -```text +```bash curl http://localhost:9090/health curl http://localhost:3000/health curl http://localhost:3001/health ``` **View your configuration**: -```text +```bash cat ~/Library/Application\ Support/provisioning/system.ncl cat ~/Library/Application\ Support/provisioning/platform/deployment.ncl ``` **Create a workspace**: -```text +```bash provisioning workspace create myapp ``` **View logs**: -```text +```bash docker-compose logs orchestrator docker-compose logs control-center docker-compose logs kms ``` **Stop services**: -```text +```bash docker-compose down ``` @@ -277,7 +277,7 @@ docker-compose down #### Step 1: Run Setup -```text +```bash provisioning setup profile --profile production --interactive ``` @@ -289,7 +289,7 @@ Same as Developer profile - auto-detects OS, CPU, memory, etc. The wizard asks 10-15 questions: -```text +```bash 1. Deployment Mode?
a) Kubernetes (recommended for HA) b) SSH (manual server management) @@ -366,7 +366,7 @@ The wizard asks 10-15 questions: Creates extensive Nickel configs: **platform/deployment.ncl**: -```text +```nickel { deployment = { mode = 'kubernetes, @@ -393,7 +393,7 @@ Creates extensive Nickel configs: ``` **providers/upcloud.ncl**: -```text +```nickel { provider = 'upcloud, api_key_ref = "rustyvault://secrets/upcloud/api-key", @@ -405,7 +405,7 @@ Creates extensive Nickel configs: ``` **cedar-policies/default.cedar**: -```text +```cedar permit( principal == User::"john@company.com", action == Action::"Deploy", @@ -429,7 +429,7 @@ forbid( #### Step 5: Validation All configs validated: -```text +```bash ✓ Validating system.ncl ✓ Validating platform/deployment.ncl ✓ Validating providers/upcloud.ncl @@ -439,7 +439,7 @@ All configs validated: #### Step 6: Summary & Confirmation -```text +```bash Setup Summary ───────────────────────────────────────── Profile: Production @@ -457,7 +457,7 @@ Do you want to proceed? (y/n): y #### Step 7: Infrastructure Creation (Optional) -```text +```bash Creating UpCloud infrastructure... Creating 3 master nodes... [networking configured] Creating 5 worker nodes...
[networking configured] @@ -478,28 +478,28 @@ Deploy services: ### After Setup: Common Tasks **View Kubernetes cluster**: -```text +```bash kubectl get nodes kubectl get pods --all-namespaces ``` **Check Cedar authorization**: -```text +```bash cat ~/.config/provisioning/cedar-policies/default.cedar ``` **View infrastructure definition**: -```text +```bash cat workspace-production-infrastructure/infrastructure.ncl ``` **Deploy an application**: -```text +```bash provisioning app deploy myapp --workspace production-infrastructure ``` **Monitor cluster**: -```text +```bash # Access Grafana open http://localhost:3000 @@ -547,7 +547,7 @@ open http://localhost:9090 #### Example: GitHub Actions -```text +```yaml name: Integration Tests on: [push, pull_request] @@ -598,27 +598,27 @@ jobs: #### What Happens **Step 1: Minimal Detection** -```text +```bash ✓ Detected: CI environment ✓ Profile: CICD ``` **Step 2: Ephemeral Config Creation** -```text +```bash ✓ Created: /tmp/provisioning-ci-abc123def456/ ✓ Created: /tmp/provisioning-ci-abc123def456/system.ncl ✓ Created: /tmp/provisioning-ci-abc123def456/platform/deployment.ncl ``` **Step 3: Validation** -```text +```bash ✓ Validating system.ncl ✓ Validating platform/deployment.ncl ✓ All configurations validated: PASSED ``` **Step 4: Services Start** -```text +```bash ✓ Starting Docker Compose services ✓ Orchestrator running [port 9090] ✓ Control Center running [port 3000] @@ -627,7 +627,7 @@ jobs: ``` **Step 5: Tests Execute** -```text +```bash $ curl http://localhost:9090/health {"status": "healthy", "uptime": "2s"} @@ -639,7 +639,7 @@ All tests passed! ``` **Step 6: Automatic Cleanup** -```text +```bash ✓ Cleanup triggered (job exit) ✓ Stopping Docker Compose ✓ Removing temporary directory: /tmp/provisioning-ci-abc123def456/ @@ -650,7 +650,7 @@ All tests passed!
Use environment variables to customize: -```text +```bash # Provider (local or cloud) export PROVISIONING_PROVIDER=local|upcloud|aws|hetzner @@ -670,7 +670,7 @@ export PROVISIONING_CONFIG=/tmp/custom-config.ncl ### CI/CD Best Practices **1. Use matrix builds for testing**: -```text +```yaml strategy: matrix: profile: [developer, production] @@ -678,7 +678,7 @@ strategy: ``` **2. Cache Nickel compilation**: -```text +```yaml - uses: actions/cache@v3 with: path: ~/.cache/nickel @@ -686,7 +686,7 @@ strategy: ``` **3. Separate test stages**: -```text +```yaml - name: Setup (CI/CD Profile) - name: Test Unit - name: Test Integration @@ -694,7 +694,7 @@ strategy: ``` **4. Publish test results**: -```text +```yaml - name: Publish Test Results if: always() uses: actions/upload-artifact@v3 @@ -730,7 +730,7 @@ strategy: ### Migration Path -```text +```bash Developer → Production (ready for team) ↓ @@ -747,7 +747,7 @@ You can run Developer locally and CI/CD in your pipeline simultaneously. If you started with Developer and want to move to Production: -```text +```bash # Backup your current setup tar czf provisioning-backup.tar.gz ~/.config/provisioning/ @@ -763,7 +763,7 @@ tar xzf provisioning-backup.tar.gz All profiles' Nickel configs can be edited after setup: -```text +```bash # Edit deployment config vim ~/.config/provisioning/platform/deployment.ncl @@ -781,7 +781,7 @@ docker-compose restart # or kubectl apply -f ### Developer Profile **Problem**: Docker not running -```text +```bash # Solution: Start Docker docker daemon & # or @@ -789,7 +789,7 @@ sudo systemctl start docker ``` **Problem**: Ports 9090/3000/3001 already in use -```text +```bash # Solution: Kill conflicting process lsof -i :9090 | grep LISTEN | awk '{print $2}' | xargs kill -9 ``` @@ -797,14 +797,14 @@ lsof -i :9090 | grep LISTEN | awk '{print $2}' | xargs kill -9 ### Production Profile **Problem**: Kubernetes not installed -```text +```bash # Solution: Install kubectl brew install kubectl # macOS
sudo apt-get install kubectl # Linux ``` **Problem**: Cloud credentials rejected -```text +```bash # Solution: Verify credentials upcloud auth status # or aws sts get-caller-identity # Re-run setup with correct credentials @@ -813,13 +813,13 @@ upcloud auth status # or aws sts get-caller-identity ### CI/CD Profile **Problem**: Services not accessible from test -```text +```bash # Solution: Use service DNS curl http://orchestrator:9090/health # instead of localhost ``` **Problem**: Cleanup not working -```text +```bash # Solution: Manual cleanup docker system prune -f rm -rf /tmp/provisioning-ci-*/ diff --git a/docs/src/getting-started/setup-quickstart.md b/docs/src/getting-started/setup-quickstart.md index e1ee914..7656e97 100644 --- a/docs/src/getting-started/setup-quickstart.md +++ b/docs/src/getting-started/setup-quickstart.md @@ -4,7 +4,7 @@ ## Step 1: Check Prerequisites (30 seconds) -```text +```bash # Check Nushell nu --version # Should be 0.109.0+ @@ -17,7 +17,7 @@ systemctl --version ## Step 2: Install Provisioning (1 minute) -```text +```bash # Option A: Using installer script curl -sSL https://install.provisioning.dev | bash @@ -29,7 +29,7 @@ cd provisioning ## Step 3: Initialize System (2 minutes) -```text +```bash # Run interactive setup provisioning setup system --interactive @@ -41,7 +41,7 @@ provisioning setup system --interactive ## Step 4: Create Your First Workspace (1 minute) -```text +```bash # Create workspace provisioning setup workspace myapp @@ -51,7 +51,7 @@ provisioning workspace list ## Step 5: Deploy Your First Server (1 minute) -```text +```bash # Activate workspace provisioning workspace activate myapp @@ -67,7 +67,7 @@ provisioning server create --yes ## Verify Everything Works -```text +```bash # Check health provisioning platform health @@ -80,7 +80,7 @@ provisioning server ssh ## Common Commands Cheat Sheet -```text +```bash # Workspace management provisioning workspace list # List all workspaces provisioning workspace activate 
prod # Switch workspace @@ -106,7 +106,7 @@ provisioning platform health # Check platform health **Setup wizard won't start** -```text +```bash # Check Nushell nu --version @@ -116,7 +116,7 @@ chmod +x $(which provisioning) **Configuration error** -```text +```bash # Validate configuration provisioning setup validate --verbose @@ -126,7 +126,7 @@ provisioning info paths **Deployment fails** -```text +```bash # Dry-run to see what would happen provisioning server create --check @@ -146,7 +146,7 @@ After basic setup: ## Need Help -```text +```bash # Get help provisioning help diff --git a/docs/src/getting-started/setup-system-guide.md b/docs/src/getting-started/setup-system-guide.md index 6d271f0..7a8220e 100644 --- a/docs/src/getting-started/setup-system-guide.md +++ b/docs/src/getting-started/setup-system-guide.md @@ -15,7 +15,7 @@ ### 30-Second Setup -```text +```bash # Install provisioning curl -sSL https://install.provisioning.dev | bash @@ -37,7 +37,7 @@ provisioning server create ## Directory Structure -```text +```bash provisioning/ ├── system.toml # System info (immutable) ├── user_preferences.toml # User settings (editable) @@ -54,7 +54,7 @@ provisioning/ Run the interactive setup wizard: -```text +```bash provisioning setup system --interactive ``` @@ -93,7 +93,7 @@ The wizard guides you through: Create and manage multiple isolated environments: -```text +```bash # Create workspace provisioning setup workspace dev provisioning setup workspace prod @@ -109,7 +109,7 @@ provisioning workspace activate prod Update any setting: -```text +```bash # Update platform configuration provisioning setup platform --config new-config.toml @@ -122,7 +122,7 @@ provisioning setup validate ## Backup & Restore -```text +```bash # Backup current configuration provisioning setup backup --path ./backup.tar.gz @@ -137,25 +137,25 @@ provisioning setup migrate --from-existing ### "Command not found: provisioning" -```text +```bash export PATH="/usr/local/bin:$PATH" ``` ###
"Nushell not found" -```text +```bash curl -sSL https://raw.githubusercontent.com/nushell/nushell/main/install.sh | bash ``` ### "Cannot write to directory" -```text +```bash chmod 755 ~/Library/Application\ Support/provisioning/ ``` ### Check required tools -```text +```bash provisioning setup validate --check-tools ``` @@ -181,7 +181,7 @@ A: Yes, via GitOps - configurations in Git, secrets in secure storage. ## Getting Help -```text +```bash # General help provisioning help diff --git a/docs/src/getting-started/setup.md b/docs/src/getting-started/setup.md index 21338da..1a8ae1d 100644 --- a/docs/src/getting-started/setup.md +++ b/docs/src/getting-started/setup.md @@ -25,7 +25,7 @@ All profiles use **Nickel-first architecture**: configuration source of truth is ### Developer Profile (Recommended for First Time) -```text +```bash # Run unified setup provisioning setup profile --profile developer @@ -45,7 +45,7 @@ curl http://localhost:3001/health ``` Expected output: -```text +```bash ╔═════════════════════════════════════════════════════╗ ║ PROVISIONING SETUP - DEVELOPER PROFILE ║ ╚═════════════════════════════════════════════════════╝ @@ -62,7 +62,7 @@ Setup complete in ~4 minutes! ### Production Profile (HA, Security, Team Ready) -```text +```bash # Interactive setup for production provisioning setup profile --profile production --interactive @@ -84,7 +84,7 @@ nickel typecheck ~/.config/provisioning/platform/deployment.ncl ``` Expected config structure: -```text +```bash ~/.config/provisioning/ ├── system.ncl # System detection + capabilities ├── user_preferences.ncl # User settings (MFA, audit, etc.)
@@ -102,7 +102,7 @@ Expected config structure: ### CI/CD Profile (Automated, Ephemeral) -```text +```bash # Fully automated setup for pipelines export PROVISIONING_PROVIDER=local export PROVISIONING_WORKSPACE=ci-test-${CI_JOB_ID} @@ -126,7 +126,7 @@ provisioning setup profile --profile cicd ### Linux (XDG Base Directory) -```text +```bash # Primary location ~/.config/provisioning/ @@ -145,7 +145,7 @@ $XDG_CONFIG_HOME/provisioning/ ### macOS (Application Support) -```text +```bash # Platform-specific location ~/Library/Application Support/provisioning/ @@ -177,7 +177,7 @@ Provisioning detects: - **Memory**: Total system RAM in GB - **Disk Space**: Total available disk -```text +```bash # View detected system provisioning setup detect --verbose ``` @@ -193,7 +193,7 @@ You choose between: Setup creates Nickel configs using composition: -```text +```nickel # Example: system.ncl is composed from: let helpers = import "../../schemas/platform/common/helpers.ncl" let defaults = import "../../schemas/platform/defaults/system-defaults.ncl" @@ -213,7 +213,7 @@ Result: **Type-safe config**, guaranteed valid structure and values. 
All configs are validated: -```text +```bash # Done automatically during setup nickel typecheck ~/.config/provisioning/system.ncl nickel typecheck ~/.config/provisioning/platform/deployment.ncl @@ -225,19 +225,19 @@ nickel typecheck ~/.config/provisioning/**/*.ncl ### Step 5: Service Bootstrap (Profile-Dependent) **Developer**: Starts Docker Compose services locally -```text +```bash docker-compose up -d orchestrator control-center kms ``` **Production**: Outputs Kubernetes manifests (doesn't auto-start, you review first) -```text +```bash cat ~/.config/provisioning/platform/deployment.ncl # Review, then deploy to your cluster kubectl apply -f generated-from-deployment.ncl ``` **CI/CD**: Starts ephemeral Docker Compose in `/tmp` -```text +```bash # Automatic cleanup on job exit docker-compose -f /tmp/provisioning-ci-${JOB_ID}/compose.yml up # Tests run, cleanup automatic on script exit @@ -266,7 +266,7 @@ docker-compose -f /tmp/provisioning-ci-${JOB_ID}/compose.yml up **Time**: 3-4 minutes **Example**: -```text +```bash provisioning setup profile --profile developer # Output: @@ -303,7 +303,7 @@ provisioning setup profile --profile developer **Time**: 10-15 minutes (interactive, many questions) **Example**: -```text +```bash provisioning setup profile --profile production --interactive # Prompts: @@ -347,7 +347,7 @@ provisioning setup profile --profile production --interactive **Time**: Less than 2 minutes **Example**: -```text +```bash # In GitHub Actions: - name: Setup Provisioning run: | @@ -369,7 +369,7 @@ provisioning setup profile --profile production --interactive ### After Setup, Verify Everything Works **Developer Profile**: -```text +```bash # Check configs exist ls -la ~/.config/provisioning/ ls -la ~/.config/provisioning/platform/ @@ -387,7 +387,7 @@ curl http://localhost:3001/health ``` **Production Profile**: -```text +```bash # Check Nickel configs nickel typecheck ~/.config/provisioning/system.ncl nickel typecheck
~/.config/provisioning/platform/deployment.ncl @@ -404,7 +404,7 @@ cat ~/.config/provisioning/cedar-policies/default.cedar ``` **CI/CD Profile**: -```text +```bash # Check temp configs exist ls -la /tmp/provisioning-ci-*/ @@ -424,7 +424,7 @@ docker ps | grep provisioning **Cause**: Nickel binary not installed **Solution**: -```text +```bash # macOS brew install nickel @@ -444,7 +444,7 @@ nickel --version # Should be 1.5.0+ **Cause**: Nickel typecheck error in generated config **Solution**: -```text +```bash # See detailed error nickel typecheck ~/.config/provisioning/platform/deployment.ncl --color always @@ -463,7 +463,7 @@ provisioning setup profile --profile developer --verbose **Cause**: Docker not installed or not running **Solution**: -```text +```bash # Check Docker docker --version docker ps @@ -487,7 +487,7 @@ provisioning setup profile --profile developer **Cause**: Port already in use, Docker not running, or resource constraints **Solution**: -```text +```bash # Check what's using ports 9090, 3000, 3001 lsof -i :9090 lsof -i :3000 @@ -509,7 +509,7 @@ docker system prune # Free up space if needed **Cause**: Directory created with wrong permissions **Solution**: -```text +```bash # Fix permissions (macOS) chmod 700 ~/Library/Application\ Support/provisioning/ @@ -528,7 +528,7 @@ provisioning setup profile --profile developer **Cause**: Services reading from old location or wrong environment variable **Solution**: -```text +```bash # Verify service sees new location echo $PROVISIONING_CONFIG # Should be: ~/.config/provisioning/platform/deployment.ncl @@ -547,7 +547,7 @@ provisioning service status --verbose After initial setup, you can customize configs per workspace: -```text +```bash # Create workspace-specific override mkdir -p workspace-myproject/config cat > workspace-myproject/config/platform-overrides.ncl <<'EOF' @@ -631,7 +631,7 @@ Result: Minimal, validated, reproducible config.
## Getting Help -```text +```bash # Help for setup provisioning setup --help diff --git a/docs/src/guides/customize-infrastructure.md b/docs/src/guides/customize-infrastructure.md index f7d3b92..2dc7a93 100644 --- a/docs/src/guides/customize-infrastructure.md +++ b/docs/src/guides/customize-infrastructure.md @@ -20,7 +20,7 @@ This guide covers: The provisioning system uses a **3-layer architecture** for configuration inheritance: -```text +```plaintext ┌─────────────────────────────────────┐ │ Infrastructure Layer (Priority 300)│ ← Highest priority │ workspace/infra/{name}/ │ @@ -52,14 +52,14 @@ Higher numbers override lower numbers. ### View Layer Resolution -```text +```bash # Explain layer concept provisioning lyr explain ``` **Expected Output:** -```text +```bash 📚 LAYER SYSTEM EXPLAINED The layer system provides configuration inheritance across 3 levels: @@ -89,14 +89,14 @@ Resolution: Infrastructure → Workspace → Core Higher priority layers override lower ones. ``` -```text +```bash # Show layer resolution for your project provisioning lyr show my-production ``` **Expected Output:** -```text +```bash 📊 Layer Resolution for my-production: LAYER PRIORITY SOURCE FILES @@ -121,14 +121,14 @@ Status: ✅ All layers resolved successfully ### Test Layer Resolution -```text +```bash # Test how a specific module resolves provisioning lyr test kubernetes my-production ``` **Expected Output:** -```text +```bash 🔍 Layer Resolution Test: kubernetes → my-production Resolving kubernetes configuration...
@@ -171,14 +171,14 @@ Resolution: ✅ Success ### List Available Templates -```text +```bash # List all templates provisioning tpl list ``` **Expected Output:** -```text +```bash 📋 Available Templates: TASKSERVS: @@ -203,7 +203,7 @@ CLUSTERS: Total: 13 templates ``` -```text +```bash # List templates by type provisioning tpl list --type taskservs provisioning tpl list --type providers @@ -212,14 +212,14 @@ provisioning tpl list --type clusters ### View Template Details -```text +```bash # Show template details provisioning tpl show production-kubernetes ``` **Expected Output:** -```text +```bash 📄 Template: production-kubernetes Description: Production-ready Kubernetes configuration with @@ -250,14 +250,14 @@ Example Usage: ### Apply Template -```text +```bash # Apply template to your infrastructure provisioning tpl apply production-kubernetes my-production ``` **Expected Output:** -```text +```bash 🚀 Applying template: production-kubernetes → my-production Checking compatibility... ⏳ @@ -282,14 +282,14 @@ Next steps: ### Validate Template Usage -```text +```bash # Validate template was applied correctly provisioning tpl validate my-production ``` **Expected Output:** -```text +```bash ✅ Template Validation: my-production Templates Applied: @@ -314,7 +314,7 @@ Status: ✅ Valid ### Step 1: Create Template Structure -```text +```bash # Create custom template directory mkdir -p provisioning/workspace/templates/my-custom-template ``` @@ -323,7 +323,7 @@ mkdir -p provisioning/workspace/templates/my-custom-template **File: `provisioning/workspace/templates/my-custom-template/main.ncl`** -```text +```nickel # Custom Kubernetes template with specific settings let kubernetes_config = { # Version @@ -389,7 +389,7 @@ kubernetes_config **File: `provisioning/workspace/templates/my-custom-template/metadata.toml`** -```text +```toml [template] name = "my-custom-template" version = "1.0.0" @@ -409,7 +409,7 @@ features = ["security", "monitoring", "high-availability"] ### Step 4: Test 
Custom Template -```text +```bash # List templates (should include your custom template) provisioning tpl list @@ -426,7 +426,7 @@ provisioning tpl apply my-custom-template my-test **Core Layer** (`provisioning/extensions/taskservs/postgres/main.ncl`): -```text +```nickel let postgres_config = { version = "15.5", port = 5432, @@ -437,7 +437,7 @@ postgres_config **Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.ncl`): -```text +```nickel let postgres_config = { max_connections = 500, # Override only max_connections } in @@ -446,7 +446,7 @@ postgres_config **Result** (after layer resolution): -```text +```nickel let postgres_config = { version = "15.5", # From Core port = 5432, # From Core @@ -459,7 +459,7 @@ postgres_config **Workspace Layer** (`provisioning/workspace/templates/production-postgres.ncl`): -```text +```nickel let postgres_config = { replication = { enabled = true, @@ -472,7 +472,7 @@ postgres_config **Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.ncl`): -```text +```nickel let postgres_config = { replication = { sync_mode = "sync", # Override sync mode @@ -484,7 +484,7 @@ postgres_config **Result**: -```text +```nickel let postgres_config = { version = "15.5", # From Core port = 5432, # From Core @@ -503,7 +503,7 @@ postgres_config **Workspace Layer** (`provisioning/workspace/templates/base-kubernetes.ncl`): -```text +```nickel let kubernetes_config = { version = "1.30.0", control_plane_count = 3, @@ -518,7 +518,7 @@ kubernetes_config **Development Infrastructure** (`workspace/infra/my-dev/taskservs/kubernetes.ncl`): -```text +```nickel let kubernetes_config = { control_plane_count = 1, # Smaller for dev worker_count = 2, @@ -532,7 +532,7 @@ kubernetes_config **Production Infrastructure** (`workspace/infra/my-prod/taskservs/kubernetes.ncl`): -```text +```nickel let kubernetes_config = { control_plane_count = 5, # Larger for prod worker_count = 10, @@ -550,7
+550,7 @@ kubernetes_config Create different configurations for each environment: -```text +```bash # Create environments provisioning ws init my-app-dev provisioning ws init my-app-staging @@ -573,7 +573,7 @@ Create reusable configuration fragments: **File: `provisioning/workspace/templates/shared/security-policies.ncl`** -```text +```nickel let security_policies = { pod_security = { enforce = "restricted", @@ -603,7 +603,7 @@ security_policies Import in your infrastructure: -```text +```nickel let security_policies = (import "../../../provisioning/workspace/templates/shared/security-policies.ncl") in let kubernetes_config = { @@ -618,7 +618,7 @@ kubernetes_config Use Nickel features for dynamic configuration: -```text +```nickel # Calculate resources based on server count let server_count = 5 in let replicas_per_server = 2 in @@ -634,7 +634,7 @@ postgres_config ### Pattern 4: Conditional Configuration -```text +```nickel let environment = "production" in # or "development" let kubernetes_config = { @@ -651,14 +651,14 @@ kubernetes_config ## Layer Statistics -```text +```bash # Show layer system statistics provisioning lyr stats ``` **Expected Output:** -```text +```bash 📊 Layer System Statistics: Infrastructure Layer: @@ -686,7 +686,7 @@ Resolution Performance: ### Complete Customization Example -```text +```bash # 1. Create new infrastructure provisioning ws init my-custom-app @@ -728,7 +728,7 @@ provisioning t create kubernetes --infra my-custom-app ### 2. Template Organization -```text +```bash provisioning/workspace/templates/ ├── shared/ # Shared configuration fragments │ ├── security-policies.ncl @@ -749,7 +749,7 @@ Document your customizations: **File: `workspace/infra/my-production/README.md`** -```text +```markdown # My Production Infrastructure ## Customizations @@ -769,7 +769,7 @@ Document your customizations: Keep templates and configurations in version control: -```text +```bash cd provisioning/workspace/templates/ git add .
git commit -m "Add production Kubernetes template with enhanced security" @@ -783,7 +783,7 @@ git commit -m "Configure production environment for my-production" ### Issue: Configuration not applied -```text +```bash # Check layer resolution provisioning lyr show my-production @@ -796,7 +796,7 @@ provisioning lyr test kubernetes my-production ### Issue: Conflicting configurations -```text +```bash # Validate configuration provisioning val config --infra my-production @@ -806,7 +806,7 @@ provisioning show config kubernetes --infra my-production ### Issue: Template not found -```text +```bash # List available templates provisioning tpl list @@ -826,7 +826,7 @@ provisioning tpl refresh ## Quick Reference -```text +```bash # Layer system provisioning lyr explain # Explain layers provisioning lyr show # Show layer resolution diff --git a/docs/src/guides/extension-development-quickstart.md b/docs/src/guides/extension-development-quickstart.md index 1888f8d..7e4717a 100644 --- a/docs/src/guides/extension-development-quickstart.md +++ b/docs/src/guides/extension-development-quickstart.md @@ -28,7 +28,7 @@ This guide provides a hands-on walkthrough for developing custom extensions usin ### Step 1: Create Extension from Template -```text +```bash # Interactive creation (recommended for beginners) ./provisioning/tools/create-extension.nu interactive @@ -40,7 +40,7 @@ This guide provides a hands-on walkthrough for developing custom extensions usin ### Step 2: Navigate and Customize -```text +```bash # Navigate to your new extension cd extensions/taskservs/my-app @@ -56,7 +56,7 @@ ls -la Edit `main.ncl` to match your service requirements: -```text +```nickel # contracts.ncl - Define the schema { MyAppConfig = { @@ -92,7 +92,7 @@ let defaults = import "./defaults.ncl" in ### Step 4: Test Your Extension -```text +```bash # Test discovery ./provisioning/core/cli/module-loader discover taskservs | grep my-app @@ -105,7 +105,7 @@ nickel typecheck main.ncl ### Step 5: Use in Workspace
-```text +```bash # Create test workspace mkdir -p /tmp/test-my-app cd /tmp/test-my-app @@ -148,7 +148,7 @@ nickel export infra/default/servers.ncl ### Database Service Extension -```text +```bash # Create database service ./provisioning/tools/create-extension.nu taskserv company-db --author "Your Company" @@ -160,7 +160,7 @@ cd extensions/taskservs/company-db Edit the schema: -```text +```nickel # Database service configuration schema let CompanyDbConfig = { # Database settings @@ -189,7 +189,7 @@ CompanyDbConfig ### Monitoring Service Extension -```text +```bash # Create monitoring service ./provisioning/tools/create-extension.nu taskserv company-monitoring --author "Your Company" @@ -198,7 +198,7 @@ CompanyDbConfig Customize for Prometheus with company dashboards: -```text +```nickel # Monitoring service configuration let AlertManagerConfig = { smtp_server | String, @@ -227,7 +227,7 @@ CompanyMonitoringConfig ### Legacy System Integration -```text +```bash # Create legacy integration ./provisioning/tools/create-extension.nu taskserv legacy-bridge --author "Your Company" @@ -236,7 +236,7 @@ CompanyMonitoringConfig Customize for mainframe integration: -```text +```nickel # Legacy bridge configuration schema let LegacyBridgeConfig = { # Legacy system details @@ -263,7 +263,7 @@ LegacyBridgeConfig ### Custom Provider Development -```text +```bash # Create custom cloud provider ./provisioning/tools/create-extension.nu provider company-cloud --author "Your Company" @@ -272,7 +272,7 @@ LegacyBridgeConfig ### Complete Infrastructure Stack -```text +```bash # Create complete cluster configuration ./provisioning/tools/create-extension.nu cluster company-stack --author "Your Company" @@ -283,7 +283,7 @@ LegacyBridgeConfig ### Local Testing Workflow -```text +```bash # 1. Create test workspace mkdir test-workspace && cd test-workspace ../provisioning/tools/workspace-init.nu .
init @@ -307,7 +307,7 @@ nickel export servers.ncl Create `.github/workflows/test-extensions.yml`: -```text +```yaml name: Test Extensions on: [push, pull_request] diff --git a/docs/src/guides/from-scratch.md b/docs/src/guides/from-scratch.md index 91c9405..c46f9d0 100644 --- a/docs/src/guides/from-scratch.md +++ b/docs/src/guides/from-scratch.md @@ -53,7 +53,7 @@ Nushell 0.109.1+ is the primary shell and scripting language for the provisionin ### macOS (via Homebrew) -```text +```bash # Install Nushell brew install nushell @@ -66,7 +66,7 @@ nu --version **Ubuntu/Debian:** -```text +```bash # Add Nushell repository curl -fsSL https://starship.rs/install.sh | bash @@ -80,21 +80,21 @@ nu --version **Fedora:** -```text +```bash sudo dnf install nushell nu --version ``` **Arch Linux:** -```text +```bash sudo pacman -S nushell nu --version ``` ### Linux/macOS (via Cargo) -```text +```bash # Install Rust (if not already installed) curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source $HOME/.cargo/env @@ -108,7 +108,7 @@ nu --version ### Windows (via Winget) -```text +```bash # Install Nushell winget install nushell @@ -118,7 +118,7 @@ nu --version ### Configure Nushell -```text +```nushell # Start Nushell nu @@ -149,7 +149,7 @@ Native plugins provide **10-50x performance improvement** for authentication, KM ### Prerequisites for Building Plugins -```text +```bash # Install Rust toolchain (if not already installed) curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source $HOME/.cargo/env @@ -167,7 +167,7 @@ sudo apt install kwalletmanager # Ubuntu/Debian (KDE) ### Build Plugins -```text +```bash # Navigate to plugins directory cd provisioning/core/plugins/nushell-plugins @@ -185,7 +185,7 @@ cargo build --release --all ### Register Plugins with Nushell -```text +```nushell # Register all three plugins (full paths recommended) plugin add $PWD/target/release/nu_plugin_auth plugin add $PWD/target/release/nu_plugin_kms @@ -199,7 +199,7 @@ plugin
add target/release/nu_plugin_orchestrator ### Verify Plugin Installation -```text +```bash # List registered plugins plugin list | where name =~ "auth|kms|orch" @@ -220,7 +220,7 @@ orch --help # Should show orch commands ### Configure Plugin Environments -```text +```nushell # Add to ~/.config/nushell/env.nu $env.CONTROL_CENTER_URL = "http://localhost:3000" $env.RUSTYVAULT_ADDR = "http://localhost:8200" @@ -234,7 +234,7 @@ $env.AGE_RECIPIENT = "age1xxxxxxxxx" # Replace with your public key ### Test Plugins (Quick Smoke Test) -```text +```bash # Test KMS plugin (requires backend configured) kms status # Expected: { backend: "rustyvault", status: "healthy", ... } @@ -264,7 +264,7 @@ If you want to skip plugin installation for now: To use HTTP fallback: -```text +```bash # System automatically uses HTTP if plugins not available # No configuration changes needed ``` @@ -277,7 +277,7 @@ To use HTTP fallback: **SOPS (Secrets Management)** -```text +```bash # macOS brew install sops @@ -293,7 +293,7 @@ sops --version **Age (Encryption Tool)** -```text +```bash # macOS brew install age @@ -318,7 +318,7 @@ cat ~/.age/key.txt **K9s (Kubernetes Management)** -```text +```bash # macOS brew install k9s @@ -332,7 +332,7 @@ k9s version **glow (Markdown Renderer)** -```text +```bash # macOS brew install glow @@ -350,7 +350,7 @@ glow --version ### Clone Repository -```text +```bash # Clone project git clone https://github.com/your-org/project-provisioning.git cd project-provisioning @@ -361,7 +361,7 @@ git pull origin main ### Add CLI to PATH (Optional) -```text +```bash # Add to ~/.bashrc or ~/.zshrc export PATH="$PATH:/Users/Akasha/project-provisioning/provisioning/core/cli" @@ -381,7 +381,7 @@ A workspace is a self-contained environment for managing infrastructure.
### Create New Workspace -```text +```bash # Initialize new workspace provisioning workspace init --name production @@ -396,7 +396,7 @@ provisioning workspace init The new workspace initialization now generates **Nickel configuration files** for type-safe, schema-validated infrastructure definitions: -```text +```bash workspace/ ├── config/ │ ├── config.ncl # Master Nickel configuration (type-safe) @@ -423,7 +423,7 @@ The workspace configuration uses **Nickel (type-safe, validated)**. This provide **Example Nickel config** (`config.ncl`): -```text +```nickel { workspace = { name = "production", @@ -446,7 +446,7 @@ The workspace configuration uses **Nickel (type-safe, validated)**. This provide ### Verify Workspace -```text +```bash # Show workspace info provisioning workspace info @@ -462,7 +462,7 @@ provisioning workspace active Now you can inspect and validate your Nickel workspace configuration: -```text +```bash # View complete workspace configuration provisioning workspace config show @@ -498,12 +498,12 @@ provisioning workspace config hierarchy **UpCloud Provider:** -```text +```bash # Create provider config vim workspace/config/providers/upcloud.toml ``` -```text +```toml [upcloud] username = "your-upcloud-username" password = "your-upcloud-password" # Will be encrypted @@ -515,12 +515,12 @@ default_plan = "2xCPU-4 GB" **AWS Provider:** -```text +```bash # Create AWS config vim workspace/config/providers/aws.toml ``` -```text +```toml [aws] region = "us-east-1" access_key_id = "AKIAXXXXX" @@ -533,7 +533,7 @@ default_region = "us-east-1" ### Encrypt Sensitive Data -```text +```bash # Generate Age key if not done already age-keygen -o ~/.age/key.txt @@ -551,12 +551,12 @@ rm workspace/config/providers/upcloud.toml ### Configure Local Overrides -```text +```bash # Edit user-specific settings vim workspace/config/local-overrides.toml ``` -```text +```toml [user] name = "admin" email = "admin@example.com" @@ -580,7 +580,7 @@ ssh_key = "~/.ssh/id_ed25519" ###
Discover Available Modules -```text +```bash # Discover task services provisioning module discover taskserv # Shows: kubernetes, containerd, etcd, cilium, helm, etc. @@ -596,7 +596,7 @@ provisioning module discover cluster ### Load Modules into Workspace -```text +```bash # Load Kubernetes taskserv provisioning module load taskserv production kubernetes @@ -617,7 +617,7 @@ provisioning module list cluster production Before deploying, validate all configuration: -```text +```bash # Validate workspace configuration provisioning workspace validate @@ -636,7 +636,7 @@ provisioning allenv **Expected output:** -```text +```bash ✓ Configuration valid ✓ Provider credentials configured ✓ Workspace initialized @@ -653,7 +653,7 @@ provisioning allenv ### Preview Server Creation (Dry Run) -```text +```bash # Check what would be created (no actual changes) provisioning server create --infra production --check @@ -671,7 +671,7 @@ provisioning server create --infra production --check --debug ### Create Servers -```text +```bash # Create servers (with confirmation prompt) provisioning server create --infra production @@ -684,7 +684,7 @@ provisioning server create --infra production --wait **Expected output:** -```text +```bash Creating servers for infrastructure: production ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8 GB) @@ -701,7 +701,7 @@ Servers: ### Verify Server Creation -```text +```bash # List all servers provisioning server list --infra production @@ -721,7 +721,7 @@ Task services are infrastructure components like Kubernetes, databases, monitori ### Install Kubernetes (Check Mode First) -```text +```bash # Preview Kubernetes installation provisioning taskserv create kubernetes --infra production --check @@ -734,7 +734,7 @@ provisioning taskserv create kubernetes --infra production --check ### Install Kubernetes -```text +```bash # Install Kubernetes (with dependencies) provisioning taskserv create kubernetes --infra production @@ -749,7 +749,7 @@ provisioning
workflow monitor **Expected output:** -```text +```bash Installing taskserv: kubernetes ● Installing containerd on k8s-master-01 @@ -777,7 +777,7 @@ Cluster Info: ### Install Additional Services -```text +```bash # Install Cilium (CNI) provisioning taskserv create cilium --infra production @@ -796,7 +796,7 @@ Clusters are complete application stacks (for example, BuildKit, OCI Registry, M ### Create BuildKit Cluster (Check Mode) -```text +```bash # Preview cluster creation provisioning cluster create buildkit --infra production --check @@ -809,7 +809,7 @@ provisioning cluster create buildkit --infra production --check ### Create BuildKit Cluster -```text +```bash # Create BuildKit cluster provisioning cluster create buildkit --infra production @@ -822,7 +822,7 @@ orch tasks --status running **Expected output:** -```text +```bash Creating cluster: buildkit ● Deploying BuildKit daemon @@ -841,7 +841,7 @@ Cluster Info: ### Verify Cluster -```text +```bash # List all clusters provisioning cluster list --infra production @@ -858,7 +858,7 @@ kubectl get pods -n buildkit ### Comprehensive Health Check -```text +```bash # Check orchestrator status orch status # or @@ -880,7 +880,7 @@ kubectl get pods --all-namespaces ### Run Validation Tests -```text +```bash # Validate infrastructure provisioning infra validate --infra production @@ -907,7 +907,7 @@ All checks should show: ### Configure kubectl Access -```text +```bash # Get kubeconfig from master node provisioning server ssh k8s-master-01 "cat ~/.kube/config" > ~/.kube/config-production @@ -921,7 +921,7 @@ kubectl get pods --all-namespaces ### Set Up Monitoring (Optional) -```text +```bash # Deploy monitoring stack provisioning cluster create monitoring --infra production @@ -932,7 +932,7 @@ kubectl port-forward -n monitoring svc/grafana 3000:80 ### Configure CI/CD Integration (Optional) -```text +```bash # Generate CI/CD credentials provisioning secrets generate aws --ttl 12h @@ -943,7 +943,7 @@ kubectl create
clusterrolebinding ci-cd --clusterrole=admin --serviceaccount=def ### Backup Configuration -```text +```bash # Backup workspace configuration tar -czf workspace-production-backup.tar.gz workspace/ @@ -962,7 +962,7 @@ kms encrypt (open workspace-production-backup.tar.gz | encode base64) --backend **Problem**: Server creation times out or fails -```text +```bash # Check provider credentials provisioning validate config @@ -977,7 +977,7 @@ provisioning server create --infra production --check --debug **Problem**: Kubernetes installation fails -```text +```bash # Check server connectivity provisioning server ssh k8s-master-01 @@ -996,7 +996,7 @@ provisioning taskserv create kubernetes --infra production **Problem**: `auth`, `kms`, or `orch` commands not found -```text +```bash # Check plugin registration plugin list | where name =~ "auth|kms|orch" @@ -1015,7 +1015,7 @@ nu **Problem**: `kms encrypt` returns error -```text +```bash # Check backend status kms status @@ -1033,7 +1033,7 @@ cat ~/.age/key.txt **Problem**: `orch status` returns error -```text +```bash # Check orchestrator status ps aux | grep orchestrator @@ -1049,7 +1049,7 @@ tail -f provisioning/platform/orchestrator/data/orchestrator.log **Problem**: `provisioning validate config` shows errors -```text +```bash # Show detailed errors provisioning validate config --debug @@ -1109,7 +1109,7 @@ vim workspace/config/local-overrides.toml ### Get Help -```text +```bash # Show help for any command provisioning help provisioning help server diff --git a/docs/src/guides/guide-system.md b/docs/src/guides/guide-system.md index 062bf7d..3988a0b 100644 --- a/docs/src/guides/guide-system.md +++ b/docs/src/guides/guide-system.md @@ -35,7 +35,7 @@ A comprehensive interactive guide system providing copy-paste ready commands and For best viewing experience, install `glow` (markdown terminal renderer): -```text +```bash # macOS brew install glow @@ -54,7 +54,7 @@ go install github.com/charmbracelet/glow@latest ## Quick Start
with Guides -```text +```bash # Show quick reference (fastest) provisioning sc @@ -122,7 +122,7 @@ provisioning guide list The guide system is integrated into the help system: -```text +```bash # Show guide help provisioning help guides diff --git a/docs/src/guides/infrastructure-setup.md b/docs/src/guides/infrastructure-setup.md index ca0c5a5..6eff08c 100644 --- a/docs/src/guides/infrastructure-setup.md +++ b/docs/src/guides/infrastructure-setup.md @@ -8,7 +8,7 @@ ### 1. Generate Infrastructure Configs (Solo Mode) -```text +```bash cd project-provisioning # Generate solo deployment (Docker Compose, Nginx, Prometheus, OCI Registry) @@ -20,7 +20,7 @@ jq . /tmp/solo-infra.json ### 2. Validate Generated Configs -```text +```bash # Solo deployment validation bash provisioning/platform/scripts/validate-infrastructure.nu --config-dir provisioning/platform/infrastructure @@ -29,7 +29,7 @@ bash provisioning/platform/scripts/validate-infrastructure.nu --config-dir provi ### 3. Compare Solo vs Enterprise -```text +```bash # Export both examples nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl > /tmp/solo.json nickel export --format json provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl > /tmp/enterprise.json @@ -80,7 +80,7 @@ echo "=== Enterprise Prometheus Jobs ===" && jq '.prometheus_config.scrape_confi ### Two-Tier Configuration System **Platform Config Layer** (Service-Internal): -```text +```plaintext Orchestrator port, database host, logging level ↓ ConfigLoader (Rust) @@ -89,7 +89,7 @@ Service reads TOML from runtime/generated/ ``` **Infrastructure Config Layer** (Deployment-External): -```text +```plaintext Docker Compose services, Nginx routing, Prometheus scrape jobs ↓ nickel export → YAML/JSON @@ -99,7 +99,7 @@ Docker/Kubernetes/Nginx deploys infrastructure ### Complete Deployment Workflow -```text +```plaintext 1.
Choose platform config mode provisioning/platform/config/examples/orchestrator.solo.example.ncl ↓ @@ -126,7 +126,7 @@ Docker/Kubernetes/Nginx deploys infrastructure ### Solo Mode (Development) -```text +```bash Orchestrator: 1.0 CPU, 1024M RAM (1 replica) Control Center: 0.5 CPU, 512M RAM CoreDNS: 0.25 CPU, 256M RAM @@ -139,7 +139,7 @@ Use Case: Development, testing, PoCs ### Enterprise Mode (Production) -```text +```bash Orchestrator: 4.0 CPU, 4096M RAM (3 replicas) Control Center: 2.0 CPU, 2048M RAM (HA) CoreDNS: 1.0 CPU, 1024M RAM @@ -156,19 +156,19 @@ Use Case: Production deployments, high availability ### Generate Solo Infrastructure -```text +```bash nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl ``` ### Generate Enterprise Infrastructure -```text +```bash nickel export --format json provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl ``` ### Validate JSON Structure -```text +```bash jq '.docker_compose_services | keys' /tmp/infra.json jq '.prometheus_config.scrape_configs | length' /tmp/infra.json jq '.oci_registry_config.backend' /tmp/infra.json @@ -176,7 +176,7 @@ jq '.oci_registry_config.backend' /tmp/infra.json ### Check Resource Limits -```text +```bash # All services in solo mode jq '.docker_compose_services[] | {name: .name, cpu: .deploy.resources.limits.cpus, memory: .deploy.resources.limits.memory}' /tmp/solo.json @@ -186,7 +186,7 @@ jq '.docker_compose_services.orchestrator.deploy.resources.limits' /tmp/solo.jso ### Compare Modes -```text +```bash # Services count jq '.docker_compose_services | length' /tmp/solo.json # 5 services jq '.docker_compose_services | length' /tmp/enterprise.json # 6 services @@ -206,7 +206,7 @@ jq -r '.oci_registry_config.backend' /tmp/enterprise.json # Harbor ### Type Check Schemas -```text +```bash nickel typecheck provisioning/schemas/infrastructure/docker-compose.ncl nickel typecheck provisioning/schemas/infrastructure/kubernetes.ncl nickel typecheck 
provisioning/schemas/infrastructure/nginx.ncl @@ -217,14 +217,14 @@ nickel typecheck provisioning/schemas/infrastructure/oci-registry.ncl ### Validate Examples -```text +```bash nickel typecheck provisioning/schemas/infrastructure/examples-solo-deployment.ncl nickel typecheck provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl ``` ### Test Export -```text +```bash nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl | jq . ``` @@ -234,14 +234,14 @@ nickel export --format json provisioning/schemas/infrastructure/examples-solo-de ### Solo Platform Config -```text +```bash nickel export --format toml provisioning/platform/config/examples/orchestrator.solo.example.ncl # Output: TOML with [database], [logging], [monitoring], [workspace] sections ``` ### Enterprise Platform Config -```text +```bash nickel export --format toml provisioning/platform/config/examples/orchestrator.enterprise.example.ncl # Output: TOML with HA, S3, Redis, tracing configuration ``` @@ -252,7 +252,7 @@ nickel export --format toml provisioning/platform/config/examples/orchestrator.e ### Platform Configs (services internally) -```text +```bash provisioning/platform/config/ ├── runtime/generated/*.toml # Auto-generated by ConfigLoader ├── examples/ # Reference implementations @@ -264,7 +264,7 @@ provisioning/platform/config/ ### Infrastructure Schemas -```text +```bash provisioning/schemas/infrastructure/ ├── docker-compose.ncl # 232 lines ├── kubernetes.ncl # 376 lines @@ -279,7 +279,7 @@ provisioning/schemas/infrastructure/ ### TypeDialog Integration -```text +```bash provisioning/platform/.typedialog/provisioning/platform/ ├── forms/ # Ready for auto-generated forms ├── templates/service-form.template.j2 @@ -290,7 +290,7 @@ provisioning/platform/.typedialog/provisioning/platform/ ### Automation Scripts -```text +```bash provisioning/platform/scripts/ ├── generate-infrastructure-configs.nu # Generate all configs ├──
validate-infrastructure.nu # Validate with tools diff --git a/docs/src/guides/internationalization-system.md b/docs/src/guides/internationalization-system.md index 800c5b2..61368ec 100644 --- a/docs/src/guides/internationalization-system.md +++ b/docs/src/guides/internationalization-system.md @@ -14,7 +14,7 @@ for localization at scale. ### Directory Structure -```text +```bash provisioning/locales/ ├── i18n-config.toml # Locale metadata and fallback chains ├── en-US/ @@ -35,7 +35,7 @@ provisioning/locales/ Fluent files use a simple key-value format: -```text +```fluent # Comments start with # key = value category-name = Category Name @@ -54,7 +54,7 @@ help-infrastructure-desc = Server, taskserv, cluster, and VM management The system automatically detects the active language via the `LANG` environment variable: -```text +```bash # English (default) LANG=en_US provisioning help @@ -81,7 +81,7 @@ The help system (`help_system_fluent.nu`) uses three key functions: #### 1. Locale Detection -```text +```nushell def get-active-locale [] { # Parses LANG env var and returns locale code # Example: "es_ES.UTF-8" → "es-ES" @@ -90,7 +90,7 @@ def get-active-locale [] { #### 2. Fluent Parsing -```text +```nushell def parse-fluent [content: string] { # Parses .ftl file format into record of key-value pairs # Skips comments and empty lines @@ -99,7 +99,7 @@ def parse-fluent [content: string] { #### 3.
String Lookup with Fallback -```text +```nushell def get-help-string [key: string] { # Looks up key in active locale # Falls back to en-US if not found @@ -111,7 +111,7 @@ def get-help-string [key: string] { Help functions precompute all strings, then use them in print statements: -```text +```nushell def help-infrastructure [] { let title = (get-help-string "help-infrastructure-title") let server = (get-help-string "help-infra-server") @@ -132,14 +132,14 @@ def help-infrastructure [] { TypeDialog forms automatically load translations when configured: -```text +```json # In form definition "locales_path": "provisioning/locales" ``` Form labels and validation messages use the same Fluent files: -```text +```fluent form-label-password = Contraseña form-error-password-required = La contraseña es obligatoria form-placeholder-email = correo@ejemplo.com @@ -149,7 +149,7 @@ form-placeholder-email = correo@ejemplo.com ### Step 1: Create Locale Directory -```text +```bash mkdir -p provisioning/locales/fr-FR touch provisioning/locales/fr-FR/help.ftl touch provisioning/locales/fr-FR/forms.ftl @@ -159,7 +159,7 @@ touch provisioning/locales/fr-FR/forms.ftl Use English files as template: -```text +```bash cp provisioning/locales/en-US/help.ftl provisioning/locales/fr-FR/help.ftl ``` @@ -167,7 +167,7 @@ cp provisioning/locales/en-US/help.ftl provisioning/locales/fr-FR/help.ftl Edit the new files and translate all values: -```text +```fluent # Before (English template) help-main-title = PROVISIONING SYSTEM help-main-subtitle = Layered Infrastructure Automation @@ -183,7 +183,7 @@ help-main-subtitle = Automation Infrastructurelle Stratifiée Add the new locale to `provisioning/locales/i18n-config.toml`: -```text +```toml [locales.fr-FR] name = "French (France)" @@ -193,7 +193,7 @@ fr-FR = ["en-US"] ### Step 5: Test -```text +```bash # Test with the new locale LANG=fr_FR provisioning help @@ -205,7 +205,7 @@ LANG=fr_FR provisioning help infrastructure When a translation is missing, the
system follows this chain: -```text +``` Active Locale (es-ES) ↓ Fallback Locale (defined in i18n-config.toml) @@ -215,7 +215,7 @@ English (en-US) - Final fallback Example configuration: -```text +```toml [fallback_chains] es-ES = ["en-US"] # Spanish falls back to English pt-BR = ["pt-PT", "en-US"] # Brazilian Portuguese → Portuguese → English @@ -226,7 +226,7 @@ pt-BR = ["pt-PT", "en-US"] # Brazilian Portuguese → Portuguese → English ### Command Names **MUST NOT be translated** - keep in English: -```text +```fluent # WRONG ❌ help-main-infrastructure-name = infraestructura @@ -239,7 +239,7 @@ Reason: Command aliases must remain consistent across all languages for CLI comp ### Descriptions **SHOULD be translated** - translate descriptions fully: -```text +```fluent help-main-infrastructure-desc = Server, taskserv, cluster, VM, and infra management # Spanish: help-main-infrastructure-desc = Gestión de servidores, taskserv, clusters, VM e infraestructura @@ -248,7 +248,7 @@ help-main-infrastructure-desc = Gestión de servidores, taskserv, clusters, VM e ### Error Messages **SHOULD be translated** - translate fully for user experience: -```text +```fluent help-error-unknown-category = Unknown help category # Spanish: help-error-unknown-category = Categoría de ayuda desconocida @@ -257,7 +257,7 @@ help-error-unknown-category = Categoría de ayuda desconocida ### Options and Flags **MUST NOT be translated** - keep in English: -```text +```fluent help-orch-start = Start orchestrator [--background] # Spanish: help-orch-start = Iniciar orquestador [--background] # Keep [--background] as-is @@ -267,7 +267,7 @@ help-orch-start = Iniciar orquestador [--background] # Keep [--background] as-i ### From Nushell Code -```text +```nushell use provisioning/core/nulib/main_provisioning/help_system_fluent.nu * # Get a translated string @@ -279,7 +279,7 @@ print $title # Output depends on LANG env var Forms automatically use the configured locale path: -```text +```json # Form config
"locales_path": "provisioning/locales" @@ -297,7 +297,7 @@ Forms automatically use the configured locale path: ### Manual Testing -```text +```bash # Test English LANG=en_US nu -c 'use provisioning/core/nulib/main_provisioning/help_system_fluent.nu *; provisioning-help' @@ -316,7 +316,7 @@ LANG=fr_FR provisioning-help Run the comprehensive test suite: -```text +```nushell nu -c 'source tests/test-multilingual-help.nu; test-all' ``` @@ -348,7 +348,7 @@ For future improvements: ### Translation Not Appearing **Problem**: Help shows English instead of Spanish -```text +```bash # Check LANG=es_ES provisioning-help # Output: English text diff --git a/docs/src/guides/multi-provider-deployment.md b/docs/src/guides/multi-provider-deployment.md index 84b966c..aa7fb8d 100644 --- a/docs/src/guides/multi-provider-deployment.md +++ b/docs/src/guides/multi-provider-deployment.md @@ -170,7 +170,7 @@ Spaces (affordable object storage). ### Multi-Provider Workspace Structure -```text +```bash provisioning/examples/workspaces/my-multi-provider-app/ ├── workspace.ncl # Infrastructure definition ├── config.toml # Provider credentials, regions, defaults @@ -184,7 +184,7 @@ provisioning/examples/workspaces/my-multi-provider-app/ Each provider requires authentication via environment variables: -```text +```bash # Hetzner export HCLOUD_TOKEN="your-hetzner-api-token" @@ -203,7 +203,7 @@ export DIGITALOCEAN_TOKEN="your-do-api-token" #### Configuration File Structure (config.toml) -```text +```toml [providers] [providers.hetzner] @@ -239,7 +239,7 @@ owner = "platform-team" Nickel workspace with multiple providers: -```text +```nickel # workspace.ncl - Multi-provider infrastructure definition let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in @@ -316,8 +316,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n #### Architecture -```text - ┌─────────────────────┐ +``` ┌─────────────────────┐ │ Client Requests │ └──────────┬──────────┘ │ 
@@ -332,7 +331,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n #### Nickel Configuration -```text +```nickel let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in let aws = import "../../extensions/providers/aws/nickel/main.ncl" in @@ -378,7 +377,7 @@ let aws = import "../../extensions/providers/aws/nickel/main.ncl" in Hetzner servers connect to AWS RDS via VPN or public endpoint: -```text +```nushell # Network setup script def setup_database_connection [] { let hetzner_servers = (hetzner_list_servers) @@ -415,8 +414,7 @@ Monthly estimate: #### Architecture -```text - Primary (DigitalOcean NYC) Backup (Hetzner DE) +``` Primary (DigitalOcean NYC) Backup (Hetzner DE) ┌──────────────────────┐ ┌─────────────────┐ │ DigitalOcean LB │◄────────►│ HAProxy Monitor │ └──────────┬───────────┘ └────────┬────────┘ @@ -434,7 +432,7 @@ Monthly estimate: #### Failover Trigger -```text +```nushell def monitor_primary_health [do_region, hetzner_region] { loop { let health = (do_health_check $do_region) @@ -466,7 +464,7 @@ def trigger_failover [backup_region] { #### Nickel Configuration -```text +```nickel let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in @@ -530,7 +528,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in #### Failover Testing -```text +```nushell # Test failover without affecting production def test_failover_dry_run [config] { print "Starting failover dry-run test..."
@@ -574,8 +572,7 @@ def test_failover_dry_run [config] { #### Architecture -```text - ┌─────────────────┐ +``` ┌─────────────────┐ │ Global DNS │ │ (Geofencing) │ └────────┬────────┘ @@ -596,7 +593,7 @@ def test_failover_dry_run [config] { #### Global Load Balancing -```text +```bash def setup_global_dns [] { # Using Route53 or Cloudflare for DNS failover let regions = [ @@ -622,7 +619,7 @@ def setup_global_dns [] { #### Nickel Configuration -```text +```json { regions = { us_east = { @@ -680,7 +677,7 @@ def setup_global_dns [] { #### Data Synchronization -```text +```bash # Multi-region data sync strategy def sync_data_across_regions [primary_region, secondary_regions] { let sync_config = { @@ -723,8 +720,7 @@ def sync_data_across_regions [primary_region, secondary_regions] { #### Architecture -```text - On-Premises Data Center Public Cloud (Burst) +``` On-Premises Data Center Public Cloud (Burst) ┌─────────────────────────┐ ┌────────────────────┐ │ Physical Servers │◄────►│ AWS Auto-Scaling │ │ - App Tier (24 cores) │ │ - Elasticity │ @@ -745,7 +741,7 @@ def sync_data_across_regions [primary_region, secondary_regions] { #### VPN Configuration -```text +```toml def setup_hybrid_vpn [] { # AWS VPN to on-premise datacenter let vpn_config = { @@ -771,7 +767,7 @@ def setup_hybrid_vpn [] { #### Nickel Configuration -```text +```json { on_premises = { provider = "manual", @@ -837,7 +833,7 @@ def setup_hybrid_vpn [] { #### Burst Capacity Orchestration -```text +```bash # Monitor on-prem and trigger AWS burst def monitor_and_burst [] { loop { @@ -872,7 +868,7 @@ def monitor_and_burst [] { **workspace.ncl**: -```text +```javascript let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in let aws = import "../../extensions/providers/aws/nickel/main.ncl" in let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in @@ -957,7 +953,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in **config.toml**: 
-```text +```toml [workspace] name = "three-provider-webapp" environment = "production" @@ -988,7 +984,7 @@ rollback_on_failure = true **deploy.nu**: -```text +```nushell #!/usr/bin/env nu # Deploy three-provider web application @@ -1157,7 +1153,7 @@ main $env.ENVIRONMENT? **Diagnosis**: -```text +```bash # Check network connectivity def diagnose_network_issue [source_ip, dest_ip] { print "Diagnosing network connectivity..." @@ -1190,7 +1186,7 @@ def diagnose_network_issue [source_ip, dest_ip] { **Diagnosis**: -```text +```bash def check_replication_lag [] { # AWS RDS aws rds describe-db-instances --query 'DBInstances[].{ID:DBInstanceIdentifier,Lag:ReplicationLag}' @@ -1213,7 +1209,7 @@ def check_replication_lag [] { **Diagnosis**: -```text +```bash def test_failover_chain [] { # 1. Verify backup infrastructure is ready verify_backup_infrastructure @@ -1243,7 +1239,7 @@ def test_failover_chain [] { **Diagnosis**: -```text +```bash def analyze_cost_spike [] { print "Analyzing cost spike..." 
diff --git a/docs/src/guides/multi-provider-networking.md b/docs/src/guides/multi-provider-networking.md index d083622..89612c4 100644 --- a/docs/src/guides/multi-provider-networking.md +++ b/docs/src/guides/multi-provider-networking.md @@ -26,7 +26,7 @@ Multi-provider deployments require secure, private communication between resourc ### Architecture -```text +``` ┌──────────────────────────────────┐ │ DigitalOcean VPC │ │ Network: 10.0.0.0/16 │ @@ -70,7 +70,7 @@ Multi-provider deployments require secure, private communication between resourc - Private networking without NAT **Configuration**: -```text +```bash # Create private network hcloud network create --name "app-network" --ip-range "10.0.0.0/16" @@ -100,7 +100,7 @@ hcloud server attach-to-network server-1 --network app-network --ip 10.0.1.10 - Static routing **Configuration**: -```text +```bash # Create private network upctl network create --name "app-network" --ip-networks 10.0.0.0/16 @@ -130,7 +130,7 @@ upctl server attach-network --server server-1 - Flow logs and VPC insights **Configuration**: -```text +```bash # Create VPC aws ec2 create-vpc --cidr-block 10.1.0.0/16 @@ -163,7 +163,7 @@ aws ec2 create-security-group --group-name app-sg - Droplet-to-droplet communication **Configuration**: -```text +```bash # Create VPC doctl compute vpc create --name "app-vpc" --region nyc3 --ip-range 10.0.0.0/16 @@ -178,7 +178,7 @@ doctl compute firewall create --name app-fw --vpc-id vpc-id ### Hetzner vSwitch Configuration (Nickel) -```text +```nickel let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in { @@ -216,7 +216,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in ### AWS VPC Configuration (Nickel) -```text +```nickel let aws = import "../../extensions/providers/aws/nickel/main.ncl" in { @@ -274,7 +274,7 @@ let aws = import "../../extensions/providers/aws/nickel/main.ncl" in ### DigitalOcean VPC Configuration (Nickel) -```text +```nickel let
digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in { @@ -325,7 +325,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n #### Step 1: AWS Site-to-Site VPN Setup -```text +```bash # Create Virtual Private Gateway (VGW) aws ec2 create-vpn-gateway --type ipsec.1 @@ -375,7 +375,7 @@ aws ec2 create-route Download VPN configuration from AWS: -```text +```toml # Get VPN configuration aws ec2 describe-vpn-connections --vpn-connection-ids $VPN_CONN_ID @@ -385,7 +385,7 @@ aws ec2 describe-vpn-connections Configure IPSec on DigitalOcean server (acting as VPN gateway): -```text +```toml # Install StrongSwan ssh root@do-server apt-get update @@ -445,7 +445,7 @@ swanctl --stats #### Step 3: Add Route on DigitalOcean -```text +```bash # Add route to AWS VPC through VPN ssh root@do-server @@ -462,7 +462,7 @@ ufw allow from 10.1.0.0/16 to 10.0.0.0/16 #### Create Wireguard Keypairs -```text +```bash # On DO server ssh root@do-server apt-get install -y wireguard wireguard-tools @@ -479,7 +479,7 @@ sudo wg genkey | sudo tee /etc/wireguard/aws_private.key | wg pubkey > /etc/wire #### Configure Wireguard on DigitalOcean -```text +```toml # /etc/wireguard/wg0.conf cat > /etc/wireguard/wg0.conf <<'EOF' [Interface] @@ -505,7 +505,7 @@ systemctl enable wg-quick@wg0 #### Configure Wireguard on AWS -```text +```toml # /etc/wireguard/wg0.conf cat > /etc/wireguard/wg0.conf <<'EOF' [Interface] @@ -529,7 +529,7 @@ sudo systemctl enable wg-quick@wg0 #### Test Connectivity -```text +```bash # From DO server ssh root@do-server ping 10.10.0.2 @@ -546,7 +546,7 @@ curl -I http://10.1.1.10:5432 # Test AWS RDS from DO ### Define Cross-Provider Routes (Nickel) -```text +```json { # Route between DigitalOcean and AWS vpn_routes = { @@ -577,7 +577,7 @@ curl -I http://10.1.1.10:5432 # Test AWS RDS from DO ### Static Routes on Hetzner -```text +```bash # Add route to AWS VPC ip route add 10.1.0.0/16 via 10.0.0.1 @@ -594,7 +594,7 @@ EOF ### AWS 
Route Tables -```text +```bash # Get main route table RT_ID=$(aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-12345 --query 'RouteTables[0].RouteTableId' --output text) @@ -626,7 +626,7 @@ aws ec2 create-route - Curve25519 key exchange - Automatic key rotation -```text +```bash # Verify IPSec configuration swanctl --stats @@ -637,7 +637,7 @@ swanctl --list-connections ### 2. Firewall Rules **DigitalOcean Firewall**: -```text +```bash inbound_rules = [ # Allow VPN traffic from AWS { @@ -655,7 +655,7 @@ inbound_rules = [ ``` **AWS Security Group**: -```text +```bash # Allow traffic from DigitalOcean VPC aws ec2 authorize-security-group-ingress --group-id sg-12345 @@ -672,14 +672,14 @@ aws ec2 authorize-security-group-ingress ``` **Hetzner Firewall**: -```text +```bash hcloud firewall create --name vpn-fw --rules "direction=in protocol=udp destination_port=51820 source_ips=10.0.0.0/16;10.1.0.0/16" ``` ### 3. Network Segmentation -```text +```bash # Each provider has isolated subnets networks = { do_web_tier = "10.0.1.0/24", # Public-facing web @@ -697,7 +697,7 @@ networks = { ### 4. 
DNS Security -```text +```bash # Private DNS for internal services # On each provider's VPC/network, configure: @@ -722,7 +722,7 @@ aws route53 change-resource-record-sets ### Complete Multi-Provider Network Setup (Nushell) -```text +```nushell #!/usr/bin/env nu def setup_multi_provider_network [] { @@ -871,7 +871,7 @@ setup_multi_provider_network ### Issue: No Connectivity Between Providers **Diagnosis**: -```text +```bash # Test VPN tunnel status swanctl --stats @@ -893,7 +893,7 @@ traceroute 10.1.1.10 ### Issue: High Latency Between Providers **Diagnosis**: -```text +```bash # Measure latency ping -c 10 10.1.1.10 | tail -1 @@ -913,7 +913,7 @@ iperf3 -c 10.1.1.10 -t 10 ### Issue: DNS Not Resolving Across Providers **Diagnosis**: -```text +```bash # Test internal DNS nslookup database.internal @@ -932,7 +932,7 @@ ssh do-server "nslookup database.internal" ### Issue: VPN Tunnel Drops **Diagnosis**: -```text +```bash # Check connection logs journalctl -u strongswan-swanctl -f diff --git a/docs/src/guides/provider-digitalocean.md b/docs/src/guides/provider-digitalocean.md index a96adad..3ef6713 100644 --- a/docs/src/guides/provider-digitalocean.md +++ b/docs/src/guides/provider-digitalocean.md @@ -105,7 +105,7 @@ Unlike AWS, DigitalOcean uses hourly billing with transparent monthly rates: ### Step 2: Configure Environment Variables -```text +```toml # Add to ~/.bashrc, ~/.zshrc, or env file export DIGITALOCEAN_TOKEN="dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxx" @@ -115,7 +115,7 @@ export DIGITALOCEAN_REGION="nyc3" ### Step 3: Verify Configuration -```text +```toml # Using provisioning CLI provisioning provider verify digitalocean @@ -128,7 +128,7 @@ doctl compute droplet list Create or update `config.toml` in your workspace: -```text +```toml [providers.digitalocean] enabled = true token_env = "DIGITALOCEAN_TOKEN" @@ -280,7 +280,7 @@ Network firewall rules. 
### Droplet Configuration -```text +```nickel let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in digitalocean.Droplet & { @@ -365,7 +365,7 @@ apt-get install -y nginx" ### Load Balancer Configuration -```text +```nickel digitalocean.LoadBalancer & { name = "web-lb", algorithm = "round_robin", # or "least_connections" @@ -411,7 +411,7 @@ digitalocean.LoadBalancer & { ### Volume Configuration -```text +```nickel digitalocean.Volume & { name = "data-volume", size = 100, # GB @@ -429,7 +429,7 @@ digitalocean.Volume & { ### Managed Database Configuration -```text +```nickel digitalocean.Database & { name = "prod-db", engine = "pg", # or "mysql", "redis" @@ -452,7 +452,7 @@ digitalocean.Database & { ### Example 1: Simple Web Server -```text +```nickel let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in { @@ -488,7 +488,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n ### Example 2: Web Application with Database -```text +```nickel { web_tier = digitalocean.Droplet & { name = "web-server", @@ -543,7 +543,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n ### Example 3: High-Performance Storage -```text +```nickel { app_server = digitalocean.Droplet & { name = "app-with-storage", @@ -599,7 +599,7 @@ let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.n - Block unnecessary outbound traffic **Default Rules** -```text +```nickel # Minimal firewall for web server inbound_rules = [ { protocol = "tcp", ports = "22", sources = { addresses = ["YOUR_OFFICE_IP/32"] } }, @@ -687,7 +687,7 @@ outbound_rules = [ 4.
Check Droplet has public IP assigned **Solution**: -```text +```bash # Add to firewall doctl compute firewall add-rules firewall-id --inbound-rules="protocol:tcp,ports:22,sources:addresses:YOUR_IP" @@ -703,7 +703,7 @@ ssh -v -i ~/.ssh/key.pem root@DROPLET_IP **Symptoms**: Volume created but not accessible, mount fails **Diagnosis**: -```text +```bash # Check volume attachment doctl compute volume list @@ -715,7 +715,7 @@ sudo file -s /dev/sdb ``` **Solution**: -```text +```bash # Format volume (only first time) sudo mkfs.ext4 /dev/sdb @@ -734,7 +734,7 @@ echo '/dev/sdb /data ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab **Symptoms**: Backends marked unhealthy, traffic not flowing **Diagnosis**: -```text +```bash # Test health check endpoint manually curl -i http://BACKEND_IP:8080/health @@ -754,7 +754,7 @@ tail -f /var/log/app.log **Symptoms**: Cannot connect to managed database **Diagnosis**: -```text +```bash # Test connectivity from Droplet psql -h db-host.db.ondigitalocean.com -U admin -d defaultdb diff --git a/docs/src/guides/provider-hetzner.md b/docs/src/guides/provider-hetzner.md index bfca0c3..776d3d0 100644 --- a/docs/src/guides/provider-hetzner.md +++ b/docs/src/guides/provider-hetzner.md @@ -110,7 +110,7 @@ Hetzner is **3.5x cheaper** than DigitalOcean and **23x cheaper** than AWS for t ### Step 2: Configure Environment Variables -```text +```toml # Add to ~/.bashrc, ~/.zshrc, or env file export HCLOUD_TOKEN="MC4wNTI1YmE1M2E4YmE0YTQzMTQ..." 
@@ -120,7 +120,7 @@ export HCLOUD_LOCATION="nbg1" ### Step 3: Install hcloud CLI (Optional) -```text +```bash # macOS brew install hcloud @@ -134,7 +134,7 @@ hcloud version ### Step 4: Configure SSH Key -```text +```bash # Upload your SSH public key hcloud ssh-key create --name "provisioning-key" --public-key-from-file ~/.ssh/id_rsa.pub @@ -147,7 +147,7 @@ hcloud ssh-key list Create or update `config.toml` in your workspace: -```text +```toml [providers.hetzner] enabled = true token_env = "HCLOUD_TOKEN" @@ -261,7 +261,7 @@ Network firewall rules. ### Cloud Server Configuration -```text +```nickel let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in hetzner.Server & { @@ -334,7 +334,7 @@ apt-get install -y nginx" ### Volume Configuration -```text +```nickel hetzner.Volume & { name = "data-volume", size = 100, # GB @@ -352,7 +352,7 @@ hetzner.Volume & { ### Load Balancer Configuration -```text +```nickel hetzner.LoadBalancer & { name = "web-lb", load_balancer_type = "lb11", @@ -384,7 +384,7 @@ hetzner.LoadBalancer & { ### Firewall Configuration -```text +```nickel hetzner.Firewall & { name = "web-firewall", labels = { "env" = "prod" }, @@ -424,7 +424,7 @@ hetzner.Firewall & { ### Example 1: Single Server Web Server -```text +```nickel let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in { @@ -458,7 +458,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in ### Example 2: Web Application with Load Balancer and Storage -```text +```nickel { # Backend servers app_servers = hetzner.Server & { @@ -518,7 +518,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in ### Example 3: High-Performance Compute Cluster -```text +```nickel { # Compute nodes with 10Gbps networking compute_nodes = hetzner.Server & { @@ -590,7 +590,7 @@ let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in ### 2.
Network Architecture **High Availability**: -```text +```bash # Use Floating IPs for failover floating_ip = hetzner.FloatingIP & { name = "web-ip", @@ -605,7 +605,7 @@ attachment = { ``` **Private Networking**: -```text +```bash # Create private network for internal communication private_network = hetzner.Network & { name = "private", @@ -630,7 +630,7 @@ private_network = hetzner.Network & { ### 4. Firewall Configuration **Principle of Least Privilege**: -```text +```toml # Only open necessary ports firewall_rules = [ # SSH from management IP only @@ -648,7 +648,7 @@ firewall_rules = [ ### 5. Monitoring and Health Checks **Enable Monitoring**: -```text +```bash hcloud server update --enable-rescue ``` @@ -679,7 +679,7 @@ hcloud server update --enable-rescue **Symptoms**: SSH timeout or connection refused **Diagnosis**: -```text +```bash # Check server status hcloud server list @@ -691,7 +691,7 @@ hcloud server describe server-name ``` **Solution**: -```text +```bash # Update firewall to allow SSH from your IP hcloud firewall add-rules firewall-id --rules "direction=in protocol=tcp source_ips=YOUR_IP/32 destination_port=22" @@ -705,7 +705,7 @@ hcloud server request-console server-id **Symptoms**: Volume created but cannot attach, mount fails **Diagnosis**: -```text +```bash # Check volume status hcloud volume list @@ -714,7 +714,7 @@ hcloud server describe server-name ``` **Solution**: -```text +```bash # Format volume (first time only) sudo mkfs.ext4 /dev/sdb @@ -732,7 +732,7 @@ sudo mount -a **Symptoms**: Unexpected egress charges **Diagnosis**: -```text +```bash # Check server network traffic sar -n DEV 1 100 @@ -751,7 +751,7 @@ netstat -an | grep ESTABLISHED | wc -l **Symptoms**: LB created but backends not receiving traffic **Diagnosis**: -```text +```bash # Check LB status hcloud load-balancer describe lb-name diff --git a/docs/src/guides/update-infrastructure.md b/docs/src/guides/update-infrastructure.md index 5f95448..81087b5 100644 --- 
a/docs/src/guides/update-infrastructure.md +++ b/docs/src/guides/update-infrastructure.md @@ -21,7 +21,7 @@ This guide covers: **Best for**: Non-critical environments, development, staging -```text +```bash # Direct update without downtime consideration provisioning t create --infra ``` @@ -30,7 +30,7 @@ provisioning t create --infra **Best for**: Production environments, high availability -```text +```bash # Update servers one by one provisioning s update --infra --rolling ``` @@ -39,7 +39,7 @@ provisioning s update --infra --rolling **Best for**: Critical production, zero-downtime requirements -```text +```bash # Create new infrastructure, switch traffic, remove old provisioning ws init -green # ... configure and deploy @@ -51,14 +51,14 @@ provisioning ws delete -blue ### 1.1 Check All Task Services -```text +```bash # Check all taskservs for updates provisioning t check-updates ``` **Expected Output:** -```text +```bash 📦 Task Service Update Check: NAME CURRENT LATEST STATUS @@ -73,14 +73,14 @@ Updates available: 3 ### 1.2 Check Specific Task Service -```text +```bash # Check specific taskserv provisioning t check-updates kubernetes ``` **Expected Output:** -```text +```bash 📦 Kubernetes Update Check: Current: 1.29.0 @@ -101,14 +101,14 @@ Recommended: ✅ Safe to update ### 1.3 Check Version Status -```text +```bash # Show detailed version information provisioning version show ``` **Expected Output:** -```text +```bash 📋 Component Versions: COMPONENT CURRENT LATEST DAYS OLD STATUS @@ -121,7 +121,7 @@ redis 7.2.3 7.2.3 0 ✅ current ### 1.4 Check for Security Updates -```text +```bash # Check for security-related updates provisioning version updates --security-only ``` @@ -130,14 +130,14 @@ provisioning version updates --security-only ### 2.1 Review Current Configuration -```text +```toml # Show current infrastructure provisioning show settings --infra my-production ``` ### 2.2 Backup Configuration -```text +```toml # Create configuration backup cp -r 
workspace/infra/my-production workspace/infra/my-production.backup-$(date +%Y%m%d) @@ -147,20 +147,20 @@ provisioning ws backup my-production **Expected Output:** -```text +```bash ✅ Backup created: workspace/backups/my-production-20250930.tar.gz ``` ### 2.3 Create Update Plan -```text +```bash # Generate update plan provisioning plan update --infra my-production ``` **Expected Output:** -```text +```bash 📝 Update Plan for my-production: Phase 1: Minor Updates (Low Risk) @@ -189,14 +189,14 @@ Recommended: Test in staging environment first #### Dry-Run Update -```text +```bash # Test update without applying provisioning t create cilium --infra my-production --check ``` **Expected Output:** -```text +```bash 🔍 CHECK MODE: Simulating Cilium update Current: 1.14.5 @@ -214,28 +214,28 @@ No errors detected. Ready to update. #### Generate Updated Configuration -```text +```toml # Generate new configuration provisioning t generate cilium --infra my-production ``` **Expected Output:** -```text +```bash ✅ Generated Cilium configuration (version 1.15.0) Saved to: workspace/infra/my-production/taskservs/cilium.ncl ``` #### Apply Update -```text +```bash # Apply update provisioning t create cilium --infra my-production ``` **Expected Output:** -```text +```bash 🚀 Updating Cilium on my-production... Downloading Cilium 1.15.0... ⏳ @@ -260,14 +260,14 @@ Verifying connectivity... 
⏳ #### Verify Update -```text +```bash # Verify updated version provisioning version taskserv cilium ``` **Expected Output:** -```text +```bash 📦 Cilium Version Info: Installed: 1.15.0 @@ -283,7 +283,7 @@ Nodes: #### Test in Staging First -```text +```bash # If you have staging environment provisioning t create kubernetes --infra my-staging --check provisioning t create kubernetes --infra my-staging @@ -294,7 +294,7 @@ provisioning test kubernetes --infra my-staging #### Backup Current State -```text +```bash # Backup Kubernetes state kubectl get all -A -o yaml > k8s-backup-$(date +%Y%m%d).yaml @@ -304,21 +304,21 @@ provisioning t backup kubernetes --infra my-production #### Schedule Maintenance Window -```text +```bash # Set maintenance mode (optional, if supported) provisioning maintenance enable --infra my-production --duration 30m ``` #### Update Kubernetes -```text +```bash # Update control plane first provisioning t create kubernetes --infra my-production --control-plane-only ``` **Expected Output:** -```text +```bash 🚀 Updating Kubernetes control plane on my-production... Draining control plane: web-01... ⏳ @@ -336,14 +336,14 @@ Verifying control plane... ⏳ 🎉 Control plane update complete! ``` -```text +```bash # Update worker nodes one by one provisioning t create kubernetes --infra my-production --workers-only --rolling ``` **Expected Output:** -```text +```bash 🚀 Updating Kubernetes workers on my-production... Rolling update: web-02...
#### Verify Update -```text +```bash # Verify Kubernetes cluster kubectl get nodes provisioning version taskserv kubernetes @@ -374,13 +374,13 @@ provisioning version taskserv kubernetes **Expected Output:** -```text +```bash NAME STATUS ROLES AGE VERSION web-01 Ready control-plane 30d v1.30.0 web-02 Ready 30d v1.30.0 ``` -```text +```bash # Run smoke tests provisioning test kubernetes --infra my-production ``` @@ -391,14 +391,14 @@ provisioning test kubernetes --infra my-production #### Backup Database -```text +```bash # Backup PostgreSQL database provisioning t backup postgres --infra my-production ``` **Expected Output:** -```text +```bash 🗄️ Backing up PostgreSQL... Creating dump: my-production-postgres-20250930.sql... ⏳ @@ -412,14 +412,14 @@ Saved to: workspace/backups/postgres/my-production-20250930.sql.gz #### Check Compatibility -```text +```bash # Check if data migration is needed provisioning t check-migration postgres --from 15.5 --to 16.1 ``` **Expected Output:** -```text +```bash 🔍 PostgreSQL Migration Check: From: 15.5 @@ -442,14 +442,14 @@ Recommended: Use streaming replication for zero-downtime upgrade #### Perform Update -```text +```bash # Update PostgreSQL (with automatic migration) provisioning t create postgres --infra my-production --migrate ``` **Expected Output:** -```text +```bash 🚀 Updating PostgreSQL on my-production... ⚠️ Major version upgrade detected (15.5 → 16.1) @@ -483,7 +483,7 @@ Verifying data integrity... ⏳ #### Verify Update -```text +```bash # Verify PostgreSQL provisioning version taskserv postgres ssh db-01 "psql --version" @@ -493,14 +493,14 @@ ssh db-01 "psql --version" ### 4.1 Batch Update (Sequentially) -```text +```bash # Update multiple taskservs one by one provisioning t update --infra my-production --taskservs cilium,containerd,redis ``` **Expected Output:** -```text +```bash 🚀 Updating 3 taskservs on my-production... [1/3] Updating cilium... 
⏳ @@ -519,14 +519,14 @@ provisioning t update --infra my-production --taskservs cilium,containerd,redis ### 4.2 Parallel Update (Non-Dependent Services) -```text +```bash # Update taskservs in parallel (if they don't depend on each other) provisioning t update --infra my-production --taskservs redis,postgres --parallel ``` **Expected Output:** -```text +```bash 🚀 Updating 2 taskservs in parallel on my-production... redis: Updating... ⏳ @@ -544,14 +544,14 @@ postgres: ✅ Updated (16.1) ### 5.1 Update Server Resources -```text +```bash # Edit server configuration provisioning sops workspace/infra/my-production/servers.ncl ``` **Example: Upgrade server plan** -```text +```bash # Before { name = "web-01" @@ -565,7 +565,7 @@ provisioning sops workspace/infra/my-production/servers.ncl } ``` -```text +```bash # Apply server update provisioning s update --infra my-production --check provisioning s update --infra my-production @@ -573,14 +573,14 @@ provisioning s update --infra my-production ### 5.2 Update Server OS -```text +```bash # Update operating system packages provisioning s update --infra my-production --os-update ``` **Expected Output:** -```text +```bash 🚀 Updating OS packages on my-production servers... web-01: Updating packages... ⏳ @@ -601,14 +601,14 @@ db-01: Updating packages... ⏳ If update fails or causes issues: -```text +```bash # Rollback to previous version provisioning t rollback cilium --infra my-production ``` **Expected Output:** -```text +```bash 🔄 Rolling back Cilium on my-production... Current: 1.15.0 @@ -629,14 +629,14 @@ Verifying connectivity... 
⏳ ### 6.2 Rollback from Backup -```text +```bash # Restore configuration from backup provisioning ws restore my-production --from workspace/backups/my-production-20250930.tar.gz ``` ### 6.3 Emergency Rollback -```text +```bash # Complete infrastructure rollback provisioning rollback --infra my-production --to-snapshot ``` @@ -645,14 +645,14 @@ provisioning rollback --infra my-production --to-snapshot ### 7.1 Verify All Components -```text +```bash # Check overall health provisioning health --infra my-production ``` **Expected Output:** -```text +```bash 🏥 Health Check: my-production Servers: @@ -674,21 +674,21 @@ Overall Status: ✅ All systems healthy ### 7.2 Verify Version Updates -```text +```bash # Verify all versions are updated provisioning version show ``` ### 7.3 Run Integration Tests -```text +```bash # Run comprehensive tests provisioning test all --infra my-production ``` **Expected Output:** -```text +```bash 🧪 Running Integration Tests... [1/5] Server connectivity... ⏳ @@ -711,7 +711,7 @@ provisioning test all --infra my-production ### 7.4 Monitor for Issues -```text +```bash # Monitor logs for errors provisioning logs --infra my-production --follow --level error ``` @@ -740,7 +740,7 @@ Use this checklist for production updates: ### Scenario 1: Minor Security Patch -```text +```bash # Quick security update provisioning t check-updates --security-only provisioning t update --infra my-production --security-patches --yes @@ -748,7 +748,7 @@ provisioning t update --infra my-production --security-patches --yes ### Scenario 2: Major Version Upgrade -```text +```bash # Careful major version update provisioning ws backup my-production provisioning t check-migration --from X.Y --to X+1.Y @@ -758,7 +758,7 @@ provisioning test all --infra my-production ### Scenario 3: Emergency Hotfix -```text +```bash # Apply critical hotfix immediately provisioning t create --infra my-production --hotfix --yes ``` @@ -769,7 +769,7 @@ provisioning t create --infra my-production 
--hotfix --yes **Solution:** -```text +```bash # Check update status provisioning t status --infra my-production @@ -784,7 +784,7 @@ provisioning t rollback --infra my-production **Solution:** -```text +```bash # Check logs provisioning logs --infra my-production @@ -799,7 +799,7 @@ provisioning t rollback --infra my-production **Solution:** -```text +```bash # Check migration logs provisioning t migration-logs --infra my-production @@ -826,7 +826,7 @@ provisioning t restore --infra my-production --from ## Quick Reference -```text +```bash # Update workflow provisioning t check-updates provisioning ws backup my-production diff --git a/docs/src/guides/workspace-generation-quick-reference.md b/docs/src/guides/workspace-generation-quick-reference.md index cab9afe..3160a5d 100644 --- a/docs/src/guides/workspace-generation-quick-reference.md +++ b/docs/src/guides/workspace-generation-quick-reference.md @@ -4,7 +4,7 @@ ## Quick Start: Create a Workspace -```text +```bash # Interactive mode (recommended) provisioning workspace init @@ -19,7 +19,7 @@ provisioning workspace init my_workspace /path/to/my_workspace --activate When you run `provisioning workspace init`, the system creates: -```text +```bash my_workspace/ ├── config/ │ ├── config.ncl # Master Nickel configuration @@ -47,7 +47,7 @@ my_workspace/ ### Master Configuration: `config/config.ncl` -```text +```nickel { workspace = { name = "my_workspace", @@ -78,7 +78,7 @@ my_workspace/ ### Infrastructure: `infra/default/main.ncl` -```text +```nickel { workspace_name = "my_workspace", infrastructure = "default", @@ -113,7 +113,7 @@ These guides are customized for your workspace's: ## Initialization Process (8 Steps) -```text +```bash STEP 1: Create directory structure └─ workspace/, config/, infra/default/, etc.
@@ -146,7 +146,7 @@ STEP 8: Display summary ### Workspace Management -```text +```bash # Create interactive workspace provisioning workspace init @@ -165,7 +165,7 @@ provisioning workspace active ### Configuration -```text +```bash # Validate Nickel configuration nickel typecheck config/config.ncl nickel typecheck infra/default/main.ncl @@ -176,7 +176,7 @@ provisioning validate config ### Deployment -```text +```bash # Dry-run (check mode) provisioning -c server create @@ -191,7 +191,7 @@ provisioning server list ### Auto-Generated Structure -```text +```bash my_workspace/ ├── config/ │ ├── config.ncl # Master configuration @@ -219,7 +219,7 @@ my_workspace/ ### Edit Configuration -```text +```bash # Master workspace configuration vim config/config.ncl @@ -232,7 +232,7 @@ vim infra/default/servers.ncl ### Add Multiple Infrastructures -```text +```bash # Create new infrastructure environment mkdir -p infra/production infra/staging @@ -248,7 +248,7 @@ vim infra/production/servers.ncl Update `config/config.ncl` to enable cloud providers: -```text +```nickel providers = { upcloud = { name = "upcloud", diff --git a/docs/src/infrastructure/batch-workflow-multi-provider.md b/docs/src/infrastructure/batch-workflow-multi-provider.md index f2e99bc..3b9b557 100644 --- a/docs/src/infrastructure/batch-workflow-multi-provider.md +++ b/docs/src/infrastructure/batch-workflow-multi-provider.md @@ -35,7 +35,7 @@ The batch workflow system enables declarative orchestration of operations across ### Workflow Definition -```text +```yaml # file: workflows/multi-provider-deployment.yml name: multi-provider-app-deployment @@ -252,7 +252,7 @@ pre_flight: ### Execution Flow -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ Start Deployment │ └──────────────────┬──────────────────────────────────────┘ @@ -313,7 +313,7 @@ pre_flight: ### Workflow Definition -```text +```yaml # file: workflows/multi-provider-dr-failover.yml name: multi-provider-dr-failover @@ -505,7 
+505,7 @@ notifications: ### Failover Timeline -```text +```bash Time Event ──────────────────────────────────────────────────── 00:00 Health check detects failure (3 consecutive failures) @@ -534,7 +534,7 @@ Data loss (RPO): < 5 minutes (database replication lag) ### Workflow Definition -```text +```yaml # file: workflows/cost-optimization-migration.yml name: cost-optimization-migration @@ -668,7 +668,7 @@ cost_tracking: ### Workflow Definition -```text +```yaml # file: workflows/multi-region-replication.yml name: multi-region-replication @@ -760,7 +760,7 @@ phases: ### Issue: Workflow Stuck in Phase **Diagnosis**: -```text +```bash provisioning workflow status workflow-id --verbose ``` @@ -773,7 +773,7 @@ provisioning workflow status workflow-id --verbose ### Issue: Rollback Failed **Diagnosis**: -```text +```bash provisioning workflow rollback workflow-id --dry-run ``` @@ -786,7 +786,7 @@ provisioning workflow rollback workflow-id --dry-run ### Issue: Data Inconsistency After Failover **Diagnosis**: -```text +```bash provisioning database verify-consistency ``` diff --git a/docs/src/infrastructure/batch-workflow-system.md b/docs/src/infrastructure/batch-workflow-system.md index 7055f2c..681fcd9 100644 --- a/docs/src/infrastructure/batch-workflow-system.md +++ b/docs/src/infrastructure/batch-workflow-system.md @@ -16,7 +16,7 @@ approaches.
The system enables provider-agnostic batch operations with mixed pro ## Batch Workflow Commands -```text +```bash # Submit batch workflow from Nickel definition nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.ncl" @@ -40,7 +40,7 @@ nu -c "use core/nulib/workflows/batch.nu *; batch stats" Batch workflows are defined using Nickel configuration in `schemas/workflows.ncl`: -```text +```nickel # Example batch workflow with mixed providers { batch_workflow = { diff --git a/docs/src/infrastructure/cli-architecture.md b/docs/src/infrastructure/cli-architecture.md index cac05c8..e9b9796 100644 --- a/docs/src/infrastructure/cli-architecture.md +++ b/docs/src/infrastructure/cli-architecture.md @@ -89,7 +89,7 @@ A comprehensive CLI refactoring transforming the monolithic 1,329-line script in The help system works in both directions: -```text +```bash # All these work identically: provisioning help workspace provisioning workspace help @@ -109,7 +109,7 @@ provisioning help concept = provisioning concept help **File Structure:** -```text +```bash provisioning/core/nulib/ ├── provisioning (211 lines) - Main entry point ├── main_provisioning/ diff --git a/docs/src/infrastructure/cli-reference.md b/docs/src/infrastructure/cli-reference.md index 358c3f0..7ebbec9 100644 --- a/docs/src/infrastructure/cli-reference.md +++ b/docs/src/infrastructure/cli-reference.md @@ -15,7 +15,7 @@ Complete command-line reference for Infrastructure Automation. This guide covers All provisioning commands follow this structure: -```text +```bash provisioning [global-options] [subcommand] [command-options] [arguments] ``` @@ -50,7 +50,7 @@ These options can be used with any command: Display help information for the system or specific commands. -```text +```bash # General help provisioning help @@ -75,7 +75,7 @@ provisioning server help create Display version information for the system and dependencies. 
-```text +```bash # Basic version provisioning version provisioning --version @@ -98,7 +98,7 @@ provisioning -I Display current environment configuration and settings. -```text +```bash # Show environment variables provisioning env @@ -125,7 +125,7 @@ provisioning env --export Create new server instances based on configuration. -```text +```bash # Create all servers in infrastructure provisioning server create --infra my-infra @@ -157,7 +157,7 @@ provisioning server create --infra my-infra --settings custom.ncl Remove server instances and associated resources. -```text +```bash # Delete all servers provisioning server delete --infra my-infra @@ -184,7 +184,7 @@ provisioning server delete --infra my-infra --check Display information about servers. -```text +```bash # List all servers provisioning server list --infra my-infra @@ -211,7 +211,7 @@ provisioning server list --infra my-infra --status running Connect to servers via SSH. -```text +```bash # SSH to server provisioning server ssh web-01 --infra my-infra @@ -236,7 +236,7 @@ provisioning server ssh web-01 --command "systemctl status nginx" --infra my-inf Display pricing information for servers. -```text +```bash # Show costs for all servers provisioning server price --infra my-infra @@ -262,7 +262,7 @@ provisioning server price --infra my-infra --compare Install and configure task services on servers. -```text +```bash # Install service on all eligible servers provisioning taskserv create kubernetes --infra my-infra @@ -290,7 +290,7 @@ provisioning taskserv create kubernetes --config k8s-config.yaml --infra my-infr Remove task services from servers. -```text +```bash # Remove service provisioning taskserv delete kubernetes --infra my-infra @@ -314,7 +314,7 @@ provisioning taskserv delete kubernetes --infra my-infra --check Display available and installed task services.
-```text +```bash # List all available services provisioning taskserv list @@ -342,7 +342,7 @@ provisioning taskserv list --search kubernetes Generate configuration files for task services. -```text +```bash # Generate configuration provisioning taskserv generate kubernetes --infra my-infra @@ -366,7 +366,7 @@ provisioning taskserv generate postgresql --output db-config.yaml --infra my-inf Check for and manage service version updates. -```text +```bash # Check updates for all services provisioning taskserv check-updates --infra my-infra @@ -395,7 +395,7 @@ provisioning taskserv update kubernetes --version 1.29 --infra my-infra Deploy and configure application clusters. -```text +```bash # Create cluster provisioning cluster create web-cluster --infra my-infra @@ -419,7 +419,7 @@ provisioning cluster create web-cluster --replicas 5 --infra my-infra Remove application clusters and associated resources. -```text +```bash # Delete cluster provisioning cluster delete web-cluster --infra my-infra @@ -440,7 +440,7 @@ provisioning cluster delete web-cluster --force --infra my-infra Display information about deployed clusters. -```text +```bash # List all clusters provisioning cluster list --infra my-infra @@ -464,7 +464,7 @@ provisioning cluster list --namespace production --infra my-infra Adjust cluster size and resources. -```text +```bash # Scale cluster provisioning cluster scale web-cluster --replicas 10 --infra my-infra @@ -488,7 +488,7 @@ provisioning cluster scale web-cluster --component api --replicas 5 --infra my-i Generate infrastructure and configuration files. -```text +```bash # Generate new infrastructure provisioning generate infra --new my-infrastructure @@ -523,7 +523,7 @@ provisioning generate cluster --infra my-infra Show detailed information about infrastructure components.
-```text +```bash # Show settings provisioning show settings --infra my-infra @@ -555,7 +555,7 @@ provisioning show servers --infra my-infra --out json List resource types (servers, networks, volumes, etc.). -```text +```bash # List providers provisioning list providers @@ -584,7 +584,7 @@ provisioning list servers --select Validate configuration files and infrastructure definitions. -```text +```bash # Validate configuration provisioning validate config --infra my-infra @@ -619,7 +619,7 @@ provisioning validate interpolation --infra my-infra Initialize user and project configurations. -```text +```bash # Initialize user configuration provisioning init config @@ -647,7 +647,7 @@ provisioning init config --force Manage configuration templates. -```text +```bash # List available templates provisioning template list @@ -674,7 +674,7 @@ provisioning template create my-template --from dev Start interactive Nushell session with provisioning library loaded. -```text +```nushell # Start interactive shell provisioning nu @@ -695,7 +695,7 @@ provisioning nu --script my-script.nu Edit encrypted configuration files using SOPS. -```text +```bash # Edit encrypted file provisioning sops settings.ncl --infra my-infra @@ -719,7 +719,7 @@ provisioning sops --rotate-keys secrets.ncl --infra my-infra Manage infrastructure contexts and environments. -```text +```bash # Show current context provisioning context @@ -749,7 +749,7 @@ provisioning context delete old-context Manage complex workflows and batch operations. -```text +```bash # Submit batch workflow provisioning workflows batch submit my-workflow.ncl @@ -776,7 +776,7 @@ provisioning workflows batch rollback workflow-123 Control the hybrid orchestrator system.
-```text +```bash # Start orchestrator provisioning orchestrator start @@ -810,7 +810,7 @@ Provisioning uses standard exit codes: Control behavior through environment variables: -```text +```bash # Enable debug mode export PROVISIONING_DEBUG=true @@ -826,7 +826,7 @@ export PROVISIONING_NONINTERACTIVE=true ### Batch Operations -```text +```bash #!/bin/bash # Example batch script @@ -855,7 +855,7 @@ echo "Deployment completed successfully" ### JSON Output Processing -```text +```bash # Get server list as JSON servers=$(provisioning server list --infra my-infra --out json) @@ -873,7 +873,7 @@ done ### Sequential Operations -```text +```bash # Chain commands with && (stop on failure) provisioning validate config --infra my-infra && provisioning server create --infra my-infra --check && @@ -886,7 +886,7 @@ echo "Kubernetes installation failed, continuing with other services" ### Complex Workflows -```text +```bash # Full deployment workflow deploy_infrastructure() { local infra_name=$1 @@ -918,7 +918,7 @@ deploy_infrastructure "production" ### CI/CD Integration -```text +```yaml # GitLab CI example deploy: script: @@ -932,7 +932,7 @@ deploy: ### Monitoring Integration -```text +```bash # Health check script #!/bin/bash @@ -951,7 +951,7 @@ fi ### Backup Automation -```text +```bash # Backup script #!/bin/bash diff --git a/docs/src/infrastructure/config-rendering-guide.md b/docs/src/infrastructure/config-rendering-guide.md index 69c57f7..a0665bc 100644 --- a/docs/src/infrastructure/config-rendering-guide.md +++ b/docs/src/infrastructure/config-rendering-guide.md @@ -17,7 +17,7 @@ All renderers are accessible through a single unified API endpoint with intellig The daemon runs on port 9091 by default: -```text +```bash # Start in background ./target/release/cli-daemon & @@ -27,7 +27,7 @@ curl http://localhost:9091/health ### Simple Nickel Rendering -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -39,7 
@@ curl -X POST http://localhost:9091/config/render **Response**: -```text +```json { "rendered": "{ name = \"my-server\", cpu = 4, memory = 8192 }", "error": null, @@ -56,13 +56,13 @@ Render a configuration in any supported language. **Request Headers**: -```text +```bash Content-Type: application/json ``` **Request Body**: -```text +```json { "language": "nickel|tera", "content": "...configuration content...", @@ -85,7 +85,7 @@ Content-Type: application/json **Response** (Success): -```text +```json { "rendered": "...rendered output...", "error": null, @@ -96,7 +96,7 @@ Content-Type: application/json **Response** (Error): -```text +```json { "rendered": null, "error": "Nickel evaluation failed: undefined variable 'name'", @@ -117,7 +117,7 @@ Get rendering statistics across all languages. **Response**: -```text +```json { "total_renders": 156, "successful_renders": 154, @@ -136,7 +136,7 @@ Reset all rendering statistics. **Response**: -```text +```json { "status": "success", "message": "Configuration rendering statistics reset" @@ -147,7 +147,7 @@ Reset all rendering statistics. 
### Basic Nickel Configuration -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -171,7 +171,7 @@ curl -X POST http://localhost:9091/config/render Nickel excels at evaluating only what's needed: -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -208,7 +208,7 @@ curl -X POST http://localhost:9091/config/render ### Basic Tera Template -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -249,7 +249,7 @@ Monitoring: DISABLED Tera supports Jinja2-compatible filters and functions: -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -324,7 +324,7 @@ Comparison of rendering times (on commodity hardware): **Error Response**: -```text +```json { "rendered": null, "error": "Nickel binary not found in PATH. Install Nickel or set NICKEL_PATH environment variable", @@ -335,7 +335,7 @@ Comparison of rendering times (on commodity hardware): **Solution**: -```text +```bash # Install Nickel nickel version @@ -347,7 +347,7 @@ export NICKEL_PATH=/usr/local/bin/nickel **Error Response**: -```text +```json { "rendered": null, "error": "Nickel evaluation failed: Type mismatch at line 3: expected String, got Number", @@ -362,7 +362,7 @@ export NICKEL_PATH=/usr/local/bin/nickel **Error Response**: -```text +```json { "rendered": null, "error": "Nickel evaluation failed: undefined variable 'required_var'", @@ -384,7 +384,7 @@ export NICKEL_PATH=/usr/local/bin/nickel ### Using with Nushell -```text +```nushell # Render a Nickel config from Nushell let config = open workspace/config/provisioning.ncl | into string let response = curl -X POST http://localhost:9091/config/render @@ -396,7 +396,7 @@ print $response.rendered ### Using with Python -```text +```python import requests import json @@ -432,7 +432,7 @@ else: ### Using with Curl -```text +```bash
#!/bin/bash # Function to render config @@ -462,13 +462,13 @@ render_config "nickel" "{name = \"my-server\"}" "server-config" **Check log level**: -```text +```bash PROVISIONING_LOG_LEVEL=debug ./target/release/cli-daemon ``` **Verify Nushell binary**: -```text +```bash which nu # or set explicit path NUSHELL_PATH=/usr/local/bin/nu ./target/release/cli-daemon @@ -478,7 +478,7 @@ NUSHELL_PATH=/usr/local/bin/nu ./target/release/cli-daemon **Check cache hit rate**: -```text +```bash curl http://localhost:9091/config/stats | jq '.nickel_cache_hits / .nickel_renders' ``` @@ -486,7 +486,7 @@ curl http://localhost:9091/config/stats | jq '.nickel_cache_hits / .nickel_rende **Monitor execution time**: -```text +```bash curl http://localhost:9091/config/render ... | jq '.execution_time_ms' ``` @@ -494,7 +494,7 @@ curl http://localhost:9091/config/render ... | jq '.execution_time_ms' **Set timeout** (depends on client): -```text +```bash curl --max-time 10 -X POST http://localhost:9091/config/render ... ``` @@ -541,13 +541,13 @@ curl --max-time 10 -X POST http://localhost:9091/config/render ...
### API Endpoint -```text +```bash POST http://localhost:9091/config/render ``` ### Request Template -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -562,7 +562,7 @@ curl -X POST http://localhost:9091/config/render #### Nickel - Simple Config -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -573,7 +573,7 @@ curl -X POST http://localhost:9091/config/render #### Tera - Template with Loops -```text +```bash curl -X POST http://localhost:9091/config/render -H "Content-Type: application/json" -d '{ @@ -586,7 +586,7 @@ curl -X POST http://localhost:9091/config/render ### Statistics -```text +```bash # Get stats curl http://localhost:9091/config/stats @@ -614,7 +614,7 @@ watch -n 1 'curl -s http://localhost:9091/config/stats | jq' ### Response Fields -```text +```json { "rendered": "...output or null on error", "error": "...error message or null on success", @@ -627,7 +627,7 @@ watch -n 1 'curl -s http://localhost:9091/config/stats | jq' #### Nickel -```text +```nickel { name = "server", type = "web", @@ -645,7 +645,8 @@ watch -n 1 'curl -s http://localhost:9091/config/stats | jq' #### Tera -```jinja2 +```jinja2 + Server: {{ name }} Type: {{ type | upper }} {% for tag_name, tag_value in tags %} @@ -666,7 +667,7 @@ Type: {{ type | upper }} **Cache stats**: -```text +```bash curl -s http://localhost:9091/config/stats | jq '{ nickel_cache_hits: .nickel_cache_hits, nickel_renders: .nickel_renders, @@ -678,7 +679,7 @@ curl -s http://localhost:9091/config/stats | jq '{ #### Batch Rendering -```text +```bash #!/bin/bash for config in configs/*.ncl; do curl -X POST http://localhost:9091/config/render @@ -690,7 +691,7 @@ done #### Validate Before Rendering -```text +```bash # Nickel validation nickel typecheck my-config.ncl @@ -700,7 +701,7 @@ curl ...
# catches errors in response #### Monitor Cache Performance -```text +```bash #!/bin/bash while true; do STATS=$(curl -s http://localhost:9091/config/stats) @@ -714,7 +715,7 @@ done #### Missing Binary -```text +```json { "error": "Nickel binary not found. Install Nickel or set NICKEL_PATH", "rendered": null @@ -725,7 +726,7 @@ done #### Syntax Error -```text +```json { "error": "Nickel type checking failed: Type mismatch at line 3", "rendered": null @@ -738,7 +739,7 @@ done #### Nushell -```text +```nushell use lib_provisioning let config = open server.ncl | into string @@ -755,7 +756,7 @@ if ($result.error != null) { #### Python -```text +```python import requests resp = requests.post("http://localhost:9091/config/render", json={ @@ -769,7 +770,7 @@ print(result["rendered"] if not result["error"] else f"Error: {result['error']}" #### Bash -```text +```bash render() { curl -s -X POST http://localhost:9091/config/render -H "Content-Type: application/json" @@ -782,7 +783,7 @@ render '{"language":"nickel","content":"{name = \"server\"}"}' ### Environment Variables -```text +```bash # Daemon configuration PROVISIONING_LOG_LEVEL=debug # Log level DAEMON_BIND=127.0.0.1:9091 # Bind address @@ -792,7 +793,7 @@ NICKEL_PATH=/usr/local/bin/nickel # Nickel binary ### Useful Commands -```text +```bash # Health check curl http://localhost:9091/health diff --git a/docs/src/infrastructure/configuration.md b/docs/src/infrastructure/configuration.md index 57623a1..ac223bf 100644 --- a/docs/src/infrastructure/configuration.md +++ b/docs/src/infrastructure/configuration.md @@ -19,7 +19,7 @@ all configuration aspects.
The system uses a layered configuration approach with clear precedence rules: -```text +```bash Runtime CLI arguments (highest precedence) ↓ (overrides) Environment Variables @@ -48,7 +48,7 @@ System Defaults (config.defaults.toml) (lowest precedence) ### Core System Configuration -```text +```toml [core] version = "1.0.0" # System version name = "provisioning" # System identifier @@ -58,7 +58,7 @@ name = "provisioning" # System identifier The most critical configuration section that defines where everything is located: -```text +```toml [paths] # Base directory - all other paths derive from this base = "/usr/local/provisioning" @@ -82,7 +82,7 @@ requirements = "{{paths.base}}/requirements.yaml" ### Debug and Logging -```text +```toml [debug] enabled = false # Enable debug mode metadata = false # Show internal metadata @@ -94,7 +94,7 @@ no_terminal = false # Disable terminal features ### Output Configuration -```text +```toml [output] file_viewer = "less" # File viewer command format = "yaml" # Default output format (json, yaml, toml, text) @@ -102,7 +102,7 @@ format = "yaml" # Default output format (json, yaml, toml, text) ### Provider Configuration -```text +```toml [providers] default = "local" # Default provider @@ -124,7 +124,7 @@ interface = "CLI" ### Encryption (SOPS) Configuration -```text +```toml [sops] use_sops = true # Enable SOPS encryption config_path = "{{paths.base}}/.sops.yaml" @@ -144,7 +144,7 @@ The system supports powerful interpolation patterns for dynamic configuration va #### Path Interpolation -```text +```toml # Reference other path values templates = "{{paths.base}}/my-templates" custom_path = "{{paths.providers}}/custom" @@ -152,7 +152,7 @@ custom_path = "{{paths.providers}}/custom" #### Environment Variable Interpolation -```text +```toml # Access environment variables user_home = "{{env.HOME}}" current_user = "{{env.USER}}" custom_path = "{{env.CUSTOM_PATH || /default/path}}" # With fallback #### Date/Time
Interpolation -```text +```toml # Dynamic date/time values log_file = "{{paths.base}}/logs/app-{{now.date}}.log" backup_dir = "{{paths.base}}/backups/{{now.timestamp}}" @@ -169,7 +169,7 @@ backup_dir = "{{paths.base}}/backups/{{now.timestamp}}" #### Git Information Interpolation -```text +```toml # Git repository information deployment_branch = "{{git.branch}}" version_tag = "{{git.tag}}" commit_hash = "{{git.commit}}" @@ -178,7 +178,7 @@ commit_hash = "{{git.commit}}" #### Cross-Section References -```text +```toml # Reference values from other sections database_host = "{{providers.aws.database_endpoint}}" api_key = "{{sops.decrypted_key}}" @@ -188,7 +188,7 @@ api_key = "{{sops.decrypted_key}}" #### Function Calls -```text +```toml # Built-in functions config_path = "{{path.join(env.HOME, .config, provisioning)}}" safe_name = "{{str.lower(str.replace(project.name, ' ', '-'))}}" @@ -196,7 +196,7 @@ safe_name = "{{str.lower(str.replace(project.name, ' ', '-'))}}" #### Conditional Expressions -```text +```toml # Conditional logic debug_level = "{{debug.enabled && 'debug' || 'info'}}" storage_path = "{{env.STORAGE_PATH || path.join(paths.base, 'storage')}}" @@ -204,7 +204,7 @@ storage_path = "{{env.STORAGE_PATH || path.join(paths.base, 'storage')}}" ### Interpolation Examples -```text +```toml [paths] base = "/opt/provisioning" workspace = "{{env.HOME}}/provisioning-workspace" @@ -240,7 +240,7 @@ Create environment-specific configurations: #### Development Environment (`config.dev.toml`) -```text +```toml [core] name = "provisioning-dev" @@ -261,7 +261,7 @@ enabled = false # No notifications in dev #### Testing Environment (`config.test.toml`) -```text +```toml [core] name = "provisioning-test" @@ -280,7 +280,7 @@ resource_prefix = "test-{{git.branch}}-" #### Production Environment (`config.prod.toml`) -```text +```toml [core] name = "provisioning-prod" @@ -303,7 +303,7 @@ critical_only = true ### Environment Switching -```text +```bash # Set environment for session export PROVISIONING_ENV=dev
provisioning env @@ -319,7 +319,7 @@ provisioning env set prod ### Creating Your User Configuration -```text +```bash # Initialize user configuration from template provisioning init config @@ -331,7 +331,7 @@ cp config-examples/config.user.toml ~/.config/provisioning/config.toml #### Developer Setup -```text +```toml [paths] base = "/Users/alice/dev/provisioning" @@ -354,7 +354,7 @@ key_search_paths = [ #### Operations Engineer Setup -```text +```toml [paths] base = "/opt/provisioning" @@ -375,7 +375,7 @@ email = "ops-team@company.com" #### Team Lead Setup -```text +```toml [paths] base = "/home/teamlead/provisioning" @@ -402,7 +402,7 @@ key_search_paths = [ ### Project Configuration File (`provisioning.toml`) -```text +```toml [project] name = "web-application" description = "Main web application infrastructure" @@ -436,7 +436,7 @@ team_email = "platform-team@company.com" ### Infrastructure-Specific Configuration (`.provisioning.toml`) -```text +```toml [infrastructure] name = "production-web-app" environment = "production" @@ -468,7 +468,7 @@ alerting_enabled = true ### Built-in Validation -```text +```bash # Validate current configuration provisioning validate config @@ -486,7 +486,7 @@ provisioning validate config --environment prod Create custom validation in your configuration: -```text +```toml [validation] # Custom validation rules required_sections = ["paths", "providers", "debug"] @@ -510,7 +510,7 @@ min_key_length = 32 #### Issue 1: Path Not Found Errors -```text +```bash # Problem: Base path doesn't exist # Check current configuration provisioning env | grep paths.base @@ -525,7 +525,7 @@ nano ~/.config/provisioning/config.toml #### Issue 2: Interpolation Failures -```text +```bash # Problem: {{env.VARIABLE}} not resolving # Check environment variables env | grep VARIABLE @@ -539,7 +539,7 @@ provisioning --debug validate interpolation validate #### Issue 3: SOPS Encryption Errors -```text +```bash # Problem: Cannot decrypt SOPS files # Check SOPS
configuration provisioning sops config @@ -553,7 +553,7 @@ sops -d encrypted-file.ncl #### Issue 4: Provider Authentication -```text +```bash # Problem: Provider authentication failed # Check provider configuration provisioning show providers @@ -567,7 +567,7 @@ aws configure list # For AWS ### Configuration Debugging -```text +```bash # Show current configuration hierarchy provisioning config show --hierarchy @@ -584,7 +584,7 @@ provisioning config debug providers ### Configuration Reset -```text +```bash # Reset to defaults provisioning config reset @@ -599,7 +599,7 @@ provisioning config backup ### Dynamic Configuration Loading -```text +```toml [dynamic] # Load configuration from external sources config_urls = [ @@ -616,7 +616,7 @@ load_if_exists = [ ### Configuration Templating -```text +```toml [templates] # Template-based configuration base_template = "aws-web-app" @@ -632,7 +632,7 @@ extends = ["base-web", "monitoring", "security"] ### Multi-Region Configuration -```text +```toml [regions] primary = "us-west-2" secondary = "us-east-1" @@ -648,7 +648,7 @@ availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"] ### Configuration Profiles -```text +```toml [profiles] active = "development" @@ -672,7 +672,7 @@ security.strict_mode = true ### 1. Version Control -```text +```bash # Track configuration changes git add provisioning.toml git commit -m "feat(config): add production settings" @@ -683,7 +683,7 @@ git checkout -b config/new-provider ### 2. Documentation -```text +```toml # Document your configuration choices [paths] # Using custom base path for team shared installation @@ -697,7 +697,7 @@ log_level = "debug" # Temporary while debugging network problems ### 3. Validation -```text +```bash # Always validate before committing provisioning validate config git add . && git commit -m "update config" @@ -705,7 +705,7 @@ git add . && git commit -m "update config" ### 4.
Backup -```text +```bash # Regular configuration backups provisioning config export --format yaml > config-backup-$(date +%Y%m%d).yaml @@ -720,7 +720,7 @@ echo '0 2 * * * provisioning config export > ~/backups/config-$(date +\%Y\%m\%d) - Rotate encryption keys regularly - Audit configuration access -```text +```bash # Encrypt sensitive configuration sops -e settings.ncl > settings.encrypted.ncl @@ -732,7 +732,7 @@ git log -p -- provisioning.toml ### Migrating from Environment Variables -```text +```bash # Old: Environment variables export PROVISIONING_DEBUG=true export PROVISIONING_PROVIDER=aws @@ -747,7 +747,7 @@ default = "aws" ### Upgrading Configuration Format -```text +```bash # Check for configuration updates needed provisioning config check-version diff --git a/docs/src/infrastructure/dynamic-secrets-guide.md b/docs/src/infrastructure/dynamic-secrets-guide.md index 2902356..28e1f1c 100644 --- a/docs/src/infrastructure/dynamic-secrets-guide.md +++ b/docs/src/infrastructure/dynamic-secrets-guide.md @@ -11,37 +11,37 @@ below for fast lookup.
#### Generate AWS Credentials (1 hour) -```text +```bash secrets generate aws --role deploy --workspace prod --purpose "deployment" ``` #### Generate SSH Key (2 hours) -```text +```bash secrets generate ssh --ttl 2 --workspace dev --purpose "server access" ``` #### Generate UpCloud Subaccount (2 hours) -```text +```bash secrets generate upcloud --workspace staging --purpose "testing" ``` #### List Active Secrets -```text +```bash secrets list ``` #### Revoke Secret -```text +```bash secrets revoke --reason "no longer needed" ``` #### View Statistics -```text +```bash secrets stats ``` @@ -62,7 +62,7 @@ secrets stats **Base URL**: `http://localhost:9090/api/v1/secrets` -```text +```bash # Generate secret POST /generate @@ -89,7 +89,7 @@ GET /stats ## AWS STS Example -```text +```bash # Generate let creds = secrets generate aws ` --role deploy ` @@ -115,7 +115,7 @@ secrets revoke ($creds.id) --reason "done" ## SSH Key Example -```text +```bash # Generate let key = secrets generate ssh ` --ttl 4 ` @@ -140,7 +140,7 @@ secrets revoke ($key.id) --reason "fixed" **File**: `provisioning/platform/orchestrator/config.defaults.toml` -```text +```toml [secrets] default_ttl_hours = 1 max_ttl_hours = 12 diff --git a/docs/src/infrastructure/infrastructure-from-code-guide.md b/docs/src/infrastructure/infrastructure-from-code-guide.md index b5d45ab..88d3503 100644 --- a/docs/src/infrastructure/infrastructure-from-code-guide.md +++ b/docs/src/infrastructure/infrastructure-from-code-guide.md @@ -15,13 +15,13 @@ organization-specific rules. 
It consists of three main commands: Scan a project directory for detected technologies: -```text +```bash provisioning detect /path/to/project --out json ``` **Output Example:** -```text +```json { "detections": [ {"technology": "nodejs", "confidence": 0.95}, @@ -35,13 +35,13 @@ provisioning detect /path/to/project --out json Get a completeness assessment and recommendations: -```text +```bash provisioning complete /path/to/project --out json ``` **Output Example:** -```text +```json { "completeness": 1.0, "changes_needed": 2, @@ -54,13 +54,13 @@ provisioning complete /path/to/project --out json Orchestrate detection → completion → assessment pipeline: -```text +```bash provisioning ifc /path/to/project --org default ``` **Output:** -```text +```bash ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 🔄 Infrastructure-from-Code Workflow ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ @@ -84,7 +84,7 @@ Scan and detect technologies in a project. **Usage:** -```text +```bash provisioning detect [PATH] [OPTIONS] ``` @@ -101,7 +101,7 @@ provisioning detect [PATH] [OPTIONS] **Examples:** -```text +```bash # Detect with default text output provisioning detect /path/to/project @@ -121,7 +121,7 @@ Analyze infrastructure completeness and recommend changes. **Usage:** -```text +```bash provisioning complete [PATH] [OPTIONS] ``` @@ -138,7 +138,7 @@ provisioning complete [PATH] [OPTIONS] **Examples:** -```text +```bash # Analyze completeness provisioning complete /path/to/project @@ -155,7 +155,7 @@ Run the full Infrastructure-from-Code pipeline. 
**Usage:** -```text +```bash provisioning ifc [PATH] [OPTIONS] ``` @@ -174,7 +174,7 @@ provisioning ifc [PATH] [OPTIONS] **Examples:** -```text +```bash # Run workflow with default rules provisioning ifc /path/to/project @@ -198,7 +198,7 @@ An inference rule tells the system: "If we detect technology X, we should recomm **Rule Structure:** -```text +```yaml version: "1.0.0" organization: "your-org" rules: @@ -214,7 +214,7 @@ rules: Create an organization-specific rules file: -```text +```bash # ACME Corporation rules cat > $PROVISIONING/config/inference-rules/acme-corp.yaml << 'EOF' version: "1.0.0" @@ -247,7 +247,7 @@ EOF Then use them: -```text +```bash provisioning ifc /path/to/project --org acme-corp ``` @@ -268,7 +268,7 @@ If no organization rules are found, the system uses sensible defaults: Human-readable format with visual indicators: -```text +```bash STEP 1: Technology Detection ──────────────────────────── ✓ Detected 2 technologies @@ -282,13 +282,13 @@ STEP 2: Infrastructure Completion Structured format for automation and parsing: -```text +```bash provisioning detect /path/to/project --out json | jq '.detections[0]' ``` Output: -```text +```json { "technology": "nodejs", "confidence": 0.8333333134651184, @@ -300,7 +300,7 @@ Output: Alternative structured format: -```text +```bash provisioning detect /path/to/project --out yaml ``` @@ -308,7 +308,7 @@ provisioning detect /path/to/project --out yaml ### Example 1: Node.js + PostgreSQL Project -```text +```bash # Step 1: Detect $ provisioning detect my-app ✓ Detected: nodejs, express, postgres, docker @@ -326,7 +326,7 @@ $ provisioning ifc my-app --org acme-corp ### Example 2: Python Django Project -```text +```bash $ provisioning detect django-app --out json { "detections": [ @@ -340,7 +340,7 @@ $ provisioning detect django-app --out json ### Example 3: Microservices Architecture -```text +```bash $ provisioning ifc microservices/ --org mycompany --verbose 🔍 Processing microservices/ - service-a: nodejs
+ postgres @@ -356,7 +356,7 @@ $ provisioning ifc microservices/ --org mycompany --verbose ### CI/CD Pipeline Example -```text +```bash #!/bin/bash # Check infrastructure completeness in CI/CD @@ -373,7 +373,7 @@ echo "✅ Infrastructure is complete: $COMPLETENESS" ### Configuration as Code Integration -```text +```bash # Generate JSON for infrastructure config provisioning detect /path/to/project --out json > infra-report.json @@ -389,7 +389,7 @@ done **Solution:** Ensure the provisioning project is properly built: -```text +```bash cd $PROVISIONING/platform cargo build --release --bin provisioning-detector ``` @@ -416,14 +416,14 @@ cargo build --release --bin provisioning-detector Generate a template for a new organization: -```text +```bash # Template will be created with proper structure provisioning rules create --org neworg ``` ### Validate Rule Files -```text +```bash # Check for syntax errors provisioning rules validate /path/to/rules.yaml ``` @@ -432,7 +432,7 @@ provisioning rules validate /path/to/rules.yaml Export as Rust code for embedding: -```text +```bash provisioning rules export myorg --format rust > rules.rs ``` @@ -447,7 +447,7 @@ provisioning rules export myorg --format rust > rules.rs ## Related Commands -```text +```bash # View available taskservs that can be inferred provisioning taskserv list @@ -471,7 +471,7 @@ provisioning env | grep PROVISIONING ### 3-Step Workflow -```text +```bash # 1.
Detect technologies provisioning detect /path/to/project @@ -496,7 +496,7 @@ provisioning ifc /path/to/project --org myorg ### Output Formats -```text +```bash # Text (human-readable) provisioning detect /path --out text @@ -511,13 +511,13 @@ provisioning detect /path --out yaml #### Use Organization Rules -```text +```bash provisioning ifc /path --org acme-corp ``` #### Create Rules File -```text +```bash mkdir -p $PROVISIONING/config/inference-rules cat > $PROVISIONING/config/inference-rules/myorg.yaml << 'EOF' version: "1.0.0" @@ -534,7 +534,7 @@ EOF ### Example: Node.js + PostgreSQL -```text +```bash $ provisioning detect myapp ✓ Detected: nodejs, postgres @@ -549,7 +549,7 @@ $ provisioning ifc myapp --org default ### CI/CD Integration -```text +```bash #!/bin/bash # Check infrastructure is complete before deploy COMPLETENESS=$(provisioning complete . --out json | jq '.completeness') @@ -564,7 +564,7 @@ fi #### Detect Output -```text +```json { "detections": [ {"technology": "nodejs", "confidence": 0.95}, @@ -576,7 +576,7 @@ fi #### Complete Output -```text +```json { "completeness": 1.0, "changes_needed": 2, @@ -624,7 +624,7 @@ fi ### Useful Aliases -```text +```bash # Add to shell config alias detect='provisioning detect' alias complete='provisioning complete' @@ -640,7 +640,7 @@ ifc /my/project --org myorg **Parse JSON in bash:** -```text +```bash provisioning detect . --out json | jq '.detections[] | .technology' | sort | uniq @@ -648,20 +648,20 @@ provisioning detect . --out json | **Watch for changes:** -```text +```bash watch -n 5 'provisioning complete . --out json | jq ".completeness"' ``` **Generate reports:** -```text +```bash provisioning detect . --out yaml > detection-report.yaml provisioning complete . --out yaml > completion-report.yaml ``` **Validate all organizations:** -```text +```bash for org in $PROVISIONING/config/inference-rules/*.yaml; do org_name=$(basename "$org" .yaml) echo "Testing $org_name..." 
diff --git a/docs/src/infrastructure/infrastructure-management.md b/docs/src/infrastructure/infrastructure-management.md index 1024399..cd67e0f 100644 --- a/docs/src/infrastructure/infrastructure-management.md +++ b/docs/src/infrastructure/infrastructure-management.md @@ -26,7 +26,7 @@ This comprehensive guide covers creating, managing, and maintaining infrastructu ### Infrastructure Lifecycle -```text +```text Plan → Create → Deploy → Monitor → Scale → Update → Retire ``` @@ -38,7 +38,7 @@ Each phase has specific commands and considerations. Servers are defined in Nickel configuration files: -```text +```nickel # Example server configuration import models.server @@ -84,7 +84,7 @@ servers: [ #### Creating Servers -```text +```bash # Plan server creation (dry run) provisioning server create --infra my-infra --check @@ -100,7 +100,7 @@ provisioning server create web --infra my-infra #### Managing Existing Servers -```text +```bash # List all servers provisioning server list --infra my-infra @@ -116,7 +116,7 @@ provisioning server status web-01 --infra my-infra #### Server Operations -```text +```bash # Start/stop servers provisioning server start web-01 --infra my-infra provisioning server stop web-01 --infra my-infra @@ -133,7 +133,7 @@ provisioning server update web-01 --infra my-infra #### SSH Access -```text +```bash # SSH to server provisioning server ssh web-01 --infra my-infra @@ -150,7 +150,7 @@ provisioning server copy web-01:/var/log/app.log ./logs/ --infra my-infra #### Server Deletion -```text +```bash # Plan server deletion (dry run) provisioning server delete --infra my-infra --check @@ -179,7 +179,7 @@ Task services are software components installed on servers: ### Task Service Configuration -```text +```nickel # Task service configuration example taskservs: { kubernetes: { @@ -229,7 +229,7 @@ taskservs: { #### Installing Services -```text +```bash # Install single service provisioning taskserv create kubernetes --infra my-infra @@ -245,7 +245,7 @@
provisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra #### Managing Services -```text +```bash # List available services provisioning taskserv list @@ -264,7 +264,7 @@ provisioning taskserv health kubernetes --infra my-infra #### Service Operations -```text +```bash # Start/stop services provisioning taskserv start kubernetes --infra my-infra provisioning taskserv stop kubernetes --infra my-infra @@ -281,7 +281,7 @@ provisioning taskserv configure kubernetes --config cluster.yaml --infra my-infr #### Service Removal -```text +```bash # Remove service provisioning taskserv delete kubernetes --infra my-infra @@ -294,7 +294,7 @@ provisioning taskserv delete kubernetes --servers worker-03 --infra my-infra ### Version Management -```text +```bash # Check for updates provisioning taskserv check-updates --infra my-infra @@ -317,7 +317,7 @@ provisioning taskserv upgrade kubernetes --version 1.29 --infra my-infra Clusters are collections of services that work together to provide functionality: -```text +```nickel # Cluster configuration example clusters: { web_cluster: { @@ -362,7 +362,7 @@ clusters: { #### Creating Clusters -```text +```bash # Create cluster provisioning cluster create web-cluster --infra my-infra @@ -375,7 +375,7 @@ provisioning cluster create web-cluster --deploy --infra my-infra #### Managing Clusters -```text +```bash # List available clusters provisioning cluster list @@ -391,7 +391,7 @@ provisioning cluster status web-cluster --infra my-infra #### Cluster Operations -```text +```bash # Deploy cluster provisioning cluster deploy web-cluster --infra my-infra @@ -407,7 +407,7 @@ provisioning cluster update web-cluster --rolling --infra my-infra #### Cluster Deletion -```text +```bash # Delete cluster provisioning cluster delete web-cluster --infra my-infra @@ -419,7 +419,7 @@ provisioning cluster delete web-cluster --cleanup --infra my-infra ### Network Configuration -```text +```nickel # Network configuration network: { vpc = { @@
-479,7 +479,7 @@ network: { ### Network Commands -```text +```bash # Show network configuration provisioning network show --infra my-infra @@ -497,7 +497,7 @@ provisioning network test --infra my-infra ### Storage Configuration -```text +```nickel # Storage configuration storage: { # Block storage @@ -534,7 +534,7 @@ storage: { ### Storage Commands -```text +```bash # Create storage resources provisioning storage create --infra my-infra @@ -552,7 +552,7 @@ provisioning storage restore --backup latest --infra my-infra ### Monitoring Setup -```text +```bash # Install monitoring stack provisioning taskserv create prometheus --infra my-infra provisioning taskserv create grafana --infra my-infra @@ -564,7 +564,7 @@ provisioning taskserv configure prometheus --config monitoring.yaml --infra my-i ### Health Checks -```text +```bash # Check overall infrastructure health provisioning health check --infra my-infra @@ -579,7 +579,7 @@ provisioning health monitor --infra my-infra --watch ### Metrics and Alerting -```text +```bash # Get infrastructure metrics provisioning metrics get --infra my-infra @@ -594,7 +594,7 @@ provisioning alerts list --infra my-infra ### Cost Monitoring -```text +```bash # Show current costs provisioning cost show --infra my-infra @@ -610,7 +610,7 @@ provisioning cost alert --threshold 1000 --infra my-infra ### Cost Optimization -```text +```bash # Analyze cost optimization opportunities provisioning cost optimize --infra my-infra @@ -625,7 +625,7 @@ provisioning cost recommendations --infra my-infra ### Manual Scaling -```text +```bash # Scale servers provisioning server scale --count 5 --infra my-infra @@ -638,7 +638,7 @@ provisioning cluster scale web-cluster --replicas 10 --infra my-infra ### Auto-scaling Configuration -```text +```nickel # Auto-scaling configuration auto_scaling: { servers = { @@ -672,7 +672,7 @@ auto_scaling: { ### Backup Strategies -```text +```bash # Full infrastructure backup provisioning backup create --type full --infra
my-infra @@ -685,7 +685,7 @@ provisioning backup schedule --daily --time "02:00" --infra my-infra ### Recovery Procedures -```text +```bash # List available backups provisioning backup list --infra my-infra @@ -703,7 +703,7 @@ provisioning restore --backup latest --test --infra my-infra ### Multi-Region Deployment -```text +```nickel # Multi-region configuration regions: { primary = { @@ -735,7 +735,7 @@ regions: { ### Blue-Green Deployment -```text +```bash # Create green environment provisioning generate infra --from production --name production-green @@ -753,7 +753,7 @@ provisioning server delete --infra production --yes ### Canary Deployment -```text +```bash # Create canary environment provisioning cluster create web-cluster-canary --replicas 1 --infra my-infra @@ -775,7 +775,7 @@ provisioning cluster rollback web-cluster-canary --infra my-infra #### Server Creation Failures -```text +```bash # Check provider status provisioning provider status aws @@ -791,7 +791,7 @@ provisioning --debug server create web-01 --infra my-infra #### Service Installation Failures -```text +```bash # Check service prerequisites provisioning taskserv check kubernetes --infra my-infra @@ -807,7 +807,7 @@ provisioning --debug taskserv create kubernetes --infra my-infra #### Network Connectivity Issues -```text +```bash # Test network connectivity provisioning network test --infra my-infra @@ -820,7 +820,7 @@ provisioning network trace --from web-01 --to db-01 --infra my-infra ### Performance Optimization -```text +```bash # Analyze performance bottlenecks provisioning performance analyze --infra my-infra @@ -851,7 +851,7 @@ Testing infrastructure before production deployment helps: Test individual taskservs in isolated containers: -```text +```bash # Quick test (create, run, cleanup automatically) provisioning test quick kubernetes @@ -870,7 +870,7 @@ provisioning test env single redis --infra my-infra Test complete server configurations with multiple taskservs: -```text +```bash #
Simulate web server with multiple taskservs provisioning test env server web-01 [containerd kubernetes cilium] --auto-start @@ -885,7 +885,7 @@ provisioning test env server db-01 [postgres redis] Test complex cluster topologies before production deployment: -```text +```bash # Test 3-node Kubernetes cluster provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start @@ -901,7 +901,7 @@ provisioning test topology load kubernetes_single | ### Managing Test Environments -```text +```bash # List all test environments provisioning test env list @@ -931,7 +931,7 @@ Pre-configured multi-node cluster templates: Typical testing workflow: -```text +```bash # 1. Test new taskserv before deploying provisioning test quick kubernetes @@ -952,7 +952,7 @@ provisioning taskserv create kubernetes --infra production Integrate infrastructure testing into CI/CD pipelines: -```text +```bash # GitLab CI example test-infrastructure: stage: test @@ -998,7 +998,7 @@ Test environments require: Create custom topology configurations: -```text +```toml # custom-topology.toml [my_cluster] name = "Custom Test Cluster" @@ -1023,7 +1023,7 @@ memory_mb = 2048 Load and test custom topology: -```text +```bash provisioning test env cluster custom-app custom-topology.toml --auto-start ``` @@ -1031,7 +1031,7 @@ provisioning test env cluster custom-app custom-topology.toml --auto-start Test taskserv dependencies: -```text +```bash # Test Kubernetes dependencies in order provisioning test quick containerd provisioning test quick etcd @@ -1063,7 +1063,7 @@ For complete test environment documentation: ### 2. Operational Excellence -```text +```bash # Always validate before applying changes provisioning validate config --infra my-infra @@ -1079,7 +1079,7 @@ provisioning backup schedule --daily --infra my-infra ### 3. 
Security -```text +```bash # Regular security updates provisioning taskserv update --security-only --infra my-infra @@ -1092,7 +1092,7 @@ provisioning audit logs --infra my-infra ### 4. Cost Optimization -```text +```bash # Regular cost reviews provisioning cost analyze --infra my-infra diff --git a/docs/src/infrastructure/mode-system-guide.md b/docs/src/infrastructure/mode-system-guide.md index b966215..c4a7417 100644 --- a/docs/src/infrastructure/mode-system-guide.md +++ b/docs/src/infrastructure/mode-system-guide.md @@ -6,7 +6,7 @@ ## Quick Start -```text +```bash # Check current mode provisioning mode current @@ -77,13 +77,13 @@ provisioning mode validate ### Initialize Mode System -```text +```bash provisioning mode init ``` ### Check Current Mode -```text +```bash provisioning mode current # Output: @@ -94,7 +94,7 @@ provisioning mode current ### List All Modes -```text +```bash provisioning mode list # Output: @@ -110,7 +110,7 @@ provisioning mode list ### Switch Mode -```text +```bash # Switch with confirmation provisioning mode switch multi-user @@ -123,7 +123,7 @@ provisioning mode switch multi-user --validate ### Show Mode Details -```text +```bash # Show current mode provisioning mode show @@ -133,7 +133,7 @@ provisioning mode show enterprise ### Validate Mode -```text +```bash # Validate current mode provisioning mode validate @@ -143,7 +143,7 @@ provisioning mode validate cicd ### Compare Modes -```text +```bash provisioning mode compare solo multi-user # Output shows differences in: @@ -160,7 +160,7 @@ provisioning mode compare solo multi-user ### Solo Mode Only -```text +```bash # Start local OCI registry provisioning mode oci-registry start @@ -182,7 +182,7 @@ provisioning mode oci-registry stop ### Solo Mode Workflow -```text +```bash # 1. Initialize (defaults to solo) provisioning workspace init @@ -202,7 +202,7 @@ provisioning taskserv create kubernetes ### Multi-User Mode Workflow -```text +```bash # 1. 
Switch to multi-user mode provisioning mode switch multi-user @@ -226,7 +226,7 @@ provisioning workspace unlock my-infra ### CI/CD Mode Workflow -```text +```yaml # GitLab CI example deploy: stage: deploy @@ -252,7 +252,7 @@ deploy: ### Enterprise Mode Workflow -```text +```bash # 1. Switch to enterprise mode provisioning mode switch enterprise @@ -286,7 +286,7 @@ provisioning workspace unlock prod-deployment ### Mode Templates -```text +```text workspace/config/modes/ ├── solo.yaml # Solo mode configuration ├── multi-user.yaml # Multi-user mode configuration @@ -296,7 +296,7 @@ workspace/config/modes/ ### Active Mode Configuration -```text +```text ~/.provisioning/config/active-mode.yaml ``` @@ -323,7 +323,7 @@ All modes use the following OCI registry namespaces: ### Mode switch fails -```text +```bash # Validate mode first provisioning mode validate @@ -333,7 +333,7 @@ provisioning mode validate --check-requirements ### Cannot start OCI registry (solo mode) -```text +```bash # Check if registry binary is installed which zot @@ -347,7 +347,7 @@ lsof -i :5000 ### Authentication fails (multi-user/cicd/enterprise) -```text +```bash # Check token expiry provisioning auth status @@ -361,7 +361,7 @@ ls -la /etc/provisioning/certs/ ### Workspace locking issues (multi-user/enterprise) -```text +```bash # Check lock status provisioning workspace lock-status @@ -378,7 +378,7 @@ etcdctl endpoint health ### OCI registry connection fails -```text +```bash # Test registry connectivity curl https://harbor.company.local/v2/ @@ -415,26 +415,26 @@ docker login harbor.company.local ### 2. Validate Before Switching -```text +```bash provisioning mode validate ``` ### 3. Backup Active Configuration -```text +```bash # Automatic backup created when switching ls ~/.provisioning/config/active-mode.yaml.backup ``` ### 4. Use Check Mode -```text +```bash provisioning server create --check ``` ### 5.
Lock Workspaces in Multi-User/Enterprise -```text +```bash provisioning workspace lock # ... make changes ... provisioning workspace unlock @@ -442,7 +442,7 @@ provisioning workspace unlock ### 6. Pull Extensions from OCI (Multi-User/CI/CD/Enterprise) -```text +```bash # Don't use local extensions in shared modes provisioning extension pull ``` diff --git a/docs/src/infrastructure/workspaces/workspace-config-architecture.md b/docs/src/infrastructure/workspaces/workspace-config-architecture.md index 0638d30..61ab9fb 100644 --- a/docs/src/infrastructure/workspaces/workspace-config-architecture.md +++ b/docs/src/infrastructure/workspaces/workspace-config-architecture.md @@ -29,7 +29,7 @@ Configuration is loaded in the following order (lowest to highest priority): When a workspace is initialized, the following structure is created: -```text +```text {workspace}/ ├── config/ │ ├── provisioning.yaml # Main workspace config (generated from template) @@ -81,7 +81,7 @@ Templates support the following interpolation variables: ### Command -```text +```bash # Using the workspace init function nu -c "use provisioning/core/nulib/lib_provisioning/workspace/init.nu *; workspace-init 'my-workspace' '/path/to/workspace' @@ -111,7 +111,7 @@ User context files are stored per workspace: ### Example -```text +```yaml workspace: name: "my-workspace" path: "/path/to/my-workspace" @@ -132,7 +132,7 @@ providers: ### 1. Determine Active Workspace -```text +```bash # Check user config directory for active workspace let user_config_dir = ~/Library/Application Support/provisioning/ let active_workspace = (find workspace with active: true in ws_*.yaml files) @@ -140,14 +140,14 @@ let active_workspace = (find workspace with active: true in ws_*.yaml files) ### 2. Load Workspace Config -```text +```bash # Load main workspace config let workspace_config = {workspace.path}/config/provisioning.yaml ``` ### 3.
Load Provider Configs -```text +```bash # Merge all provider configs for provider in {workspace.path}/config/providers/*.toml { merge provider config @@ -156,7 +156,7 @@ for provider in {workspace.path}/config/providers/*.toml { ### 4. Load Platform Configs -```text +```bash # Merge all platform configs for platform in {workspace.path}/config/platform/*.toml { merge platform config @@ -165,7 +165,7 @@ for platform in {workspace.path}/config/platform/*.toml { ### 5. Apply User Context -```text +```bash # Apply user-specific overrides let user_context = ~/Library/Application Support/provisioning/ws_{name}.yaml merge user_context (highest config priority) @@ -173,7 +173,7 @@ merge user_context (highest config priority) ### 6. Apply Environment Variables -```text +```bash # Final overrides from environment PROVISIONING_DEBUG=true PROVISIONING_LOG_LEVEL=debug @@ -185,7 +185,7 @@ PROVISIONING_PROVIDER=aws ### Before (ENV-based) -```text +```bash export PROVISIONING=/usr/local/provisioning export PROVISIONING_INFRA_PATH=/path/to/infra export PROVISIONING_DEBUG=true @@ -194,7 +194,7 @@ export PROVISIONING_DEBUG=true ### After (Workspace-based) -```text +```bash # Initialize workspace workspace-init "production" "/workspaces/prod" --providers ["aws"] --activate @@ -213,26 +213,26 @@ workspace-init "production" "/workspaces/prod" --providers ["aws"] --activate ### Initialize Workspace -```text +```bash use provisioning/core/nulib/lib_provisioning/workspace/init.nu * workspace-init "my-workspace" "/path/to/workspace" --providers ["aws" "local"] --activate ``` ### List Workspaces -```text +```bash workspace-list ``` ### Activate Workspace -```text +```bash workspace-activate "my-workspace" ``` ### Get Active Workspace -```text +```bash workspace-get-active ``` @@ -262,7 +262,7 @@ workspace-get-active ### Main Workspace Config (provisioning.yaml) -```text +```yaml workspace: name: string version: string @@ -293,7 +293,7 @@ providers: ### Provider Config (providers/*.toml)
-```text +```toml [provider] name = "aws" enabled = true @@ -310,7 +310,7 @@ cache = "{workspace}/.providers/aws/cache" ### User Context (ws_{name}.yaml) -```text +```yaml workspace: name: string path: string @@ -357,25 +357,25 @@ The workspace .gitignore excludes: ### No Active Workspace Error -```text +```bash Error: No active workspace found. Please initialize or activate a workspace. ``` **Solution**: Initialize or activate a workspace: -```text +```bash workspace-init "my-workspace" "/path/to/workspace" --activate ``` ### Config File Not Found -```text +```bash Error: Required configuration file not found: {workspace}/config/provisioning.yaml ``` **Solution**: The workspace config is corrupted or deleted. Re-initialize: -```text +```bash workspace-init "workspace-name" "/existing/path" --providers ["aws"] ``` @@ -383,7 +383,7 @@ workspace-init "workspace-name" "/existing/path" --providers ["aws"] **Solution**: Add provider config to workspace: -```text +```bash # Generate provider config manually generate-provider-config "/workspace/path" "workspace-name" "aws" ``` diff --git a/docs/src/infrastructure/workspaces/workspace-config-commands.md b/docs/src/infrastructure/workspaces/workspace-config-commands.md index b26e3c2..8f0cf9e 100644 --- a/docs/src/infrastructure/workspaces/workspace-config-commands.md +++ b/docs/src/infrastructure/workspaces/workspace-config-commands.md @@ -21,7 +21,7 @@ The workspace configuration management commands provide a comprehensive set of t Display the complete workspace configuration in JSON, YAML, TOML, and other formats. -```text +```bash # Show active workspace config (YAML format) provisioning workspace config show @@ -44,7 +44,7 @@ provisioning workspace config show my-workspace --out json Validate all configuration files for syntax and required sections.
-```text +```bash # Validate active workspace provisioning workspace config validate @@ -65,7 +65,7 @@ provisioning workspace config validate my-workspace Generate a provider configuration file from a template. -```text +```bash # Generate AWS provider config for active workspace provisioning workspace config generate provider aws @@ -88,7 +88,7 @@ provisioning workspace config generate provider local Open configuration files in your editor for modification. -```text +```bash # Edit main workspace config provisioning workspace config edit main @@ -118,7 +118,7 @@ provisioning workspace config edit provider upcloud --infra my-workspace Display the configuration loading hierarchy and precedence. -```text +```bash # Show hierarchy for active workspace provisioning workspace config hierarchy @@ -138,7 +138,7 @@ provisioning workspace config hierarchy my-workspace List all configuration files for a workspace. -```text +```bash # List all configs provisioning workspace config list @@ -177,7 +177,7 @@ All config commands support two ways to specify the workspace: Workspace configurations are organized in a standard structure: -```text +```text {workspace}/ ├── config/ │ ├── provisioning.yaml # Main workspace config @@ -208,7 +208,7 @@ Higher priority values override lower priority values. ### Complete Workflow -```text +```bash # 1. Create new workspace with activation provisioning workspace init my-project ~/workspaces/my-project --providers [aws,local] --activate @@ -236,7 +236,7 @@ provisioning workspace config validate ### Multi-Workspace Management -```text +```bash # Create multiple workspaces provisioning workspace init dev ~/workspaces/dev --activate provisioning workspace init staging ~/workspaces/staging @@ -254,7 +254,7 @@ provisioning workspace config edit provider aws --infra prod ### Configuration Troubleshooting -```text +```bash # 1.
Validate all configs provisioning workspace config validate @@ -275,7 +275,7 @@ provisioning workspace config validate Config commands integrate seamlessly with other workspace operations: -```text +```bash # Create workspace with providers provisioning workspace init my-app ~/apps/my-app --providers [aws,upcloud] --activate diff --git a/docs/src/infrastructure/workspaces/workspace-enforcement-guide.md b/docs/src/infrastructure/workspaces/workspace-enforcement-guide.md index 51288b5..0f5c116 100644 --- a/docs/src/infrastructure/workspaces/workspace-enforcement-guide.md +++ b/docs/src/infrastructure/workspaces/workspace-enforcement-guide.md @@ -65,7 +65,7 @@ Only informational and workspace management commands work without a workspace: If you run a command without an active workspace, you'll see: -```text +```bash ✗ Workspace Required No active workspace is configured. @@ -90,7 +90,7 @@ To get started: Each workspace maintains metadata in `.provisioning/metadata.yaml`: -```text +```yaml workspace: name: "my-workspace" path: "/path/to/workspace" @@ -134,7 +134,7 @@ compatibility: View workspace version information: -```text +```bash # Check active workspace version provisioning workspace version @@ -147,7 +147,7 @@ provisioning workspace version --format json **Example Output**: -```text +```bash Workspace Version Information System: @@ -187,7 +187,7 @@ Migration is required when: #### Scenario 1: No Metadata (Unknown Version) -```text +```bash Workspace version is incompatible: Workspace: my-workspace Path: /path/to/workspace @@ -202,14 +202,14 @@ This workspace needs migration: #### Scenario 2: Migration Available -```text +```bash ℹ Migration available: Workspace can be updated from 2.0.0 to 2.0.5 Run: provisioning workspace migrate my-workspace ``` #### Scenario 3: Workspace Too New -```text +```bash Workspace version (3.0.0) is newer than system (2.0.5) Workspace is newer than the system: @@ -225,19 +225,19 @@ Workspace is newer than the system: Migrate active
workspace to current system version: -```text +```bash provisioning workspace migrate ``` #### Migrate Specific Workspace -```text +```bash provisioning workspace migrate my-workspace ``` #### Migration Options -```text +```bash # Skip backup (not recommended) provisioning workspace migrate --skip-backup @@ -261,7 +261,7 @@ When you run a migration: **Example Migration Output**: -```text +```bash Workspace Migration Workspace: my-workspace @@ -292,7 +292,7 @@ Migrating workspace to version 2.0.5... #### List Backups -```text +```bash # List backups for active workspace provisioning workspace list-backups @@ -302,7 +302,7 @@ provisioning workspace list-backups my-workspace **Example Output**: -```text +```bash Workspace Backups for my-workspace name created reason size @@ -312,7 +312,7 @@ my-workspace_backup_20251005_1500 2025-10-05T15:00:00Z pre_migration 2.1 MB #### Restore from Backup -```text +```bash # Restore workspace from backup provisioning workspace restore-backup /path/to/backup @@ -322,7 +322,7 @@ provisioning workspace restore-backup /path/to/backup --force **Restore Process**: -```text +```bash Restore Workspace from Backup Backup: /path/.workspace_backups/my-workspace_backup_20251006_1200 @@ -344,7 +344,7 @@ Continue with restore? 
(y/N): y ### Workspace Version Commands -```text +```bash # Show workspace version information provisioning workspace version [workspace-name] [--format table|json|yaml] @@ -363,7 +363,7 @@ provisioning workspace restore-backup [--force] ### Workspace Management Commands -```text +```bash # List all workspaces provisioning workspace list @@ -391,7 +391,7 @@ provisioning workspace remove [--force] **Solution**: Activate or create a workspace -```text +```bash # List available workspaces provisioning workspace list @@ -408,7 +408,7 @@ provisioning workspace init new-workspace **Solution**: Run migration to fix structure -```text +```bash provisioning workspace migrate my-workspace ``` @@ -416,7 +416,7 @@ provisioning workspace migrate my-workspace **Solution**: Run migration to upgrade workspace -```text +```bash provisioning workspace migrate ``` @@ -424,7 +424,7 @@ provisioning workspace migrate **Solution**: Restore from automatic backup -```text +```bash # List backups provisioning workspace list-backups @@ -442,7 +442,7 @@ provisioning workspace restore-backup /path/to/backup **Solutions**: -```text +```bash # Check workspace compatibility provisioning workspace check-compatibility my-workspace @@ -462,7 +462,7 @@ provisioning workspace register my-workspace /new/path --activate Create workspaces for different environments: -```text +```bash provisioning workspace init dev ~/workspaces/dev --activate provisioning workspace init staging ~/workspaces/staging provisioning workspace init production ~/workspaces/production @@ -472,7 +472,7 @@ provisioning workspace init production ~/workspaces/production Never use `--skip-backup` for important workspaces. Backups are cheap, data loss is expensive. 
-```text +```bash # Good: Default with backup provisioning workspace migrate @@ -484,7 +484,7 @@ provisioning workspace migrate --skip-backup # DON'T DO THIS Before major operations, verify workspace compatibility: -```text +```bash provisioning workspace check-compatibility ``` @@ -492,7 +492,7 @@ provisioning workspace check-compatibility After upgrading the provisioning system: -```text +```bash # Check if migration available provisioning workspace version @@ -504,7 +504,7 @@ provisioning workspace migrate Don't immediately delete old backups: -```text +```bash # List backups provisioning workspace list-backups @@ -515,7 +515,7 @@ provisioning workspace list-backups Initialize git in workspace directory: -```text +```bash cd ~/workspaces/my-workspace git init git add config/ infra/ @@ -524,7 +524,7 @@ git commit -m "Initial workspace configuration" Exclude runtime and cache directories in `.gitignore`: -```text +```text .cache/ .runtime/ .provisioning/ @@ -535,7 +535,7 @@ Exclude runtime and cache directories in `.gitignore`: If you need custom migration steps, document them: -```text +```bash # Create migration notes echo "Custom steps for v2 to v3 migration" > MIGRATION_NOTES.md ``` @@ -546,7 +546,7 @@ echo "Custom steps for v2 to v3 migration" > MIGRATION_NOTES.md Each migration is recorded in workspace metadata: -```text +```yaml migration_history: - from_version: "unknown" to_version: "2.0.5" @@ -565,7 +565,7 @@ migration_history: View migration history: -```text +```bash provisioning workspace version --format yaml | grep -A 10 "migration_history" ``` @@ -582,7 +582,7 @@ The workspace enforcement and version tracking system provides: **Key Commands**: -```text +```bash # Create workspace provisioning workspace init my-workspace --activate @@ -608,7 +608,7 @@ For more information, see: Check the troubleshooting section or run: -```text +```bash provisioning workspace check-compatibility ``` diff --git a/docs/src/infrastructure/workspaces/workspace-guide.md
b/docs/src/infrastructure/workspaces/workspace-guide.md index 6161e76..322fcd8 100644 --- a/docs/src/infrastructure/workspaces/workspace-guide.md +++ b/docs/src/infrastructure/workspaces/workspace-guide.md @@ -18,7 +18,7 @@ This guide covers: ## Quick Start -```text +```bash # List all workspaces provisioning workspace list diff --git a/docs/src/infrastructure/workspaces/workspace-infra-reference.md b/docs/src/infrastructure/workspaces/workspace-infra-reference.md index 5fcfe00..d8e63f3 100644 --- a/docs/src/infrastructure/workspaces/workspace-infra-reference.md +++ b/docs/src/infrastructure/workspaces/workspace-infra-reference.md @@ -14,7 +14,7 @@ eliminates the need to specify infrastructure separately and enables convenient Use the `-ws` flag with `workspace:infra` notation: -```text +```bash # Use production workspace with sgoyol infrastructure for this command only provisioning server list -ws production:sgoyol @@ -26,7 +26,7 @@ provisioning taskserv create kubernetes Activate a workspace with a default infrastructure: -```text +```bash # Activate librecloud workspace and set wuji as default infra provisioning workspace activate librecloud:wuji @@ -38,7 +38,7 @@ provisioning server list ### Basic Format -```text +```bash workspace:infra ``` @@ -93,7 +93,7 @@ When no infrastructure is explicitly specified, the system uses this priority or Use `-ws` to override workspace:infra for a single command: -```text +```bash # Currently in librecloud:wuji context provisioning server list # Shows librecloud:wuji @@ -108,7 +108,7 @@ provisioning server list # Shows librecloud:wuji again Set a workspace as active with a default infrastructure: -```text +```bash # List available workspaces provisioning workspace list @@ -124,7 +124,7 @@ provisioning taskserv create kubernetes The system auto-detects workspace and infrastructure from your current directory: -```text +```bash # Your workspace structure workspace_librecloud/ infra/ @@ -145,7 +145,7 @@ provisioning server list # 
Switches to another Set a workspace-specific default infrastructure: -```text +```bash # During activation provisioning workspace activate librecloud:wuji @@ -160,7 +160,7 @@ provisioning workspace list ### Workspace Commands -```text +```bash # Activate workspace with infra provisioning workspace activate workspace:infra @@ -182,7 +182,7 @@ provisioning workspace get-default-infra workspace_name ### Common Commands with `-ws` -```text +```bash # Server operations provisioning server create -ws workspace:infra provisioning server list -ws workspace:infra @@ -235,7 +235,7 @@ provisioning infra list -ws workspace:infra The system uses `$env.TEMP_WORKSPACE` for temporal overrides: -```text +```bash # Set temporarily (via -ws flag automatically) $env.TEMP_WORKSPACE = "production" @@ -250,7 +250,7 @@ hide-env TEMP_WORKSPACE ### Validating Notation -```text +```bash # Valid notation formats librecloud:wuji # Standard format production:sgoyol.v2 # With dots and hyphens @@ -263,7 +263,7 @@ lib-cloud_01:wu-ji.v2 # Mix of all allowed chars ### Error Cases -```text +```bash # Workspace not found provisioning workspace activate unknown:infra # Error: Workspace 'unknown' not found in registry @@ -283,7 +283,7 @@ provisioning workspace activate "" Default infrastructure is stored in `~/Library/Application Support/provisioning/user_config.yaml`: -```text +```yaml active_workspace: "librecloud" workspaces: @@ -302,7 +302,7 @@ workspaces: In `provisioning/schemas/workspace_config.ncl`: -```text +```nickel { InfraConfig = { current | String, # Infrastructure context settings @@ -315,7 +315,7 @@ In `provisioning/schemas/workspace_config.ncl`: ### 1. Use Persistent Activation for Long Sessions -```text +```bash # Good: Activate at start of session provisioning workspace activate production:sgoyol @@ -326,7 +326,7 @@ provisioning taskserv create kubernetes ### 2.
Use Temporal Override for Ad-Hoc Operations -```text +```bash # Good: Quick one-off operation provisioning server list -ws production:other-infra @@ -337,7 +337,7 @@ provisioning taskserv list -ws prod:infra1 # Better to activate once ### 3. Navigate with PWD for Context Awareness -```text +```bash # Good: Navigate to infrastructure directory cd workspace_librecloud/infra/wuji provisioning server list # Auto-detects context @@ -347,7 +347,7 @@ provisioning server list # Auto-detects context ### 4. Set Meaningful Defaults -```text +```bash # Good: Default to production infrastructure provisioning workspace activate production:main-infra @@ -360,7 +360,7 @@ provisioning workspace activate production:main-infra **Solution**: Register the workspace first -```text +```bash provisioning workspace register librecloud /path/to/workspace_librecloud ``` @@ -368,7 +368,7 @@ provisioning workspace register librecloud /path/to/workspace_librecloud **Solution**: Verify infrastructure directory exists -```text +```bash ls workspace_librecloud/infra/ # Check available infras provisioning workspace activate librecloud:wuji # Use correct name ``` @@ -377,7 +377,7 @@ provisioning workspace activate librecloud:wuji # Use correct name **Solution**: Ensure you're using `-ws` flag correctly -```text +```bash # Correct provisioning server list -ws production:sgoyol @@ -392,7 +392,7 @@ provisioning -ws production:sgoyol server list **Solution**: Navigate to proper infrastructure directory -```text +```bash # Must be in workspace structure cd workspace_name/infra/infra_name @@ -404,7 +404,7 @@ provisioning server list ### Old Way -```text +```bash provisioning workspace activate librecloud provisioning --infra wuji server list provisioning --infra wuji taskserv create kubernetes @@ -412,7 +412,7 @@ provisioning --infra wuji taskserv create kubernetes ### New Way -```text +```bash provisioning workspace activate librecloud:wuji provisioning server list provisioning taskserv create kubernetes 
@@ -429,7 +429,7 @@ provisioning taskserv create kubernetes All existing commands and flags continue to work: -```text +```bash # Old syntax still works provisioning --infra wuji server list diff --git a/docs/src/infrastructure/workspaces/workspace-setup.md b/docs/src/infrastructure/workspaces/workspace-setup.md index 3ad88d0..ac221e0 100644 --- a/docs/src/infrastructure/workspaces/workspace-setup.md +++ b/docs/src/infrastructure/workspaces/workspace-setup.md @@ -6,7 +6,7 @@ This guide shows you how to set up a new infrastructure workspace with Nickel-ba ### 1. Create a New Workspace (Automatic) -```text +```bash # Interactive workspace creation with prompts provisioning workspace init @@ -25,7 +25,7 @@ When you run `provisioning workspace init`, the system automatically: After running `workspace init`, your workspace has this structure: -```text +```text my_workspace/ ├── config/ │ ├── config.ncl # Master Nickel configuration @@ -53,7 +53,7 @@ my_workspace/ The `config/config.ncl` file is the master configuration for your workspace: -```text +```nickel { workspace = { name = "my_workspace", @@ -101,7 +101,7 @@ These guides are automatically generated based on your workspace's: After creation, edit the Nickel configuration files: -```text +```bash # Edit master configuration vim config/config.ncl @@ -121,7 +121,7 @@ nickel typecheck config/config.ncl Each workspace gets 4 auto-generated guides in the `docs/` directory: -```text +```bash cd my_workspace # Overview and quick start @@ -141,7 +141,7 @@ cat docs/troubleshooting.md Edit the Nickel configuration files to suit your needs: -```text +```bash # Master configuration (providers, settings) vim config/config.ncl @@ -154,7 +154,7 @@ vim infra/default/servers.ncl ### 3.
Validate Your Configuration -```text +```bash # Check Nickel syntax nickel typecheck config/config.ncl nickel typecheck infra/default/main.ncl @@ -167,7 +167,7 @@ provisioning validate config To add more infrastructure environments: -```text +```bash # Create new infrastructure directory mkdir infra/production mkdir infra/staging @@ -184,7 +184,7 @@ vim infra/production/servers.ncl To use cloud providers (UpCloud, AWS, etc.), update `config/config.ncl`: -```text +```nickel providers = { upcloud = { name = "upcloud", @@ -208,25 +208,25 @@ providers = { ### List Workspaces -```text +```bash provisioning workspace list ``` ### Activate a Workspace -```text +```bash provisioning workspace activate my_workspace ``` ### Show Active Workspace -```text +```bash provisioning workspace active ``` ### Deploy Infrastructure -```text +```bash # Dry-run first (check mode) provisioning -c server create @@ -241,7 +241,7 @@ provisioning server list ### Invalid Nickel Syntax -```text +```bash # Check syntax nickel typecheck config/config.ncl diff --git a/docs/src/infrastructure/workspaces/workspace-switching-guide.md b/docs/src/infrastructure/workspaces/workspace-switching-guide.md index b60300c..079f13f 100644 --- a/docs/src/infrastructure/workspaces/workspace-switching-guide.md +++ b/docs/src/infrastructure/workspaces/workspace-switching-guide.md @@ -13,13 +13,13 @@ manually editing configuration files. ### List Available Workspaces -```text +```bash provisioning workspace list ``` Output: -```text +```bash Registered Workspaces: ● librecloud @@ -35,13 +35,13 @@ The green ● indicates the currently active workspace.
### Check Active Workspace -```text +```bash provisioning workspace active ``` Output: -```text +```bash Active Workspace: Name: librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud @@ -50,7 +50,7 @@ Active Workspace: ### Switch to Another Workspace -```text +```bash # Option 1: Using activate provisioning workspace activate production @@ -60,7 +60,7 @@ provisioning workspace switch production Output: -```text +```bash ✓ Workspace 'production' activated Current workspace: production @@ -71,7 +71,7 @@ Path: /opt/workspaces/production ### Register a New Workspace -```text +```bash # Register without activating provisioning workspace register my-project ~/workspaces/my-project @@ -81,7 +81,7 @@ provisioning workspace register my-project ~/workspaces/my-project --activate ### Remove Workspace from Registry -```text +```bash # With confirmation prompt provisioning workspace remove old-workspace @@ -101,7 +101,7 @@ All workspace information is stored in a central user configuration file: **Structure**: -```text +```yaml # Active workspace (current workspace in use) active_workspace: "librecloud" @@ -152,7 +152,7 @@ metadata: You can set global user preferences that apply across all workspaces: -```text +```bash # Get a preference value provisioning workspace get-preference editor @@ -176,7 +176,7 @@ provisioning workspace preferences List workspaces in different formats: -```text +```bash # Table format (default) provisioning workspace list @@ -191,7 +191,7 @@ provisioning workspace list --format yaml Activate workspace without output messages: -```text +```bash provisioning workspace activate production --quiet ``` @@ -212,8 +212,7 @@ For a workspace to be activated, it must have: ├── platform/ # Optional └── kms.toml # Optional -```text - +``` 3.
**Main config file**: Must have `config/provisioning.yaml` If these requirements are not met, the activation will fail with helpful error messages: @@ -223,14 +222,12 @@ If these requirements are not met, the activation will fail with helpful error m 💡 Available workspaces: [list of workspaces] 💡 Register it first with: provisioning workspace register my-project -```text - +``` ``` ✗ Workspace is not migrated to new config system 💡 Missing: /path/to/workspace/config 💡 Run migration: provisioning workspace migrate my-project -```text - +``` ## Migration from Old System If you have workspaces using the old context system (`ws_{name}.yaml` files), they still work but you should register them in the new system: @@ -241,8 +238,7 @@ provisioning workspace register old-workspace ~/workspaces/old-workspace # Activate it provisioning workspace activate old-workspace -```text - +``` The old `ws_{name}.yaml` files are still supported for backward compatibility, but the new centralized system is recommended. ## Best Practices @@ -263,8 +259,7 @@ provisioning workspace register dev-local ~/workspaces/dev # ❌ Avoid provisioning workspace register ws1 ~/workspaces/workspace1 provisioning workspace register temp ~/workspaces/t -```text - +``` ### 3. **Keep Workspaces Organized** Store all workspaces in a consistent location: @@ -275,8 +270,7 @@ Store all workspaces in a consistent location: ├── staging/ ├── development/ └── testing/ -```text - +``` ### 4. **Regular Cleanup** Remove workspaces you no longer use: @@ -287,8 +281,7 @@ provisioning workspace list # Remove old workspace provisioning workspace remove old-workspace -```text - +``` ### 5.
**Backup User Config** Periodically backup your user configuration: @@ -296,8 +289,7 @@ Periodically backup your user configuration: ``` cp "~/Library/Application Support/provisioning/user_config.yaml" "~/Library/Application Support/provisioning/user_config.yaml.backup" -```text - +``` ## Troubleshooting ### Workspace Not Found @@ -308,8 +300,7 @@ cp "~/Library/Application Support/provisioning/user_config.yaml" ``` provisioning workspace register name /path/to/workspace -```text - +``` ### Missing Configuration **Problem**: `✗ Missing workspace configuration` @@ -318,8 +309,7 @@ provisioning workspace register name /path/to/workspace ``` provisioning workspace migrate name -```text - +``` ### Directory Not Found **Problem**: `✗ Workspace directory not found: /path/to/workspace` @@ -332,8 +322,7 @@ provisioning workspace migrate name ``` provisioning workspace remove name provisioning workspace register name /new/path -```text - +``` ### Corrupted User Config **Problem**: `Error: Failed to parse user config` @@ -342,15 +331,13 @@ provisioning workspace register name /new/path ``` ls -la "~/Library/Application Support/provisioning/user_config.yaml"* -```text - +``` Restore from backup if needed: ``` cp "~/Library/Application Support/provisioning/user_config.yaml.backup.TIMESTAMP" "~/Library/Application Support/provisioning/user_config.yaml" -```text - +``` ## CLI Commands Reference | Command | Alias | Description | @@ -378,8 +365,7 @@ The workspace switching system is fully integrated with the new target-based con 4. User context ~/Library/Application Support/provisioning/ws_{name}.yaml (legacy) 5. User config ~/Library/Application Support/provisioning/user_config.yaml (new) 6.
Environment variables PROVISIONING_* -```text - +``` ### Example Workflow ``` # Switch to production workspace provisioning workspace switch production # Verify active workspace provisioning workspace active # Deploy infrastructure (uses production workspace config) provisioning server create provisioning taskserv create kubernetes # Switch back to development provisioning workspace switch dev # All commands now use dev workspace config -```text - +``` ## Nickel Workspace Configuration Starting with v3.7.0, workspaces use **Nickel** for type-safe, schema-validated configurations. @@ -423,8 +408,7 @@ Starting with v3.7.0, workspaces use **Nickel** for type-safe, schema-validated config = "/path/to/workspace/config", }, } -```text - +``` ### Benefits of Nickel Configuration - ✅ **Type Safety**: Catch configuration errors at load time, not runtime @@ -450,8 +434,7 @@ provisioning workspace config validate # Show configuration hierarchy provisioning workspace config hierarchy -```text - +``` ## See Also - **Configuration Guide**: `docs/architecture/adr/ADR-010-configuration-format-strategy.md` diff --git a/docs/src/infrastructure/workspaces/workspace-switching-system.md b/docs/src/infrastructure/workspaces/workspace-switching-system.md index 7b56640..c4a3525 100644 --- a/docs/src/infrastructure/workspaces/workspace-switching-system.md +++ b/docs/src/infrastructure/workspaces/workspace-switching-system.md @@ -17,7 +17,7 @@ configuration files. This builds upon the target-based configuration system. ## Workspace Management Commands -```text +```bash # List all registered workspaces provisioning workspace list @@ -50,7 +50,7 @@ provisioning workspace get-preference **Structure**: -```text +```yaml # Active workspace (current workspace in use) active_workspace: "librecloud" @@ -82,7 +82,7 @@ metadata: ## Usage Example -```text +```bash # Start with workspace librecloud active $ provisioning workspace active Active Workspace: @@ -128,7 +128,7 @@ The workspace switching system integrates seamlessly with the configuration syst **Configuration Hierarchy** (Priority: Low → High): -```text +```text 1. Workspace config workspace/{name}/config/provisioning.yaml 2.
Provider configs workspace/{name}/config/providers/*.toml 3. Platform configs workspace/{name}/config/platform/*.toml diff --git a/docs/src/integration/gitea-integration-guide.md b/docs/src/integration/gitea-integration-guide.md index 989c2d9..7ad221a 100644 --- a/docs/src/integration/gitea-integration-guide.md +++ b/docs/src/integration/gitea-integration-guide.md @@ -32,7 +32,7 @@ The Gitea integration provides: ### Architecture -```text +```text ┌─────────────────────────────────────────────────────────┐ │ Provisioning System │ ├─────────────────────────────────────────────────────────┤ @@ -74,7 +74,7 @@ The Gitea integration provides: Edit your `provisioning/schemas/modes.ncl` or workspace config: -```text +```nickel import provisioning.gitea as gitea # Local Docker deployment @@ -121,7 +121,7 @@ For local Gitea: 4. Go to Settings → Applications → Generate New Token 5. Save token to encrypted file: -```text +```bash # Create encrypted token file echo "your-gitea-token" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-token.enc ``` @@ -134,7 +134,7 @@ For remote Gitea: #### 3.
Verify Setup -```text +```bash # Check Gitea status provisioning gitea status @@ -153,7 +153,7 @@ provisioning gitea user When creating a new workspace, enable git integration: -```text +```bash # Initialize new workspace with Gitea provisioning workspace init my-workspace --git --remote gitea @@ -171,7 +171,7 @@ This will: ### Clone Existing Workspace -```text +```bash # Clone from Gitea provisioning workspace clone workspaces/my-workspace ./workspace_my-workspace @@ -181,7 +181,7 @@ provisioning workspace clone my-workspace ./workspace_my-workspace ### Push/Pull Changes -```text +```bash # Push workspace changes cd workspace_my-workspace provisioning workspace push --message "Updated infrastructure configs" @@ -195,7 +195,7 @@ provisioning workspace sync ### Branch Management -```text +```bash # Create branch provisioning workspace branch create feature-new-cluster @@ -211,7 +211,7 @@ provisioning workspace branch delete feature-new-cluster ### Git Status -```text +```bash # Get workspace git status provisioning workspace git status @@ -236,7 +236,7 @@ Distributed locking prevents concurrent modifications to workspaces using Gitea ### Acquire Lock -```text +```bash # Acquire write lock provisioning gitea lock acquire my-workspace write --operation "Deploying servers" @@ -251,7 +251,7 @@ provisioning gitea lock acquire my-workspace write ### Check Lock Status -```text +```bash # List locks for workspace provisioning gitea lock list my-workspace @@ -264,14 +264,14 @@ provisioning gitea lock info my-workspace 42 ### Release Lock -```text +```bash # Release lock provisioning gitea lock release my-workspace 42 ``` ### Force Release Lock (Admin) -```text +```bash # Force release stuck lock provisioning gitea lock force-release my-workspace 42 --reason "Deployment failed, releasing lock" @@ -281,7 +281,7 @@ provisioning gitea lock force-release my-workspace 42 Use `with-workspace-lock` for automatic lock management: -```text +```bash use 
lib_provisioning/gitea/locking.nu * with-workspace-lock "my-workspace" "deploy" "Server deployment" { @@ -292,7 +292,7 @@ with-workspace-lock "my-workspace" "deploy" "Server deployment" { ### Lock Cleanup -```text +```bash # Cleanup expired locks provisioning gitea lock cleanup ``` @@ -305,7 +305,7 @@ Publish taskservs, providers, and clusters as versioned releases on Gitea. ### Publish Extension -```text +```bash # Publish taskserv provisioning gitea extension publish ./extensions/taskservs/database/postgres @@ -334,7 +334,7 @@ This will: ### List Published Extensions -```text +```bash # List all extensions provisioning gitea extension list @@ -346,7 +346,7 @@ provisioning gitea extension list --type cluster ### Download Extension -```text +```bash # Download specific version provisioning gitea extension download postgres 1.2.0 --destination ./extensions/taskservs/database @@ -356,14 +356,14 @@ provisioning gitea extension download postgres 1.2.0 ### Extension Metadata -```text +```bash # Get extension information provisioning gitea extension info postgres 1.2.0 ``` ### Publishing Workflow -```text +```bash # 1. Make changes to extension cd extensions/taskservs/database/postgres @@ -384,7 +384,7 @@ provisioning gitea extension publish . 
1.2.0 ### Start/Stop Gitea -```text +```bash # Start Gitea (local mode) provisioning gitea start @@ -397,7 +397,7 @@ provisioning gitea restart ### Check Status -```text +```bash # Get service status provisioning gitea status @@ -414,7 +414,7 @@ provisioning gitea status ### View Logs -```text +```bash # View recent logs provisioning gitea logs @@ -427,7 +427,7 @@ provisioning gitea logs --lines 200 ### Install Gitea Binary -```text +```bash # Install latest version provisioning gitea install @@ -444,7 +444,7 @@ provisioning gitea install --install-dir ~/bin ### Repository Operations -```text +```bash use lib_provisioning/gitea/api_client.nu * # Create repository @@ -462,7 +462,7 @@ list-repositories "my-org" ### Release Operations -```text +```bash # Create release create-release "my-org" "my-repo" "v1.0.0" "Release Name" "Notes" @@ -478,7 +478,7 @@ list-releases "my-org" "my-repo" ### Workspace Operations -```text +```bash use lib_provisioning/gitea/workspace_git.nu * # Initialize workspace git @@ -496,7 +496,7 @@ pull-workspace "./workspace_my-workspace" ### Locking Operations -```text +```bash use lib_provisioning/gitea/locking.nu * # Acquire lock @@ -522,7 +522,7 @@ list-workspace-locks "my-workspace" **Solutions**: -```text +```bash # Check Docker status docker ps @@ -543,7 +543,7 @@ provisioning gitea start **Solutions**: -```text +```bash # Verify token file exists ls ~/.provisioning/secrets/gitea-token.enc @@ -561,7 +561,7 @@ echo "new-token" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-tok **Solutions**: -```text +```bash # Check remote URL cd workspace_my-workspace git remote -v @@ -579,7 +579,7 @@ git remote set-url origin git@localhost:workspaces/my-workspace.git **Solutions**: -```text +```bash # Check active locks provisioning gitea lock list my-workspace @@ -596,7 +596,7 @@ provisioning gitea lock force-release my-workspace 42 --reason "Stale lock" **Solutions**: -```text +```bash # Check extension structure ls -la 
extensions/taskservs/myservice/ # Required: @@ -618,7 +618,7 @@ cat extensions/taskservs/myservice/schemas/manifest.toml **Solutions**: -```text +```bash # Fix data directory permissions sudo chown -R 1000:1000 ~/.provisioning/gitea @@ -668,7 +668,7 @@ provisioning gitea start Edit `docker-compose.yml`: -```text +```yaml services: gitea: image: gitea/gitea:1.21 @@ -684,7 +684,7 @@ services: Configure webhooks for automated workflows: -```text +```nickel import provisioning.gitea as gitea _webhook = gitea.GiteaWebhook { @@ -696,7 +696,7 @@ _webhook = gitea.GiteaWebhook { ### Batch Extension Publishing -```text +```bash # Publish all taskservs with same version provisioning gitea extension publish-batch ./extensions/taskservs diff --git a/docs/src/integration/integrations-quickstart.md b/docs/src/integration/integrations-quickstart.md index d34d4d4..b2881eb 100644 --- a/docs/src/integration/integrations-quickstart.md +++ b/docs/src/integration/integrations-quickstart.md @@ -26,7 +26,7 @@ Four integrated feature sets: ### 🏃 30-Second Test -```text +```bash # 1.
Check what runtimes you have available provisioning runtime list @@ -39,7 +39,7 @@ provisioning runtime info **Expected Output**: -```text +```bash Available runtimes: • docker • podman @@ -55,7 +55,7 @@ Automatically detects and uses Docker, Podman, OrbStack, Colima, or nerdctl - wh ### Commands -```text +```bash # Detect available runtime provisioning runtime detect # Output: "Detected runtime: docker" @@ -81,7 +81,7 @@ provisioning runtime compose ./docker-compose.yml **Use Case 1: Works on macOS with OrbStack, Linux with Docker** -```text +```bash # User on macOS with OrbStack $ provisioning runtime exec "docker run -it ubuntu bash" # Automatically uses orbctl (OrbStack) @@ -93,7 +93,7 @@ $ provisioning runtime exec "docker run -it ubuntu bash" **Use Case 2: Run docker-compose with detected runtime** -```text +```bash # Detect and run compose $ compose_cmd=$(provisioning runtime compose ./docker-compose.yml) $ eval $compose_cmd up -d @@ -120,7 +120,7 @@ Advanced SSH with connection pooling (90% faster), circuit breaker for fault iso ### Commands -```text +```bash # Create SSH pool connection to host provisioning ssh pool connect server.example.com root --port 22 --timeout 30 @@ -149,7 +149,7 @@ provisioning ssh circuit-breaker ### Example: Multi-Host Deployment -```text +```bash # Set up SSH pool provisioning ssh pool connect srv01.example.com root provisioning ssh pool connect srv02.example.com root @@ -165,7 +165,7 @@ provisioning ssh pool status ### Retry Strategies -```text +```bash # Exponential backoff: 100 ms, 200 ms, 400 ms, 800 ms... provisioning ssh retry-config exponential --max-retries 5 @@ -186,7 +186,7 @@ Multi-backend backup management with Restic, BorgBackup, Tar, or Rsync. 
Supports ### Commands -```text +```bash # Create backup job provisioning backup create daily-backup /data /var/lib --backend restic @@ -222,7 +222,7 @@ provisioning backup status backup-job-001 ### Example: Automated Daily Backups to S3 -```text +```bash # Create backup configuration provisioning backup create app-backup /opt/myapp /var/lib/myapp --backend restic @@ -244,7 +244,7 @@ provisioning backup list ### Dry-Run (Test First) -```text +```bash # Test backup without actually creating it provisioning backup create test-backup /data --check @@ -262,7 +262,7 @@ Automatically trigger deployments from Git events (push, PR, webhook, scheduled) ### Commands -```text +```bash # Load GitOps rules from configuration file provisioning gitops rules ./gitops-rules.yaml @@ -288,7 +288,7 @@ provisioning gitops status **File: `gitops-rules.yaml`** -```text +```yaml rules: - name: deploy-prod provider: github @@ -316,7 +316,7 @@ rules: **Then:** -```text +```bash # Load rules provisioning gitops rules ./gitops-rules.yaml @@ -337,7 +337,7 @@ Install, start, stop, and manage services across systemd (Linux), launchd (macOS ### Commands -```text +```bash # Install service provisioning service install myapp /usr/local/bin/myapp --user myapp @@ -366,7 +366,7 @@ provisioning service detect-init ### Example: Install Custom Service -```text +```bash # On Linux (systemd) provisioning service install provisioning-worker /usr/local/bin/provisioning-worker @@ -390,7 +390,7 @@ provisioning service status provisioning-worker ### Workflow 1: Multi-Platform Deployment -```text +```bash # Works on macOS with OrbStack, Linux with Docker, etc. 
provisioning runtime detect                      # Detects your platform provisioning runtime exec "docker ps"            # Uses your runtime @@ -398,7 +398,7 @@ provisioning runtime exec "docker ps"            # Uses your runtime ### Workflow 2: Large-Scale SSH Operations -```text +```bash # Connect to multiple servers for host in srv01 srv02 srv03; do   provisioning ssh pool connect $host.example.com root @@ -413,7 +413,7 @@ provisioning ssh pool exec [srv01, srv02, srv03] ### Workflow 3: Automated Backups -```text +```bash # Create backup job provisioning backup create daily /opt/app /data --backend restic @@ -428,7 +428,7 @@ provisioning backup list ### Workflow 4: Continuous Deployment from Git -```text +```bash # Define rules in YAML cat > gitops-rules.yaml << 'EOF' rules: @@ -456,7 +456,7 @@ provisioning gitops watch --provider github All integrations support Nickel schemas for advanced configuration: -```text +```nickel let { IntegrationConfig } = import "provisioning/integrations.ncl" in { integrations = { @@ -499,7 +499,7 @@ let { IntegrationConfig } = import "provisioning/integrations.ncl" in All major operations support `--check` for testing: -```text +```bash provisioning runtime exec "systemctl restart app" --check # Output: Would execute: [docker exec ...]
@@ -514,7 +514,7 @@ provisioning gitops trigger deploy-test --check Some commands support JSON output: -```text +```bash provisioning runtime list --out json provisioning backup list --out json provisioning gitops deployments --out json @@ -524,7 +524,7 @@ provisioning gitops deployments --out json Chain commands in shell scripts: -```text +```bash #!/bin/bash # Detect runtime and use it @@ -551,7 +551,7 @@ provisioning gitops status **Solution**: Install Docker, Podman, or OrbStack: -```text +```bash # macOS brew install orbstack @@ -566,7 +566,7 @@ provisioning runtime detect **Solution**: Check port and timeout settings: -```text +```bash # Use different port provisioning ssh pool connect server.example.com root --port 2222 @@ -578,7 +578,7 @@ provisioning ssh pool connect server.example.com root --timeout 60 **Solution**: Check permissions on backup path: -```text +```bash # Check if user can read target paths ls -l /data # Should be readable @@ -602,7 +602,7 @@ sudo provisioning backup create mybak /data --backend restic ## 🆘 Need Help -```text +```bash # General help provisioning help integrations diff --git a/docs/src/integration/oci-registry-guide.md b/docs/src/integration/oci-registry-guide.md index b66bd79..4847f32 100644 --- a/docs/src/integration/oci-registry-guide.md +++ b/docs/src/integration/oci-registry-guide.md @@ -39,7 +39,7 @@ applications, OCI artifacts can contain any type of content - in our case, provi Install one of the following OCI tools: -```text +```bash # ORAS (recommended) brew install oras @@ -52,7 +52,7 @@ brew install skopeo ### 1. Start Local OCI Registry (Development) -```text +```bash # Start lightweight OCI registry (Zot) provisioning oci-registry start @@ -62,7 +62,7 @@ curl http://localhost:5000/v2/_catalog ### 2. Pull an Extension -```text +```bash # Pull Kubernetes extension from registry provisioning oci pull kubernetes:1.28.0 @@ -74,7 +74,7 @@ provisioning oci pull kubernetes:1.28.0 ### 3.
List Available Extensions -```text +```bash # List all extensions provisioning oci list @@ -89,7 +89,7 @@ provisioning oci tags kubernetes Edit `workspace/config/provisioning.yaml`: -```text +```yaml dependencies: extensions: source_type: "oci" @@ -107,7 +107,7 @@ dependencies: ### 5. Resolve Dependencies -```text +```bash # Resolve and install all dependencies provisioning dep resolve @@ -126,7 +126,7 @@ provisioning dep tree kubernetes **Download extension from OCI registry** -```text +```bash provisioning oci pull : [OPTIONS] # Examples: @@ -148,7 +148,7 @@ provisioning oci pull postgres:15.0 --insecure # Skip TLS verification **Publish extension to OCI registry** -```text +```bash provisioning oci push [OPTIONS] # Examples: @@ -173,7 +173,7 @@ provisioning oci push ./my-provider aws 2.1.0 --registry localhost:5000 **Show available extensions in registry** -```text +```bash provisioning oci list [OPTIONS] # Examples: @@ -184,7 +184,7 @@ provisioning oci list --registry harbor.company.com **Output**: -```text +```bash ┬───────────────┬──────────────────┬─────────────────────────┬─────────────────────────────────────────────┐ │ name │ registry │ namespace │ reference │ ├───────────────┼──────────────────┼─────────────────────────┼─────────────────────────────────────────────┤ @@ -200,7 +200,7 @@ provisioning oci list --registry harbor.company.com **Search for extensions matching query** -```text +```bash provisioning oci search [OPTIONS] # Examples: @@ -215,7 +215,7 @@ provisioning oci search "container-*" **Display all available versions of an extension** -```text +```bash provisioning oci tags [OPTIONS] # Examples: @@ -225,7 +225,7 @@ provisioning oci tags redis --registry harbor.company.com **Output**: -```text +```bash ┬────────────┬─────────┬──────────────────────────────────────────────────────┐ │ artifact │ version │ reference │ ├────────────┼─────────┼──────────────────────────────────────────────────────┤ @@ -241,7 +241,7 @@ provisioning oci tags redis 
--registry harbor.company.com **Show detailed manifest and metadata** -```text +```bash provisioning oci inspect : [OPTIONS] # Examples: @@ -251,7 +251,7 @@ provisioning oci inspect redis:7.0.0 --format json **Output**: -```text +```bash name: kubernetes type: taskserv version: 1.28.0 @@ -272,7 +272,7 @@ platforms: **Authenticate with OCI registry** -```text +```bash provisioning oci login [OPTIONS] # Examples: @@ -296,7 +296,7 @@ provisioning oci login registry.io --token-file ~/.provisioning/tokens/registry **Remove stored credentials** -```text +```bash provisioning oci logout # Example: @@ -309,7 +309,7 @@ provisioning oci logout harbor.company.com **Remove extension from registry** -```text +```bash provisioning oci delete : [OPTIONS] # Examples: @@ -331,7 +331,7 @@ provisioning oci delete redis:6.0.0 --force # Skip confirmation **Copy extension between registries** -```text +```bash provisioning oci copy [OPTIONS] # Examples: @@ -352,7 +352,7 @@ provisioning oci copy **Display current OCI settings** -```text +```toml provisioning oci config # Output: @@ -376,7 +376,7 @@ provisioning oci config Dependencies are configured in `workspace/config/provisioning.yaml`: -```text +```yaml dependencies: # Core provisioning system core: @@ -415,7 +415,7 @@ dependencies: ### Resolve Dependencies -```text +```bash # Resolve and install all configured dependencies provisioning dep resolve @@ -428,7 +428,7 @@ provisioning dep resolve --update # Update to latest versions ### Check for Updates -```text +```bash # Check all dependencies for updates provisioning dep check-updates @@ -444,7 +444,7 @@ provisioning dep check-updates ### Update Dependency -```text +```bash # Update specific extension to latest version provisioning dep update kubernetes @@ -454,7 +454,7 @@ provisioning dep update kubernetes --version 1.29.0 ### Dependency Tree -```text +```bash # Show dependency tree for extension provisioning dep tree kubernetes @@ -468,7 +468,7 @@ kubernetes:1.28.0 ### Validate 
Dependencies -```text +```bash # Validate dependency graph (check for cycles, conflicts) provisioning dep validate @@ -482,7 +482,7 @@ provisioning dep validate kubernetes ### Create New Extension -```text +```bash # Generate extension from template provisioning generate extension taskserv redis @@ -508,7 +508,7 @@ provisioning generate extension taskserv redis Edit `manifest.yaml`: -```text +```yaml name: redis type: taskserv version: 1.0.0 @@ -535,7 +535,7 @@ min_provisioning_version: "3.0.0" ### Test Extension Locally -```text +```bash # Load extension from local path provisioning module load taskserv workspace_dev redis --source local @@ -548,7 +548,7 @@ provisioning test extension redis ### Validate Extension -```text +```bash # Validate extension structure provisioning oci package validate ./extensions/taskservs/redis @@ -560,7 +560,7 @@ Warnings: ### Package Extension -```text +```bash # Package as OCI artifact provisioning oci package ./extensions/taskservs/redis @@ -572,7 +572,7 @@ provisioning oci inspect-artifact redis-1.0.0.tar.gz ### Publish Extension -```text +```bash # Login to registry (one-time) provisioning oci login localhost:5000 @@ -594,7 +594,7 @@ echo "Published: oci://localhost:5000/provisioning-extensions/redis:1.0.0" **Using Zot (lightweight)**: -```text +```bash # Start Zot registry provisioning oci-registry start @@ -613,7 +613,7 @@ provisioning oci-registry status **Manual Zot Setup**: -```text +```bash # Install Zot brew install project-zot/tap/zot @@ -685,7 +685,7 @@ zot serve zot-config.json **Solution**: -```text +```bash # Install ORAS (recommended) brew install oras @@ -704,7 +704,7 @@ brew install skopeo **Solution**: -```text +```bash # Check if registry is running curl http://localhost:5000/v2/_catalog @@ -720,7 +720,7 @@ provisioning oci-registry start **Solution**: -```text +```bash # For development, use --insecure flag provisioning oci pull kubernetes:1.28.0 --insecure @@ -740,7 +740,7 @@ provisioning oci pull 
kubernetes:1.28.0 --insecure **Solution**: -```text +```bash # Login to registry provisioning oci login localhost:5000 @@ -791,7 +791,7 @@ provisioning oci login localhost:5000 **Solution**: -```text +```bash # Validate dependency graph provisioning dep validate kubernetes @@ -809,7 +809,7 @@ provisioning dep tree kubernetes ✅ **DO**: Pin to specific versions in production -```text +```yaml modules: taskservs: - "oci://registry/kubernetes:1.28.0" # Specific version @@ -817,7 +817,7 @@ modules: ❌ **DON'T**: Use `latest` tag in production -```text +```yaml modules: taskservs: - "oci://registry/kubernetes:latest" # Unpredictable @@ -843,7 +843,7 @@ modules: ✅ **DO**: Specify version constraints -```text +```yaml dependencies: containerd: ">=1.7.0" etcd: "^3.5.0" # 3.5.x compatible @@ -851,7 +851,7 @@ dependencies: ❌ **DON'T**: Leave dependencies unversioned -```text +```yaml dependencies: containerd: "*" # Too permissive ``` diff --git a/docs/src/integration/oci-registry-platform.md b/docs/src/integration/oci-registry-platform.md index c60eec3..ce38fda 100644 --- a/docs/src/integration/oci-registry-platform.md +++ b/docs/src/integration/oci-registry-platform.md @@ -25,7 +25,7 @@ Comprehensive OCI (Open Container Initiative) registry deployment and management ### Start Zot Registry (Default) -```text +```bash cd provisioning/platform/oci-registry/zot docker-compose up -d @@ -38,7 +38,7 @@ open http://localhost:5000 ### Start Harbor Registry -```text +```bash cd provisioning/platform/oci-registry/harbor docker-compose up -d sleep 120 # Wait for services @@ -64,7 +64,7 @@ open http://localhost ### Nushell Commands -```text +```nushell # Start registry nu -c "use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry start --type zot" @@ -83,7 +83,7 @@ nu -c "use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry n ### Docker Compose -```text +```bash # Start docker-compose up -d @@ -115,14 +115,14 @@ docker-compose down -v **Zot/Distribution
(htpasswd)**: -```text +```bash htpasswd -Bc htpasswd provisioning docker login localhost:5000 ``` **Harbor (Database)**: -```text +```bash docker login localhost # Username: admin / Password: Harbor12345 ``` @@ -131,7 +131,7 @@ docker login localhost ### Health Checks -```text +```bash # API check curl http://localhost:5000/v2/ @@ -143,13 +143,13 @@ curl http://localhost:5000/v2/_catalog **Zot**: -```text +```bash curl http://localhost:5000/metrics ``` **Harbor**: -```text +```bash curl http://localhost:9090/metrics ``` diff --git a/docs/src/integration/secrets-service-layer-complete.md b/docs/src/integration/secrets-service-layer-complete.md index d5fa428..a100bcd 100644 --- a/docs/src/integration/secrets-service-layer-complete.md +++ b/docs/src/integration/secrets-service-layer-complete.md @@ -29,7 +29,7 @@ tokens, provider credentials) through a REST API controlled by **Cedar policies* ### 1. Register the workspace `librecloud` -```text +```bash # Register workspace provisioning workspace register librecloud /Users/Akasha/project-provisioning/workspace_librecloud @@ -40,7 +40,7 @@ provisioning workspace active ### 2. Create your first database secret -```text +```bash # Create PostgreSQL credential provisioning secrets create database postgres --workspace librecloud @@ -54,14 +54,14 @@ provisioning secrets create database postgres ### 3. Retrieve the secret -```text +```bash # Get credential (requires Cedar authorization) provisioning secrets get librecloud/wuji/postgres/admin_password ``` ### 4. 
List secrets by domain -```text +```bash # List all PostgreSQL secrets provisioning secrets list --workspace librecloud --domain postgres @@ -79,7 +79,7 @@ provisioning secrets list --workspace librecloud --infra wuji **REST Endpoint**: -```text +```bash POST /api/v1/secrets/database Content-Type: application/json @@ -97,7 +97,7 @@ Content-Type: application/json **CLI Command**: -```text +```bash provisioning secrets create database postgres --workspace librecloud --infra wuji @@ -110,7 +110,7 @@ provisioning secrets create database postgres **Result**: Secret stored in SurrealDB with KMS encryption -```text +```bash ✓ Secret created: librecloud/wuji/postgres/admin_password Workspace: librecloud Infrastructure: wuji @@ -123,7 +123,7 @@ provisioning secrets create database postgres **REST API**: -```text +```bash POST /api/v1/secrets/application { "workspace_id": "librecloud", @@ -135,7 +135,7 @@ POST /api/v1/secrets/application **CLI**: -```text +```bash provisioning secrets create app myapp-web --workspace librecloud --domain web @@ -147,7 +147,7 @@ provisioning secrets create app myapp-web **REST API**: -```text +```bash GET /api/v1/secrets/list?workspace=librecloud&domain=postgres Response: @@ -167,7 +167,7 @@ Response: **CLI**: -```text +```bash # All workspace secrets provisioning secrets list --workspace librecloud @@ -182,7 +182,7 @@ provisioning secrets list --workspace librecloud --infra wuji **REST API**: -```text +```bash GET /api/v1/secrets/librecloud/wuji/postgres/admin_password Requires: @@ -193,7 +193,7 @@ Requires: **CLI**: -```text +```bash # Get full secret provisioning secrets get librecloud/wuji/postgres/admin_password @@ -213,7 +213,7 @@ provisioning secrets get librecloud/wuji/postgres/admin_password **Use Case**: Temporary server access (max 24 hours) -```text +```bash # Generate temporary SSH key (TTL 2 hours) provisioning secrets create ssh --workspace librecloud @@ -240,7 +240,7 @@ provisioning secrets create ssh **Use Case**: 
Long-duration infrastructure keys -```text +```bash # Create permanent SSH key (stored in DB) provisioning secrets create ssh --workspace librecloud @@ -259,7 +259,7 @@ provisioning secrets create ssh **UpCloud API (Temporal)**: -```text +```bash provisioning secrets create provider upcloud --workspace librecloud --roles "server,network,storage" @@ -274,7 +274,7 @@ provisioning secrets create provider upcloud **UpCloud API (Permanent)**: -```text +```bash provisioning secrets create provider upcloud --workspace librecloud --roles "server,network" @@ -304,7 +304,7 @@ provisioning secrets create provider upcloud **Force Immediate Rotation**: -```text +```bash # Force rotation now provisioning secrets rotate librecloud/wuji/postgres/admin_password @@ -318,7 +318,7 @@ provisioning secrets rotate librecloud/wuji/postgres/admin_password **Check Rotation Status**: -```text +```bash GET /api/v1/secrets/{path}/rotation-status Response: @@ -336,7 +336,7 @@ Response: System automatically runs rotations every hour: -```text +```bash ┌─────────────────────────────────┐ │ Rotation Job Scheduler │ │ - Interval: 1 hour │ @@ -357,7 +357,7 @@ System automatically runs rotations every hour: **Check Scheduler Status**: -```text +```bash provisioning secrets scheduler status # Result: @@ -375,7 +375,7 @@ provisioning secrets scheduler status **Scenario**: Share DB credential between `librecloud` and `staging` -```text +```bash # REST API POST /api/v1/secrets/{path}/grant @@ -401,7 +401,7 @@ POST /api/v1/secrets/{path}/grant **CLI**: -```text +```bash provisioning secrets grant --secret librecloud/wuji/postgres/admin_password --target-workspace staging @@ -416,7 +416,7 @@ provisioning secrets grant #### Revoke a Grant -```text +```bash # Revoke access immediately POST /api/v1/secrets/grant/{grant_id}/revoke { @@ -434,7 +434,7 @@ provisioning secrets revoke-grant grant-12345 #### List Grants -```text +```bash # All workspace grants GET /api/v1/secrets/grants?workspace=librecloud @@ 
-460,7 +460,7 @@ GET /api/v1/secrets/grants?workspace=librecloud #### Dashboard Metrics -```text +```bash GET /api/v1/secrets/monitoring/dashboard Response: @@ -495,7 +495,7 @@ Response: **CLI**: -```text +```bash provisioning secrets monitoring dashboard # ✓ Secrets Dashboard - Librecloud @@ -516,7 +516,7 @@ provisioning secrets monitoring dashboard #### Expiring Secrets Alerts -```text +```bash GET /api/v1/secrets/monitoring/expiring?days=7 Response: @@ -541,7 +541,7 @@ All operations are protected by **Cedar policies**: ### Example Policy: Production Secret Access -```text +```cedar // Requires MFA for production secrets @id("prod-secret-access-mfa") permit ( @@ -566,7 +566,7 @@ permit ( ### Verify Authorization -```text +```bash # Test Cedar decision provisioning policies check alice can access secret:librecloud/postgres/password @@ -585,7 +585,7 @@ provisioning policies check alice can access secret:librecloud/postgres/password ### Secret in Database -```text +```bash -- Table vault_secrets (SurrealDB) { id: "secret:uuid123", @@ -616,7 +616,7 @@ provisioning policies check alice can access secret:librecloud/postgres/password ### Secret Hierarchy -```text +```bash librecloud (Workspace) ├── wuji (Infrastructure) │ ├── postgres (Domain) @@ -644,7 +644,7 @@ librecloud (Workspace) ### Workflow 1: Create and Rotate Database Credential -```text +```bash 1. Admin creates credential POST /api/v1/secrets/database @@ -677,7 +677,7 @@ librecloud (Workspace) ### Workflow 2: Share Secret Between Workspaces -```text +```bash 1. Admin of librecloud creates grant POST /api/v1/secrets/{path}/grant @@ -709,7 +709,7 @@ librecloud (Workspace) ### Workflow 3: Access Temporal SSH Secret -```text +```bash 1. User requests temporary SSH key POST /api/v1/secrets/ssh {ttl: "2h"} @@ -741,7 +741,7 @@ librecloud (Workspace) ### Example 1: Manage PostgreSQL Secrets -```text +```bash # 1.
Create credential provisioning secrets create database postgres --workspace librecloud @@ -773,7 +773,7 @@ provisioning secrets monitoring dashboard | grep postgres ### Example 2: Temporary SSH Access -```text +```bash # 1. Generate temporary SSH key (4 hours) provisioning secrets create ssh --workspace librecloud @@ -796,7 +796,7 @@ ssh -i ~/.ssh/web01_temp ubuntu@web01.librecloud.internal ### Example 3: CI/CD Integration -```text +```yaml # GitLab CI / GitHub Actions jobs: deploy: @@ -835,7 +835,7 @@ jobs: ### Audit -```text +```json { "timestamp": "2025-12-06T10:30:45Z", "user_id": "alice", @@ -855,7 +855,7 @@ jobs: ### All 25 Integration Tests Passing -```text +```bash ✅ Phase 3.1: Rotation Scheduler (9 tests) - Schedule creation - Status transitions @@ -882,7 +882,7 @@ jobs: **Execution**: -```text +```bash cargo test --test secrets_phases_integration_test test result: ok. 25 passed; 0 failed @@ -897,7 +897,7 @@ test result: ok. 25 passed; 0 failed **Cause**: User lacks permissions in policy **Solution**: -```text +```bash # Check user and permission provisioning policies check $USER can access secret:librecloud/postgres/admin_password @@ -916,7 +916,7 @@ provisioning secrets grant **Cause**: Typo in path or workspace doesn't exist **Solution**: -```text +```bash # List available secrets provisioning secrets list --workspace librecloud @@ -932,7 +932,7 @@ provisioning workspace switch librecloud **Cause**: Operation requires MFA but not verified **Solution**: -```text +```bash # Check MFA status provisioning auth status diff --git a/docs/src/integration/service-mesh-ingress-guide.md b/docs/src/integration/service-mesh-ingress-guide.md index c628269..75e8e8d 100644 --- a/docs/src/integration/service-mesh-ingress-guide.md +++ b/docs/src/integration/service-mesh-ingress-guide.md @@ -75,7 +75,7 @@ Handles **North-South traffic** (external to internal): **Installation**: -```text +```bash provisioning taskserv create istio ``` @@ -131,7 +131,7 @@ provisioning
taskserv create istio **Installation**: -```text +```bash # Linkerd requires cert-manager provisioning taskserv create cert-manager provisioning taskserv create linkerd @@ -222,13 +222,13 @@ provisioning taskserv create nginx-ingress # Or traefik/contour **Installation**: -```text +```bash provisioning taskserv create nginx-ingress ``` **With Linkerd**: -```text +```bash provisioning taskserv create linkerd provisioning taskserv create nginx-ingress ``` @@ -278,13 +278,13 @@ provisioning taskserv create nginx-ingress **Installation**: -```text +```bash provisioning taskserv create traefik ``` **With Linkerd**: -```text +```bash provisioning taskserv create linkerd provisioning taskserv create traefik ``` @@ -330,7 +330,7 @@ provisioning taskserv create traefik **Installation**: -```text +```bash provisioning taskserv create contour ``` @@ -378,7 +378,7 @@ provisioning taskserv create contour **Why**: Lightweight mesh + proven ingress = great balance -```text +```bash provisioning taskserv create cert-manager provisioning taskserv create linkerd provisioning taskserv create nginx-ingress @@ -401,7 +401,7 @@ provisioning taskserv create nginx-ingress **Why**: All-in-one service mesh with built-in gateway -```text +```bash provisioning taskserv create istio ``` @@ -423,7 +423,7 @@ provisioning taskserv create istio **Why**: Lightweight mesh + modern ingress -```text +```bash provisioning taskserv create cert-manager provisioning taskserv create linkerd provisioning taskserv create traefik @@ -441,7 +441,7 @@ provisioning taskserv create traefik **Why**: Just get traffic in without service mesh -```text +```bash provisioning taskserv create nginx-ingress ``` @@ -499,7 +499,7 @@ This is the recommended configuration for most deployments - lightweight and pro **File**: `workspace/infra/my-cluster/taskservs/cert-manager.ncl` -```text +```nickel import provisioning.extensions.taskservs.infrastructure.cert_manager as cm # Cert-manager is required for Linkerd's mTLS 
certificates @@ -511,7 +511,7 @@ _taskserv = cm.CertManager { **File**: `workspace/infra/my-cluster/taskservs/linkerd.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.linkerd as linkerd # Lightweight service mesh with minimal overhead @@ -541,7 +541,7 @@ _taskserv = linkerd.Linkerd { **File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.nginx_ingress as nginx # Battle-tested ingress controller @@ -568,7 +568,7 @@ _taskserv = nginx.NginxIngress { #### Step 2: Deploy Service Mesh Components -```text +```bash # Install cert-manager (prerequisite for Linkerd) provisioning taskserv create cert-manager @@ -587,7 +587,7 @@ kubectl get deploy -n ingress-nginx **File**: `workspace/infra/my-cluster/clusters/web-api.ncl` -```text +```nickel import provisioning.kcl.k8s_deploy as k8s import provisioning.extensions.taskservs.networking.nginx_ingress as nginx @@ -652,7 +652,7 @@ service = k8s.K8sDeploy { **File**: `workspace/infra/my-cluster/ingress/web-api-ingress.yaml` -```text +```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: @@ -690,7 +690,7 @@ Complete service mesh with built-in ingress gateway. **File**: `workspace/infra/my-cluster/taskservs/istio.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.istio as istio # Full-featured service mesh @@ -730,7 +730,7 @@ _taskserv = istio.Istio { #### Step 2: Deploy Istio -```text +```bash # Install Istio provisioning taskserv create istio @@ -742,7 +742,7 @@ istioctl verify-install **File**: `workspace/infra/my-cluster/clusters/api-service.ncl` -```text +```nickel import provisioning.kcl.k8s_deploy as k8s service = k8s.K8sDeploy { @@ -821,7 +821,7 @@ Lightweight mesh with modern ingress controller and automatic TLS. 
**File**: `workspace/infra/my-cluster/taskservs/linkerd.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.linkerd as linkerd _taskserv = linkerd.Linkerd { @@ -834,7 +834,7 @@ _taskserv = linkerd.Linkerd { **File**: `workspace/infra/my-cluster/taskservs/traefik.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.traefik as traefik # Modern ingress with middleware and auto-TLS @@ -862,7 +862,7 @@ _taskserv = traefik.Traefik { #### Step 2: Deploy -```text +```bash provisioning taskserv create cert-manager provisioning taskserv create linkerd provisioning taskserv create traefik @@ -872,7 +872,7 @@ provisioning taskserv create traefik **File**: `workspace/infra/my-cluster/ingress/api-route.yaml` -```text +```yaml apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: @@ -903,7 +903,7 @@ For simple deployments that don't need service mesh. **File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.ncl` -```text +```nickel import provisioning.extensions.taskservs.networking.nginx_ingress as nginx _taskserv = nginx.NginxIngress { @@ -915,7 +915,7 @@ _taskserv = nginx.NginxIngress { #### Step 2: Deploy -```text +```bash provisioning taskserv create nginx-ingress ``` @@ -923,7 +923,7 @@ provisioning taskserv create nginx-ingress **File**: `workspace/infra/my-cluster/clusters/simple-app.ncl` -```text +```nickel import provisioning.kcl.k8s_deploy as k8s service = k8s.K8sDeploy { @@ -957,7 +957,7 @@ service = k8s.K8sDeploy { **File**: `workspace/infra/my-cluster/ingress/simple-app-ingress.yaml` -```text +```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: @@ -984,7 +984,7 @@ spec: ### For Linkerd -```text +```bash # Label namespace for automatic sidecar injection kubectl annotate namespace production linkerd.io/inject=enabled @@ -994,7 +994,7 @@ kubectl annotate pod my-pod linkerd.io/inject=enabled ### For Istio -```text +```bash # Label namespace for automatic sidecar injection kubectl label namespace 
production istio-injection=enabled @@ -1008,7 +1008,7 @@ kubectl describe pod -n production | grep istio-proxy ### Linkerd Dashboard -```text +```bash # Open Linkerd Viz dashboard linkerd viz dashboard @@ -1019,7 +1019,7 @@ linkerd viz tap -n production ### Istio Dashboards -```text +```bash # Kiali (service mesh visualization) kubectl port-forward -n istio-system svc/kiali 20000:20000 # http://localhost:20000 @@ -1035,7 +1035,7 @@ kubectl port-forward -n istio-system svc/jaeger-query 16686:16686 ### Traefik Dashboard -```text +```bash # Forward Traefik dashboard kubectl port-forward -n traefik svc/traefik 8080:8080 # http://localhost:8080/dashboard/ @@ -1049,7 +1049,7 @@ kubectl port-forward -n traefik svc/traefik 8080:8080 #### Service Mesh - Istio -```text +```bash # Install Istio (includes built-in ingress gateway) provisioning taskserv create istio @@ -1066,7 +1066,7 @@ kubectl port-forward -n istio-system svc/kiali 20000:20000 #### Service Mesh - Linkerd -```text +```bash # Install cert-manager first (Linkerd requirement) provisioning taskserv create cert-manager @@ -1085,7 +1085,7 @@ linkerd viz dashboard #### Ingress Controllers -```text +```bash # Install Nginx Ingress (most popular) provisioning taskserv create nginx-ingress @@ -1105,7 +1105,7 @@ provisioning taskserv create haproxy-ingress **Lightweight mesh + proven ingress** -```text +```bash # Step 1: Install cert-manager provisioning taskserv create cert-manager @@ -1128,7 +1128,7 @@ kubectl apply -f my-app.yaml **Full-featured service mesh with built-in gateway** -```text +```bash # Install Istio provisioning taskserv create istio @@ -1146,7 +1146,7 @@ kubectl apply -f my-app.yaml **Lightweight mesh + modern ingress with auto TLS** -```text +```bash # Install prerequisites provisioning taskserv create cert-manager @@ -1164,7 +1164,7 @@ kubectl annotate namespace default linkerd.io/inject=enabled **Simple deployments without service mesh** -```text +```bash # Install ingress controller provisioning 
taskserv create nginx-ingress @@ -1176,7 +1176,7 @@ kubectl apply -f ingress.yaml #### Check Linkerd -```text +```bash # Full system check linkerd check @@ -1192,7 +1192,7 @@ linkerd version --server #### Check Istio -```text +```bash # Full system analysis istioctl analyze @@ -1208,7 +1208,7 @@ istioctl version #### Check Ingress Controllers -```text +```bash # List ingress resources kubectl get ingress -A @@ -1228,7 +1228,7 @@ kubectl logs -n traefik deployment/traefik #### Service Mesh Issues -```text +```bash # Linkerd - Check proxy status linkerd check -n @@ -1244,7 +1244,7 @@ istioctl analyze #### Ingress Controller Issues -```text +```bash # Check ingress controller logs kubectl logs -n ingress-nginx deployment/ingress-nginx-controller kubectl logs -n traefik deployment/traefik @@ -1261,7 +1261,7 @@ kubectl get svc -n traefik #### Remove Linkerd -```text +```bash # Remove annotations from namespaces kubectl annotate namespace linkerd.io/inject- --all @@ -1274,7 +1274,7 @@ kubectl delete namespace linkerd #### Remove Istio -```text +```bash # Remove labels from namespaces kubectl label namespace istio-injection- --all @@ -1287,7 +1287,7 @@ kubectl delete namespace istio-system #### Remove Ingress Controllers -```text +```bash # Nginx helm uninstall ingress-nginx -n ingress-nginx kubectl delete namespace ingress-nginx @@ -1301,7 +1301,7 @@ kubectl delete namespace traefik #### Linkerd Resource Limits -```text +```bash # Adjust proxy resource limits in linkerd.ncl _taskserv = linkerd.Linkerd { resources: { @@ -1313,7 +1313,7 @@ _taskserv = linkerd.Linkerd { #### Istio Profile Selection -```text +```bash # Different resource profiles available profile = "default" # Full features (default) profile = "demo" # Demo mode (more resources) @@ -1327,7 +1327,7 @@ profile = "remote" # Control plane only (advanced) After implementing these examples, your workspace should look like: -```text +```bash workspace/infra/my-cluster/ ├── taskservs/ │ ├── cert-manager.ncl # For 
Linkerd mTLS diff --git a/docs/src/operations/break-glass-training-guide.md b/docs/src/operations/break-glass-training-guide.md index 6cf0d74..71f82c2 100644 --- a/docs/src/operations/break-glass-training-guide.md +++ b/docs/src/operations/break-glass-training-guide.md @@ -124,7 +124,7 @@ Use break-glass if **ALL** apply: ### Phase 1: Request (5 minutes) -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ 1. Requester submits emergency access request │ │ - Reason: "Production database cluster down" │ @@ -142,7 +142,7 @@ Use break-glass if **ALL** apply: ### Phase 2: Approval (10-15 minutes) -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ 3. First approver reviews request │ │ - Verifies emergency is real │ @@ -167,7 +167,7 @@ Use break-glass if **ALL** apply: ### Phase 3: Activation (1-2 minutes) -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ 6. Requester activates approved session │ │ - Receives emergency JWT token │ @@ -184,7 +184,7 @@ Use break-glass if **ALL** apply: ### Phase 4: Usage (Variable) -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ 8. Requester performs emergency actions │ │ - Uses emergency token for access │ @@ -202,7 +202,7 @@ Use break-glass if **ALL** apply: ### Phase 5: Revocation (Immediate) -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ 10. Session ends (one of): │ │ - Manual revocation by requester │ @@ -227,7 +227,7 @@ Use break-glass if **ALL** apply: #### 1. Request Emergency Access -```text +```bash provisioning break-glass request "Production database cluster unresponsive" --justification "Need direct SSH access to diagnose PostgreSQL failure. @@ -249,7 +249,7 @@ provisioning break-glass request #### 2. 
Approve Request (Approver) -```text +```bash # First approver (Security team) provisioning break-glass approve BG-20251008-001 --reason "Emergency verified via incident INC-2025-234. Database cluster confirmed down, affecting production." @@ -261,7 +261,7 @@ provisioning break-glass approve BG-20251008-001 # Status: Pending (need 1 more approval) ``` -```text +```bash # Second approver (Platform team) provisioning break-glass approve BG-20251008-001 --reason "Confirmed with monitoring. PostgreSQL master node unreachable. Emergency access justified." @@ -277,7 +277,7 @@ provisioning break-glass approve BG-20251008-001 #### 3. Activate Session -```text +```bash provisioning break-glass activate BG-20251008-001 # Output: @@ -299,7 +299,7 @@ export EMERGENCY_TOKEN="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..." #### 4. Use Emergency Access -```text +```bash # SSH to database server provisioning ssh connect db-master-01 --token $EMERGENCY_TOKEN @@ -314,7 +314,7 @@ sudo tail -f /var/log/postgresql/postgresql.log #### 5. Revoke Session -```text +```bash # When done, immediately revoke provisioning break-glass revoke BGS-20251008-001 --reason "Database cluster restored. PostgreSQL master node restarted successfully. All services online." @@ -368,7 +368,7 @@ provisioning break-glass revoke BGS-20251008-001 **Request**: -```text +```bash provisioning break-glass request "Production PostgreSQL cluster completely unresponsive" --justification "Database cluster (3 nodes) not responding. @@ -404,7 +404,7 @@ provisioning break-glass request **Request**: -```text +```bash provisioning break-glass request "Active security breach detected - need immediate containment" --justification "IDS alerts show unauthorized access from IP 203.0.113.42 to API. 
@@ -441,7 +441,7 @@ provisioning break-glass request **Request**: -```text +```bash provisioning break-glass request "Critical customer data accidentally deleted from production" --justification "Database migration script ran against production instead of staging. @@ -513,7 +513,7 @@ Every break-glass session logs: ### Compliance Reports -```text +```bash # Generate break-glass usage report provisioning break-glass audit --from "2025-01-01" @@ -565,7 +565,7 @@ provisioning break-glass audit **Incident Report**: -```text +```markdown # Break-Glass Incident Report: BG-20251008-001 **Incident**: Production database cluster outage @@ -670,7 +670,7 @@ Always manually revoke when done. **A**: Yes, in **development environment only**: -```text +```bash PROVISIONING_ENV=dev provisioning break-glass request "Test emergency access procedure" ``` diff --git a/docs/src/operations/cedar-policies-production-guide.md b/docs/src/operations/cedar-policies-production-guide.md index aed450e..25eece1 100644 --- a/docs/src/operations/cedar-policies-production-guide.md +++ b/docs/src/operations/cedar-policies-production-guide.md @@ -41,7 +41,7 @@ that balance security with operational efficiency. ### Core Concepts -```text +```cedar permit ( principal, # Who (user, team, role) action, # What (create, delete, deploy) @@ -78,7 +78,7 @@ permit ( #### Level 1: Development (Permissive) -```text +```cedar // Developers have full access to dev environment permit ( principal in Team::"developers", @@ -89,7 +89,7 @@ permit ( #### Level 2: Staging (MFA Required) -```text +```cedar // All operations require MFA permit ( principal in Team::"developers", @@ -102,7 +102,7 @@ permit ( #### Level 3: Production (MFA + Approval) -```text +```cedar // Deployments require MFA + approval permit ( principal in Team::"platform-admin", @@ -117,7 +117,7 @@ permit ( #### Level 4: Critical (Break-Glass Only) -```text +```cedar // Only emergency access permit ( principal, @@ -135,7 +135,7 @@ permit ( ### 1.
Role-Based Access Control (RBAC) -```text +```cedar // Admin: Full access permit ( principal in Role::"Admin", @@ -177,7 +177,7 @@ permit ( ### 2. Team-Based Policies -```text +```cedar // Platform team: Infrastructure management permit ( principal in Team::"platform", @@ -210,7 +210,7 @@ permit ( ### 3. Time-Based Restrictions -```text +```cedar // Deployments only during business hours permit ( principal, @@ -234,7 +234,7 @@ permit ( ### 4. IP-Based Restrictions -```text +```cedar // Production access only from office network permit ( principal, @@ -258,7 +258,7 @@ permit ( ### 5. Resource-Specific Policies -```text +```cedar // Database servers: Extra protection forbid ( principal, @@ -281,7 +281,7 @@ permit ( ### 6. Self-Service Policies -```text +```cedar // Users can manage their own MFA devices permit ( principal, @@ -316,7 +316,7 @@ permit ( **Example Requirements Document**: -```text +```markdown # Requirement: Production Deployment **Who**: DevOps team members @@ -330,7 +330,7 @@ permit ( ### Step 2: Write Policy -```text +```cedar @id("prod-deploy-devops") @description("DevOps can deploy to production during business hours with approval") permit ( @@ -349,7 +349,7 @@ permit ( ### Step 3: Validate Syntax -```text +```bash # Use Cedar CLI to validate cedar validate --policies provisioning/config/cedar-policies/production.cedar @@ -360,7 +360,7 @@ cedar validate ### Step 4: Test in Development -```text +```bash # Deploy to development environment first cp production.cedar provisioning/config/cedar-policies/development.cedar @@ -385,7 +385,7 @@ provisioning server create test-server --check ### Step 6: Deploy to Production -```text +```bash # Backup current policies cp provisioning/config/cedar-policies/production.cedar provisioning/config/cedar-policies/production.cedar.backup.$(date +%Y%m%d) @@ -408,7 +408,7 @@ provisioning cedar list Create test cases for each policy: -```text +```yaml # tests/cedar/prod-deploy-devops.yaml policy_id: prod-deploy-devops @@ -447,7
+447,7 @@ test_cases: Run tests: -```text +```bash provisioning cedar test tests/cedar/ ``` @@ -455,7 +455,7 @@ provisioning cedar test tests/cedar/ Test with real API calls: -```text +```bash # Setup test user export TEST_USER="alice" export TEST_TOKEN=$(provisioning login --user $TEST_USER --output token) @@ -479,7 +479,7 @@ curl -H "Authorization: Bearer $TEST_TOKEN" Verify policy evaluation performance: -```text +```bash # Generate load provisioning cedar bench --policies production.cedar @@ -495,7 +495,7 @@ provisioning cedar bench ### Development → Staging → Production -```text +```bash #!/bin/bash # deploy-policies.sh @@ -529,7 +529,7 @@ echo "✅ Policies deployed to $ENVIRONMENT" ### Rollback Procedure -```text +```bash # List backups ls -ltr provisioning/config/cedar-policies/backups/production/ @@ -550,7 +550,7 @@ provisioning cedar list ### Monitor Authorization Decisions -```text +```bash # Query denied requests (last 24 hours) provisioning audit query --action authorization_denied @@ -568,7 +568,7 @@ provisioning audit query ### Alert on Suspicious Activity -```text +```yaml # alerts/cedar-policies.yaml alerts: - name: "High Denial Rate" @@ -585,7 +585,7 @@ alerts: ### Policy Usage Statistics -```text +```bash # Which policies are most used? provisioning cedar stats --top 10 @@ -632,7 +632,7 @@ provisioning cedar stats --top 10 **Debug**: -```text +```bash # Enable debug mode export PROVISIONING_DEBUG=1 @@ -656,7 +656,7 @@ provisioning audit query - Use `@priority` annotations (higher number = higher priority) - Make policies more specific to avoid conflicts -```text +```cedar @priority(100) permit ( principal in Role::"Admin", @@ -680,7 +680,7 @@ forbid ( ### 1. Start Restrictive, Loosen Gradually -```text +```cedar // ❌ BAD: Too permissive initially permit (principal, action, resource); @@ -694,7 +694,7 @@ permit ( ### 2.
Use Annotations -```text +```cedar @id("prod-deploy-mfa") @description("Production deployments require MFA verification") @owner("platform-team") @@ -713,7 +713,7 @@ permit ( Give users **minimum permissions** needed: -```text +```cedar // ❌ BAD: Overly broad permit (principal in Team::"developers", action, resource); @@ -727,7 +727,7 @@ permit ( ### 4. Document Context Requirements -```text +```cedar // Context required for this policy: // - mfa_verified: boolean (from JWT claims) // - approval_id: string (from request header) @@ -747,7 +747,7 @@ permit ( **File organization**: -```text +```text cedar-policies/ ├── schema.cedar # Entity/action definitions ├── rbac.cedar # Role-based policies @@ -760,7 +760,7 @@ cedar-policies/ ### 6. Version Control -```text +```bash # Git commit each policy change git add provisioning/config/cedar-policies/production.cedar git commit -m "feat(cedar): Add MFA requirement for prod deployments @@ -792,7 +792,7 @@ git push ### Common Policy Patterns -```text +```cedar # Allow all permit (principal, action, resource); @@ -829,7 +829,7 @@ permit ( ### Useful Commands -```text +```bash # Validate policies provisioning cedar validate diff --git a/docs/src/operations/control-center.md b/docs/src/operations/control-center.md index 41fe401..b03c892 100644 --- a/docs/src/operations/control-center.md +++ b/docs/src/operations/control-center.md @@ -45,7 +45,7 @@ A comprehensive Cedar policy engine implementation with advanced security featur ### Installation -```text +```bash cd provisioning/platform/control-center cargo build --release ``` @@ -54,13 +54,13 @@ cargo build --release Copy and edit the configuration: -```text +```bash cp config.toml.example config.toml ``` Configuration example: -```text +```toml [database] url = "surreal://localhost:8000" username = "root" @@ -80,13 +80,13 @@ detection_threshold = 2.5 ### Start Server -```text +```bash ./target/release/control-center server --port 8080 ``` ### Test Policy Evaluation -```text
+```bash curl -X POST http://localhost:8080/policies/evaluate -H "Content-Type: application/json" -d '{ @@ -101,7 +101,7 @@ curl -X POST http://localhost:8080/policies/evaluate ### Multi-Factor Authentication Policy -```text +```cedar permit( principal, action == Action::"access", @@ -116,7 +116,7 @@ permit( ### Production Approval Policy -```text +```cedar permit( principal, action in [Action::"deploy", Action::"modify", Action::"delete"], @@ -131,7 +131,7 @@ permit( ### Geographic Restrictions -```text +```cedar permit( principal, action, @@ -147,7 +147,7 @@ permit( ### Policy Management -```text +```bash # Validate policies control-center policy validate policies/ @@ -160,7 +160,7 @@ control-center policy impact policies/new_policy.cedar ### Compliance Checking -```text +```bash # Check SOC2 compliance control-center compliance soc2 @@ -241,7 +241,7 @@ The system follows PAP (Project Architecture Principles) with: ### Docker -```text +```dockerfile FROM rust:1.75 as builder WORKDIR /app COPY . . @@ -256,7 +256,7 @@ CMD ["control-center", "server"] ### Kubernetes -```text +```yaml apiVersion: apps/v1 kind: Deployment metadata: diff --git a/docs/src/operations/coredns-guide.md b/docs/src/operations/coredns-guide.md index ebe7099..cf38020 100644 --- a/docs/src/operations/coredns-guide.md +++ b/docs/src/operations/coredns-guide.md @@ -51,7 +51,7 @@ The CoreDNS integration provides comprehensive DNS management capabilities for t ### Install CoreDNS Binary -```text +```bash # Install latest version provisioning dns install @@ -66,7 +66,7 @@ The binary will be installed to `~/.provisioning/bin/coredns`.
### Verify Installation -```text +```bash # Check CoreDNS version ~/.provisioning/bin/coredns -version @@ -82,7 +82,7 @@ ls -lh ~/.provisioning/bin/coredns Add CoreDNS configuration to your infrastructure config: -```text +```nickel # In workspace/infra/{name}/config.ncl let coredns_config = { mode = "local", @@ -121,7 +121,7 @@ coredns_config Run CoreDNS as a local binary process: -```text +```nickel let coredns_config = { mode = "local", local = { @@ -136,7 +136,7 @@ coredns_config Run CoreDNS in Docker container: -```text +```nickel let coredns_config = { mode = "local", local = { @@ -155,7 +155,7 @@ coredns_config Connect to external CoreDNS service: -```text +```nickel let coredns_config = { mode = "remote", remote = { @@ -172,7 +172,7 @@ coredns_config Disable CoreDNS integration: -```text +```nickel let coredns_config = { mode = "disabled", } in @@ -185,7 +185,7 @@ coredns_config ### Service Management -```text +```bash # Check status provisioning dns status @@ -216,7 +216,7 @@ provisioning dns logs --lines 100 ### Health & Monitoring -```text +```bash # Check health provisioning dns health @@ -236,14 +236,14 @@ provisioning dns config generate ### List Zones -```text +```bash # List all zones provisioning dns zone list ``` **Output:** -```text +```text DNS Zones ========= • provisioning.local ✓ @@ -252,7 +252,7 @@ DNS Zones ### Create Zone -```text +```bash # Create new zone provisioning dns zone create myapp.local @@ -262,7 +262,7 @@ provisioning dns zone create myapp.local --check ### Show Zone Details -```text +```bash # Show all records in zone provisioning dns zone show provisioning.local @@ -275,7 +275,7 @@ provisioning dns zone show provisioning.local --format yaml ### Delete Zone -```text +```bash # Delete zone (with confirmation) provisioning dns zone delete myapp.local @@ -294,7 +294,7 @@ provisioning dns zone delete myapp.local --check #### A Record (IPv4) -```text +```bash provisioning dns record add server-01 A 10.0.1.10 # With
custom TTL @@ -309,31 +309,31 @@ provisioning dns record add server-01 A 10.0.1.10 --zone myapp.local #### AAAA Record (IPv6) -```text +```bash provisioning dns record add server-01 AAAA 2001:db8::1 ``` #### CNAME Record -```text +```bash provisioning dns record add web CNAME server-01.provisioning.local ``` #### MX Record -```text +```bash provisioning dns record add @ MX mail.example.com --priority 10 ``` #### TXT Record -```text +```bash provisioning dns record add @ TXT "v=spf1 mx -all" ``` ### Remove Records -```text +```bash # Remove record provisioning dns record remove server-01 @@ -346,7 +346,7 @@ provisioning dns record remove server-01 --check ### Update Records -```text +```bash # Update record value provisioning dns record update server-01 A 10.0.1.20 @@ -356,7 +356,7 @@ provisioning dns record update server-01 A 10.0.1.20 --ttl 1800 ### List Records -```text +```bash # List all records in zone provisioning dns record list @@ -372,7 +372,7 @@ provisioning dns record list --format yaml **Example Output:** -```text +```bash DNS Records - Zone: provisioning.local ╭───┬──────────────┬──────┬─────────────┬─────╮ @@ -393,14 +393,14 @@ DNS Records - Zone: provisioning.local Ensure Docker and docker-compose are installed: -```text +```bash docker --version docker-compose --version ``` ### Start CoreDNS in Docker -```text +```bash # Start CoreDNS container provisioning dns docker start @@ -410,7 +410,7 @@ provisioning dns docker start --check ### Manage Docker Container -```text +```bash # Check status provisioning dns docker status @@ -432,7 +432,7 @@ provisioning dns docker health ### Update Docker Image -```text +```bash # Pull latest image provisioning dns docker pull @@ -445,7 +445,7 @@ provisioning dns docker update ### Remove Container -```text +```bash # Remove container (with confirmation) provisioning dns docker remove @@ -461,7 +461,7 @@ provisioning dns docker remove --check ### View Configuration -```text +```toml # Show docker-compose config 
provisioning dns docker config ``` @@ -474,7 +474,7 @@ provisioning dns docker config When dynamic DNS is enabled, servers are automatically registered: -```text +```bash # Create server (automatically registers in DNS) provisioning server create web-01 --infra myapp @@ -483,7 +483,7 @@ provisioning server create web-01 --infra myapp ### Manual Registration -```text +```nushell use lib_provisioning/coredns/integration.nu * # Register server @@ -502,7 +502,7 @@ bulk-register-servers [ ### Sync Infrastructure with DNS -```text +```bash # Sync all servers in infrastructure with DNS provisioning dns sync myapp @@ -512,7 +512,7 @@ provisioning dns sync myapp --check ### Service Registration -```text +```nushell use lib_provisioning/coredns/integration.nu * # Register service @@ -528,7 +528,7 @@ unregister-service-from-dns "api" ### Using CLI -```text +```bash # Query A record provisioning dns query server-01 @@ -544,7 +544,7 @@ provisioning dns query server-01 --server 127.0.0.1 --port 5353 ### Using dig -```text +```bash # Query from local CoreDNS dig @127.0.0.1 -p 5353 server-01.provisioning.local @@ -730,7 +730,7 @@ dig @127.0.0.1 -p 5353 example.com MX Add custom plugins to Corefile: -```text +```nushell use lib_provisioning/coredns/corefile.nu * # Add plugin to zone @@ -742,7 +742,7 @@ add-corefile-plugin ### Backup and Restore -```text +```bash # Backup configuration tar czf coredns-backup.tar.gz ~/.provisioning/coredns/ @@ -752,7 +752,7 @@ tar xzf coredns-backup.tar.gz -C ~/ ### Zone File Backup -```text +```nushell use lib_provisioning/coredns/zones.nu * # Backup zone @@ -765,7 +765,7 @@ backup-zone-file "provisioning.local" CoreDNS exposes Prometheus metrics on port 9153: -```text +```bash # View metrics curl http://localhost:9153/metrics @@ -777,7 +777,7 @@ curl http://localhost:9153/metrics ### Multi-Zone Setup -```text +```nickel coredns_config: CoreDNSConfig = { local = { zones = [ @@ -795,7 +795,7 @@ coredns_config: CoreDNSConfig = { Configure different zones for
internal/external: -```text +```nickel coredns_config: CoreDNSConfig = { local = { zones = ["internal.local"] @@ -856,7 +856,7 @@ coredns_config: CoreDNSConfig = { ### Complete Setup Example -```text +```bash # 1. Install CoreDNS provisioning dns install @@ -884,7 +884,7 @@ provisioning dns health ### Docker Deployment Example -```text +```bash # 1. Start CoreDNS in Docker provisioning dns docker start @@ -935,7 +935,7 @@ provisioning dns docker stop ### Installation -```text +```bash # Install CoreDNS binary provisioning dns install @@ -947,7 +947,7 @@ provisioning dns install 1.11.1 ### Service Management -```text +```bash # Status provisioning dns status @@ -976,7 +976,7 @@ provisioning dns health ### Zone Management -```text +```bash # List zones provisioning dns zone list @@ -996,7 +996,7 @@ provisioning dns zone delete myapp.local --force ### Record Management -```text +```bash # Add A record provisioning dns record add server-01 A 10.0.1.10 @@ -1035,7 +1035,7 @@ provisioning dns record list --format json ### DNS Queries -```text +```bash # Query A record provisioning dns query server-01 @@ -1054,7 +1054,7 @@ dig @127.0.0.1 -p 5353 provisioning.local SOA ### Configuration -```text +```bash # Show configuration provisioning dns config show @@ -1069,7 +1069,7 @@ provisioning dns config generate ### Docker Deployment -```text +```bash # Start Docker container provisioning dns docker start @@ -1111,7 +1111,7 @@ provisioning dns docker config #### Initial Setup -```text +```bash # 1. Install provisioning dns install @@ -1125,7 +1125,7 @@ provisioning dns health #### Add Server -```text +```bash # Add DNS record for new server provisioning dns record add web-01 A 10.0.1.10 @@ -1135,7 +1135,7 @@ provisioning dns query web-01 #### Create Custom Zone -```text +```bash # 1. Create zone provisioning dns zone create myapp.local @@ -1152,7 +1152,7 @@ dig @127.0.0.1 -p 5353 web-01.myapp.local #### Docker Setup -```text +```bash # 1.
Start container provisioning dns docker start @@ -1170,7 +1170,7 @@ dig @127.0.0.1 -p 5353 server-01.provisioning.local ### Troubleshooting -```text +```bash # Check if CoreDNS is running provisioning dns status ps aux | grep coredns @@ -1202,7 +1202,7 @@ docker ps -a | grep coredns ### File Locations -```text +```bash # Binary ~/.provisioning/bin/coredns @@ -1226,7 +1226,7 @@ provisioning/config/coredns/docker-compose.yml ### Configuration Example -```text +```nickel import provisioning.coredns as dns coredns_config: dns.CoreDNSConfig = { @@ -1249,7 +1249,7 @@ coredns_config: dns.CoreDNSConfig = { ### Environment Variables -```text +```bash # None required - configuration via Nickel ``` diff --git a/docs/src/operations/deployment-guide.md b/docs/src/operations/deployment-guide.md index ac537af..4a0986f 100644 --- a/docs/src/operations/deployment-guide.md +++ b/docs/src/operations/deployment-guide.md @@ -53,7 +53,7 @@ Practical guide for deploying the 9-service provisioning platform in any environ ### Directory Structure -```text +```bash # Ensure base directories exist mkdir -p provisioning/schemas/platform mkdir -p provisioning/platform/logs @@ -169,7 +169,7 @@ mkdir -p provisioning/config/runtime ### 1. Clone Repository -```text +```bash git clone https://github.com/your-org/project-provisioning.git cd project-provisioning ``` @@ -178,7 +178,7 @@ cd project-provisioning Choose your mode based on use case: -```text +```bash # For development export DEPLOYMENT_MODE=solo @@ -196,7 +196,7 @@ export DEPLOYMENT_MODE=enterprise All services use mode-specific TOML configs automatically loaded via environment variables: -```text +```bash # Vault Service export VAULT_MODE=$DEPLOYMENT_MODE @@ -215,7 +215,7 @@ export DAEMON_MODE=$DEPLOYMENT_MODE ### 4. Build All Services -```text +```bash # Build all platform crates cargo build --release -p vault-service -p extension-registry @@ -230,7 +230,7 @@ cargo build --release -p vault-service ### 5.
Start Services (Order Matters) -```text +```bash # Start in dependency order: # 1. Core infrastructure (KMS, storage) @@ -257,7 +257,7 @@ cargo run --release -p installer & ### 6. Verify Services -```text +```bash # Check all services are running pgrep -l "vault-service|extension-registry|provisioning-rag|ai-service" @@ -278,7 +278,7 @@ curl http://localhost:8080/health # Control Center ### Step 1: Verify Solo Configuration Files -```text +```bash # Check that solo schemas are available ls -la provisioning/schemas/platform/defaults/deployment/solo-defaults.ncl @@ -292,7 +292,7 @@ ls -la provisioning/schemas/platform/defaults/deployment/solo-defaults.ncl ### Step 2: Set Solo Environment Variables -```text +```bash # Set all services to solo mode export VAULT_MODE=solo export REGISTRY_MODE=solo @@ -306,14 +306,14 @@ echo $VAULT_MODE # Should output: solo ### Step 3: Build Services -```text +```bash # Build in release mode for better performance cargo build --release ``` ### Step 4: Create Local Data Directories -```text +```bash # Create storage directories for solo mode mkdir -p /tmp/provisioning-solo/{vault,registry,rag,ai,daemon} chmod 755 /tmp/provisioning-solo/{vault,registry,rag,ai,daemon} @@ -321,7 +321,7 @@ chmod 755 /tmp/provisioning-solo/{vault,registry,rag,ai,daemon} ### Step 5: Start Services -```text +```bash # Start each service in a separate terminal or use tmux: # Terminal 1: Vault @@ -348,7 +348,7 @@ cargo run --release -p provisioning-daemon ### Step 6: Test Services -```text +```bash # Wait 10-15 seconds for services to start, then test # Check service health @@ -362,7 +362,7 @@ curl -X GET http://localhost:9090/api/v1/health ### Step 7: Verify Persistence (Optional) -```text +```bash # Check that data is stored locally ls -la /tmp/provisioning-solo/vault/ ls -la /tmp/provisioning-solo/registry/ @@ -372,7 +372,7 @@ ls -la /tmp/provisioning-solo/registry/ ### Cleanup -```text +```bash # Stop all services pkill -f "cargo run --release" @@ -394,7
+394,7 @@ rm -rf /tmp/provisioning-solo ### Step 1: Deploy SurrealDB -```text +```bash # Using Docker (recommended) docker run -d --name surrealdb @@ -408,7 +408,7 @@ surreal start --user root --pass root ### Step 2: Verify SurrealDB Connectivity -```text +```bash # Test SurrealDB connection curl -s http://localhost:8000/health @@ -417,7 +417,7 @@ curl -s http://localhost:8000/health ### Step 3: Set Multiuser Environment Variables -```text +```bash # Configure all services for multiuser mode export VAULT_MODE=multiuser export REGISTRY_MODE=multiuser @@ -438,13 +438,13 @@ export RAG_HOST=rag.internal ### Step 4: Build Services -```text +```bash cargo build --release ``` ### Step 5: Create Shared Data Directories -```text +```bash # Create directories on shared storage (NFS, etc.) mkdir -p /mnt/provisioning-data/{vault,registry,rag,ai} chmod 755 /mnt/provisioning-data/{vault,registry,rag,ai} @@ -455,7 +455,7 @@ mkdir -p /var/lib/provisioning/{vault,registry,rag,ai} ### Step 6: Start Services on Multiple Machines -```text +```bash # Machine 1: Infrastructure services ssh ops@machine1 export VAULT_MODE=multiuser @@ -482,7 +482,7 @@ cargo run --release -p provisioning-daemon & ### Step 7: Test Multi-Machine Setup -```text +```bash # From any machine, test cross-machine connectivity curl -s http://machine1:8200/health curl -s http://machine2:8083/health @@ -496,7 +496,7 @@ curl -X POST http://machine3:9090/api/v1/provision ### Step 8: Enable User Access -```text +```bash # Create shared credentials export VAULT_TOKEN=s.xxxxxxxxxxx @@ -510,7 +510,7 @@ export VAULT_MODE=multiuser ### Monitoring Multiuser Deployment -```text +```bash # Check all services are connected to SurrealDB for host in machine1 machine2 machine3 machine4; do ssh ops@$host "curl -s http://localhost/api/v1/health | jq .database_connected" @@ -537,7 +537,7 @@ CICD mode services: ### Step 2: Set CICD Environment Variables -```text +```bash # Use cicd mode for all services export VAULT_MODE=cicd export 
REGISTRY_MODE=cicd @@ -551,7 +551,7 @@ export CI_ENVIRONMENT=true ### Step 3: Containerize Services (Optional) -```text +```dockerfile # Dockerfile for CICD deployments FROM rust:1.75-slim @@ -582,7 +582,7 @@ CMD ["sh", "-c", " ### Step 4: GitHub Actions Example -```text +```yaml name: CICD Platform Deployment on: @@ -645,7 +645,7 @@ jobs: ### Step 5: Run CICD Tests -```text +```bash # Simulate CI environment locally export VAULT_MODE=cicd export CI_ENVIRONMENT=true @@ -684,7 +684,7 @@ cargo test --release #### 1.1 Deploy Etcd Cluster -```text +```bash # Node 1, 2, 3 etcd --name=node-1 --listen-client-urls=http://0.0.0.0:2379 @@ -698,7 +698,7 @@ etcdctl --endpoints=http://localhost:2379 member list #### 1.2 Deploy Load Balancer -```text +```text # HAProxy configuration for vault-service (example) frontend vault_frontend bind *:8200 @@ -715,7 +715,7 @@ backend vault_backend #### 1.3 Configure TLS -```text +```bash # Generate certificates (or use existing) mkdir -p /etc/provisioning/tls @@ -733,7 +733,7 @@ chmod 644 /etc/provisioning/tls/*-cert.pem ### Step 2: Set Enterprise Environment Variables -```text +```bash # All machines: Set enterprise mode export VAULT_MODE=enterprise export REGISTRY_MODE=enterprise @@ -761,7 +761,7 @@ export AUDIT_LOG_ENABLED=true ### Step 3: Deploy Services Across Cluster -```text +```yaml # Ansible playbook (simplified) --- - hosts: provisioning_cluster @@ -789,7 +789,7 @@ export AUDIT_LOG_ENABLED=true ### Step 4: Monitor Cluster Health -```text +```bash # Check cluster status curl -s https://vault.internal:8200/health | jq .state @@ -805,7 +805,7 @@ etcdctl --endpoints=https://node-1.internal:2379 election list ### Step 5: Enable Monitoring & Alerting -```text +```yaml # Prometheus configuration global: scrape_interval: 30s @@ -827,7 +827,7 @@ scrape_configs: ### Step 6: Backup & Recovery -```text +```bash # Daily backup script #!/bin/bash BACKUP_DIR="/mnt/provisioning-backups" @@ -858,7 +858,7 @@ find "$BACKUP_DIR" -mtime +30 -delete ####
Individual Service Startup -```text +```bash # Start one service export VAULT_MODE=enterprise cargo run --release -p vault-service @@ -870,7 +870,7 @@ cargo run --release -p extension-registry #### Batch Startup -```text +```bash # Start all services (dependency order) #!/bin/bash set -e @@ -911,7 +911,7 @@ echo "All services started. PIDs: $VAULT_PID $REGISTRY_PID $RAG_PID $AI_PID $ORC ### Stopping Services -```text +```bash # Stop all services gracefully pkill -SIGTERM -f "cargo run --release -p" @@ -927,7 +927,7 @@ pgrep -f "cargo run --release -p" && echo "Services still running" || echo "All ### Restarting Services -```text +```bash # Restart single service pkill -SIGTERM vault-service sleep 2 @@ -945,7 +945,7 @@ cargo run --release -p vault-service & ### Checking Service Status -```text +```bash # Check running processes pgrep -a "cargo run --release" @@ -968,7 +968,7 @@ done ### Manual Health Verification -```text +```bash # Vault Service curl -s http://localhost:8200/health | jq . # Expected: {"status":"ok","uptime":123.45} @@ -992,7 +992,7 @@ curl -s http://localhost:8080/health | jq . ### Service Integration Tests -```text +```bash # Test vault <-> registry integration curl -X POST http://localhost:8200/api/encrypt -H "Content-Type: application/json" @@ -1020,7 +1020,7 @@ curl -X POST http://localhost:9090/api/v1/provision #### Prometheus Metrics -```text +```bash # Query service uptime curl -s 'http://prometheus:9090/api/v1/query?query=up' | jq . 
@@ -1033,7 +1033,7 @@ curl -s 'http://prometheus:9090/api/v1/query?query=rate(http_errors_total[5m])' #### Log Aggregation -```text +```bash # Follow vault logs tail -f /var/log/provisioning/vault-service.log @@ -1049,7 +1049,7 @@ tail -f /var/log/provisioning/orchestrator.log | grep -E "ERROR|WARN" ### Alerting -```text +```yaml # AlertManager configuration groups: - name: provisioning @@ -1080,7 +1080,7 @@ groups: **Problem**: `error: failed to bind to port 8200` **Solutions**: -```text +```bash # Check if port is in use lsof -i :8200 ss -tlnp | grep 8200 @@ -1098,7 +1098,7 @@ cargo run --release -p vault-service **Problem**: `error: failed to load config from mode file` **Solutions**: -```text +```bash # Verify schemas exist ls -la provisioning/schemas/platform/schemas/vault-service.ncl @@ -1121,7 +1121,7 @@ cargo run --release -p vault-service **Problem**: `error: failed to connect to database` **Solutions**: -```text +```bash # Verify database is running curl http://surrealdb:8000/health etcdctl --endpoints=http://etcd:2379 endpoint health @@ -1144,7 +1144,7 @@ cargo run --release -p vault-service **Problem**: Service exits with code 1 or 139 **Solutions**: -```text +```bash # Run with verbose logging RUST_LOG=debug cargo run -p vault-service 2>&1 | head -50 @@ -1164,7 +1164,7 @@ rust-gdb --args target/release/vault-service **Problem**: Service consuming > expected memory **Solutions**: -```text +```bash # Check memory usage ps aux | grep vault-service | grep -v grep @@ -1184,7 +1184,7 @@ valgrind --leak-check=full target/release/vault-service **Problem**: `error: failed to resolve hostname` **Solutions**: -```text +```bash # Test DNS resolution nslookup vault.internal dig vault.internal @@ -1205,7 +1205,7 @@ netstat -nr **Problem**: Data lost after restart **Solutions**: -```text +```bash # Verify backup exists ls -la /mnt/provisioning-backups/ ls -la /var/lib/provisioning/ @@ -1225,7 +1225,7 @@ chmod 755 /var/lib/provisioning/vault/* When troubleshooting,
use this systematic approach: -```text +```bash # 1. Check service is running pgrep -f vault-service || echo "Service not running" @@ -1258,7 +1258,7 @@ free -h && df -h && top -bn1 | head -10 ### Updating Service Configuration -```text +```bash # 1. Edit the schema definition vim provisioning/schemas/platform/schemas/vault-service.ncl @@ -1282,7 +1282,7 @@ curl http://localhost:8200/api/config | jq . ### Mode Migration -```text +```bash # Migrate from solo to multiuser: # 1. Stop services @@ -1342,7 +1342,7 @@ Before deploying to production: ### Useful Commands Reference -```text +```bash # View all available commands cargo run -- --help diff --git a/docs/src/operations/incident-response-runbooks.md b/docs/src/operations/incident-response-runbooks.md index 6cff99b..e6a8358 100644 --- a/docs/src/operations/incident-response-runbooks.md +++ b/docs/src/operations/incident-response-runbooks.md @@ -34,13 +34,13 @@ ### Phase 1: Immediate Response (0-2 minutes) **1.1 Acknowledge alert** -```text +```bash # In PagerDuty/Slack # React with 🚨 to acknowledge you're investigating ``` **1.2 Verify service is actually down** -```text +```bash # Check if service is running pgrep -a service-name ps aux | grep service-name | grep -v grep @@ -55,7 +55,7 @@ curl -v http://SERVICE_HOSTNAME:PORT/health ``` **1.3 Check recent events** -```text +```bash # View last 20 log lines tail -20 /var/log/provisioning/service-name.log @@ -70,7 +70,7 @@ journalctl -u provisioning-service-name -n 50 ### Phase 2: Diagnosis (2-5 minutes) **2.1 Determine if service crashed or hung** -```text +```bash # Check process exists pgrep service-name && echo "Process running" || echo "Process crashed" @@ -83,7 +83,7 @@ ps aux | grep service-name | grep -v grep | awk '{print $3, $6}' ``` **2.2 Check dependencies** -```text +```bash # Test database connectivity (if applicable) curl http://surrealdb:8000/health etcdctl --endpoints=http://etcd:2379 endpoint health @@ -98,7 +98,7 @@ dig service-name.internal ```
**2.3 Review configuration** -```text +```bash # Check schema exists and is valid nickel typecheck provisioning/schemas/platform/schemas/service.ncl @@ -112,7 +112,7 @@ env | grep -E "SERVICE_|VAULT_|REGISTRY_" ### Phase 3: Remediation (5-10 minutes) **3.1 Attempt restart** -```text +```bash # Stop service gracefully pkill -SIGTERM service-name sleep 5 @@ -135,7 +135,7 @@ curl http://localhost:PORT/health ``` **3.2 If restart fails** -```text +```bash # Check full error output RUST_LOG=trace cargo run -p service-name 2>&1 | head -100 @@ -153,7 +153,7 @@ curl http://localhost:9999/health ``` **3.3 If still failing** -```text +```bash # Restore schemas from backup git checkout HEAD~1 -- provisioning/schemas/platform/schemas/service.ncl git checkout HEAD~1 -- provisioning/schemas/platform/defaults/service-defaults.ncl @@ -174,7 +174,7 @@ curl http://localhost:PORT/health ### Phase 4: Verification (10-15 minutes) **4.1 Confirm service is healthy** -```text +```bash # Health check curl -s http://localhost:PORT/health | jq . @@ -189,7 +189,7 @@ tail -f /var/log/provisioning/service-name.log ``` **4.2 Check downstream services** -```text +```bash # If this service is dependency for others, verify they recovered for port in 8200 8081 8083 8082 9090 8080; do curl -s http://localhost:$port/health && echo "✓ Port $port" || echo "✗ Port $port" @@ -200,7 +200,7 @@ curl -s http://localhost:9090/api/v1/status | jq . ``` **4.3 Update incident status** -```text +```bash # In PagerDuty/Slack # Post: "✓ Service recovered at HH:MM, investigating root cause" # Start war room if needed @@ -209,7 +209,7 @@ curl -s http://localhost:9090/api/v1/status | jq .
### Phase 5: Root Cause Analysis (During + After Incident) **5.1 Collect data** -```text +```bash # Full logs from incident window grep "2026-01-05 14:" /var/log/provisioning/service-name.log > /tmp/incident-logs.txt @@ -226,7 +226,7 @@ df -h ``` **5.2 Investigate common causes** -```text +```bash # OOM (Out of Memory) dmesg | grep -i "killed\|oom" | tail -5 grep "memory" /var/log/provisioning/service-name.log @@ -249,7 +249,7 @@ grep -i "timeout\|connection reset" /var/log/provisioning/service-name.log ### Phase 6: Post-Incident **6.1 Document incident** -```text +```bash # Create incident summary cat > /tmp/incident-summary.md << EOF ## Incident: SERVICE-NAME Down @@ -263,7 +263,7 @@ EOF ``` **6.2 Implement prevention** -```text +```bash # Add monitoring/alerting if missing # Implement auto-restart if applicable # Update runbooks based on findings @@ -283,7 +283,7 @@ EOF ### Phase 1: Immediate Response (0-2 minutes) **1.1 Acknowledge alert and gather metrics** -```text +```bash # Get current error rate curl -s 'http://localhost:9090/api/v1/query?query=rate(http_requests_total{status=~"5.."}[5m])' | jq . 
@@ -295,7 +295,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=rate(http_requests_total{statu ``` **1.2 Check service health** -```text +```bash # Check all services for port in 8200 8081 8083 8082 9090 8080; do echo "=== Port $port ===" @@ -309,7 +309,7 @@ tail -50 /var/log/provisioning/affected-service.log | grep -i error ### Phase 2: Diagnosis (2-5 minutes) **2.1 Identify error pattern** -```text +```bash # Get detailed error messages curl -s http://localhost:9090/api/v1/alerts | jq '.data.alerts[] | select(.labels.alertname=="HighErrorRate")' @@ -321,7 +321,7 @@ tail -100 /var/log/provisioning/affected-service.log | grep -A 20 "panic" ``` **2.2 Check dependencies** -```text +```bash # Test database curl http://surrealdb:8000/health surreal sql --endpoint http://surrealdb:8000 --username root --password root --query "SELECT count(*) FROM services" @@ -335,7 +335,7 @@ curl http://registry:8081/health ``` **2.3 Check for resource issues** -```text +```bash # Memory free -h ps aux | grep service | awk '{print $6}' | head -5 @@ -354,7 +354,7 @@ netstat -an | grep ESTABLISHED | wc -l ### Phase 3: Remediation (5-10 minutes) **3.1 If dependency is unhealthy** -```text +```bash # Restart database systemctl restart surrealdb @@ -368,7 +368,7 @@ cargo run --release -p affected-service & ``` **3.2 If resource exhausted** -```text +```bash # Scale up if possible export SERVICE_WORKERS=16 # Increase workers pkill -SIGTERM affected-service @@ -380,7 +380,7 @@ cargo run --release -p affected-service & ``` **3.3 If code issue (deployment-related)** -```text +```bash # Rollback to previous version git checkout HEAD~1 cargo build --release -p affected-service @@ -400,7 +400,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=rate(http_requests_total{statu ### Phase 4: Verification (10-15 minutes) **4.1 Confirm error rate recovered** -```text +```bash # Check current error rate curl -s 
'http://localhost:9090/api/v1/query?query=rate(http_requests_total{status=~"5.."}[5m])' | jq . @@ -413,7 +413,7 @@ curl -s http://localhost:9093/api/v1/alerts | jq '.data.alerts[] | select(.label ``` **4.2 Monitor for recurrence** -```text +```bash # Set up dashboard to watch for 15 minutes watch -n 10 'curl -s "http://localhost:9090/api/v1/query?query=rate(http_requests_total{status=~\"5..\"}[5m])" | jq .' @@ -434,7 +434,7 @@ watch -n 10 'curl -s "http://localhost:9090/api/v1/query?query=rate(http_request ### Phase 1: Immediate Response (0-2 minutes) **1.1 Verify memory issue** -```text +```bash # Check system memory free -h # Look for very low "available" @@ -448,7 +448,7 @@ ps aux --sort=-%mem | head -10 ``` **1.2 Check for memory leak** -```text +```bash # Monitor memory growth over time for i in {1..5}; do ps aux | grep service-name | grep -v grep | awk '{printf "%s: %d MB @@ -462,7 +462,7 @@ done ### Phase 2: Diagnosis (2-5 minutes) **2.1 Identify leak source** -```text +```bash # Check goroutine count (Rust threads) curl -s http://localhost:PORT/debug/pprof/goroutine | head -20 @@ -475,7 +475,7 @@ curl -s http://localhost:PORT/debug/pprof/heap | head -50 ``` **2.2 Check for unbounded growth** -```text +```bash # Cache size curl -s http://localhost:PORT/metrics | grep "cache.*bytes" @@ -489,7 +489,7 @@ curl -s http://localhost:PORT/metrics | grep "queue.*depth" ### Phase 3: Remediation (5-10 minutes) **3.1 Immediate: Restart service** -```text +```bash # Graceful shutdown (allows cleanup) pkill -SIGTERM service-name sleep 30 @@ -510,7 +510,7 @@ watch -n 5 'ps aux | grep service-name | grep -v grep | awk "{printf \"Memory: % ``` **3.2 If memory still grows** -```text +```bash # Stop service pkill -9 service-name @@ -527,7 +527,7 @@ grep -i "oom\|memory\|alloc" /var/log/provisioning/service-name.log ``` **3.3 Temporary mitigation** -```text +```bash # Enable memory limits via systemd cat > /etc/systemd/system/service-name.service.d/memory-limit.conf << EOF 
[Service] @@ -542,7 +542,7 @@ systemctl restart service-name ### Phase 4: Verification (10-15 minutes) **4.1 Confirm memory stabilized** -```text +```bash # Monitor for 10 minutes for i in {1..20}; do echo "=== $(date) ===" @@ -555,7 +555,7 @@ done ``` **4.2 Post-incident actions** -```text +```bash # Enable memory profiling RUST_LOG=debug MALLOC_TRACE=/tmp/mem-trace.log cargo run -p service-name @@ -578,7 +578,7 @@ heapdump analyze /tmp/heap.bin ### Phase 1: Immediate Response (0-2 minutes) **1.1 Verify disk space issue** -```text +```bash # Check disk usage df -h df -h /var/lib/provisioning @@ -591,7 +591,7 @@ find /var/lib/provisioning -type d -exec du -sh {} + | sort -rh | head -10 ``` **1.2 Identify culprit** -```text +```bash # Check logs size du -sh /var/log/provisioning/ @@ -607,7 +607,7 @@ du -sh /dev/shm/ ### Phase 2: Remediation (2-10 minutes) **2.1 Clean up temporary files** -```text +```bash # Remove old logs (keep 7 days) find /var/log/provisioning -name "*.log" -mtime +7 -delete @@ -620,7 +620,7 @@ rm -rf /tmp/service-*.lock ``` **2.2 Archive old metrics (if Prometheus)** -```text +```bash # Archive old Prometheus data cd /var/lib/prometheus tar -czf /archive/prometheus-old-$(date +%s).tar.gz snapshots/ @@ -631,7 +631,7 @@ rm -rf snapshots/ ``` **2.3 Rotate logs** -```text +```bash # Manual log rotation logrotate -f /etc/logrotate.d/provisioning @@ -641,7 +641,7 @@ mv /var/log/provisioning/*.log.gz /archive/ ``` **2.4 Check disk usage again** -```text +```bash # Verify freed space df -h @@ -652,7 +652,7 @@ df -h ### Phase 3: Permanent Fix **3.1 Enable automatic log rotation** -```text +```bash # Create logrotate config cat > /etc/logrotate.d/provisioning << EOF /var/log/provisioning/*.log { @@ -673,7 +673,7 @@ logrotate -f /etc/logrotate.d/provisioning ``` **3.2 Implement metrics retention policy** -```text +```bash # Update Prometheus config cat >> /etc/prometheus/prometheus.yml << EOF global: @@ -686,7 +686,7 @@ systemctl restart prometheus ``` 
**3.3 Plan for growth** -```text +```bash # Calculate daily growth du -sh /var/lib/provisioning # Check again tomorrow @@ -710,7 +710,7 @@ du -sh /var/lib/provisioning ### Phase 1: Immediate Response (0-2 minutes) **1.1 Verify database is running** -```text +```bash # Check SurrealDB (if multiuser/enterprise) curl -s http://surrealdb:8000/health curl -s https://surrealdb:8000/health # Try HTTPS if HTTP fails @@ -724,7 +724,7 @@ pgrep etcd || echo "Etcd not running" ``` **1.2 Test connectivity** -```text +```bash # From affected service curl -v http://surrealdb:8000/health telnet surrealdb 8000 @@ -739,7 +739,7 @@ traceroute surrealdb ``` **1.3 Get error details** -```text +```bash # Check affected service logs tail -50 /var/log/provisioning/affected-service.log | grep -i "database\|connection" @@ -750,7 +750,7 @@ grep "connection refused\|connection reset\|timeout" /var/log/provisioning/affec ### Phase 2: Diagnosis (2-5 minutes) **2.1 If database is down** -```text +```bash # Check why database crashed journalctl -u surrealdb -n 100 tail -100 /var/log/surrealdb.log | grep -i error @@ -765,7 +765,7 @@ dmesg | tail -20 ``` **2.2 If database is running but unreachable** -```text +```bash # Check database is listening ss -tlnp | grep surrealdb lsof -i :8000 @@ -780,7 +780,7 @@ cat /etc/surrealdb/config.toml | head -20 ``` **2.3 If connection pool exhausted** -```text +```bash # Count active connections curl -s http://surrealdb:8000/api/v1/stats | jq .connections @@ -794,7 +794,7 @@ ulimit -n ### Phase 3: Remediation (5-10 minutes) **3.1 If database crashed - Restart** -```text +```bash # Restart database systemctl restart surrealdb @@ -809,7 +809,7 @@ journalctl -u surrealdb -n 20 ``` **3.2 If database won't start** -```text +```bash # Check storage corruption ls -la /var/lib/surrealdb/ @@ -825,7 +825,7 @@ systemctl restart surrealdb ``` **3.3 If connection pool exhausted** -```text +```bash # Reduce service worker count (uses fewer connections) export 
SERVICE_WORKERS=2 pkill affected-service @@ -838,7 +838,7 @@ systemctl restart surrealdb ``` **3.4 Restart affected service to clear connections** -```text +```bash # Graceful restart clears connection pool pkill -SIGTERM affected-service sleep 5 @@ -857,7 +857,7 @@ curl http://localhost:PORT/health ### Phase 4: Verification (10-15 minutes) **4.1 Confirm connectivity restored** -```text +```bash # Test database access curl -s http://surrealdb:8000/health | jq . @@ -873,7 +873,7 @@ tail -20 /var/log/provisioning/affected-service.log | grep -v "debug\|info" ``` **4.2 Monitor for recurrence** -```text +```bash # Watch for connection errors watch -n 5 'curl -s "http://localhost:9090/api/v1/query?query=increase(database_connection_errors_total[5m])" | jq .' @@ -893,7 +893,7 @@ watch -n 5 'curl -s "http://localhost:9090/api/v1/query?query=increase(database_ ### Phase 1: Immediate Response (0-2 minutes) **1.1 Check queue status** -```text +```bash # Get queue depth curl -s http://localhost:9090/api/v1/queue/status | jq . @@ -905,7 +905,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=orchestrator_queue_depth by (s ``` **1.2 Check processing rate** -```text +```bash # Tasks processed per minute curl -s 'http://localhost:9090/api/v1/query?query=rate(orchestrator_tasks_total[5m])' | jq . @@ -917,7 +917,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=rate(orchestrator_tasks_failed ``` **1.3 Get task samples** -```text +```bash # Get recent tasks curl -s http://localhost:9090/api/v1/tasks?limit=20 | jq . @@ -931,7 +931,7 @@ curl -s http://localhost:9090/api/v1/tasks?status=failed&limit=10 | jq . 
### Phase 2: Diagnosis (2-5 minutes) **2.1 Identify processing bottleneck** -```text +```bash # Check orchestrator CPU top -p ORCHESTRATOR_PID -n 1 | grep orchestrator @@ -944,7 +944,7 @@ tail -50 /var/log/provisioning/orchestrator.log | grep -i error ``` **2.2 Check if workers are healthy** -```text +```bash # Get worker status curl -s http://localhost:9090/api/v1/workers | jq . @@ -956,7 +956,7 @@ tail -20 /var/log/provisioning/orchestrator.log | grep -i "worker" ``` **2.3 Check for deadlock** -```text +```bash # Get task dependencies curl -s http://localhost:9090/api/v1/tasks/TASK_ID | jq '.dependencies' @@ -970,7 +970,7 @@ curl -s http://localhost:9090/api/v1/tasks?status=blocked | jq length ### Phase 3: Remediation (5-10 minutes) **3.1 Scale up workers** -```text +```bash # Increase worker count export ORCHESTRATOR_WORKERS=16 pkill -SIGTERM orchestrator @@ -985,7 +985,7 @@ watch -n 5 'curl -s http://localhost:9090/api/v1/queue/status | jq .depth' ``` **3.2 Skip failing tasks (if safe)** -```text +```bash # Get failed task curl -s http://localhost:9090/api/v1/tasks?status=failed&limit=1 | jq '.[] | .id' @@ -999,7 +999,7 @@ curl -s http://localhost:9090/api/v1/queue/status | jq .depth ``` **3.3 Drain queue safely** -```text +```bash # Get queue depth before curl -s http://localhost:9090/api/v1/queue/status | jq .depth @@ -1017,7 +1017,7 @@ done ``` **3.4 Clear stuck tasks (last resort)** -```text +```bash # Get list of stuck tasks STUCK_TASKS=$(curl -s http://localhost:9090/api/v1/tasks?status=pending | jq -r '.[] | select(.duration_seconds > 900) | .id') @@ -1036,7 +1036,7 @@ curl -s http://localhost:9090/api/v1/queue/status | jq .depth ### Phase 4: Verification (10-15 minutes) **4.1 Confirm queue depth decreasing** -```text +```bash # Check trend curl -s 'http://localhost:9090/api/v1/query_range?query=orchestrator_queue_depth&start=10m&step=1m' | jq . 
@@ -1049,7 +1049,7 @@ watch -n 5 'curl -s http://localhost:9090/api/v1/queue/status | jq ".depth"' ``` **4.2 Check task completion** -```text +```bash # Success rate curl -s 'http://localhost:9090/api/v1/query?query=rate(orchestrator_tasks_successful_total[5m])' | jq . @@ -1073,7 +1073,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=histogram_quantile(0.95, rate( ### Phase 1: Immediate Response (0-2 minutes) **1.1 Check registry status** -```text +```bash # Registry health curl -s http://localhost:8081/health | jq . @@ -1087,7 +1087,7 @@ tail -30 /var/log/provisioning/extension-registry.log | grep -i auth **1.2 Check auth backend** -```text +```bash # If using Gitea curl -s http://gitea:3000/health @@ -1101,7 +1101,7 @@ grep "auth.*error\|authentication.*failed" /var/log/provisioning/extension-regis ### Phase 2: Diagnosis (2-5 minutes) **2.1 Check auth configuration** -```text +```bash # View registry schema nickel typecheck provisioning/schemas/platform/schemas/extension-registry.ncl @@ -1113,7 +1113,7 @@ cat provisioning/schemas/platform/defaults/deployment/*-defaults.ncl | grep -A 5 **2.2 Check credential validity** -```text +```bash # Test Gitea login if applicable curl -X POST http://gitea:3000/api/v1/user/login -H "Content-Type: application/json" @@ -1124,7 +1124,7 @@ docker login registry.internal **2.3 Check network connectivity** -```text +```bash # Test connection to auth backend curl -v http://gitea:3000/health curl -v http://ldap:389/health @@ -1137,7 +1137,7 @@ dig gitea ### Phase 3: Remediation (5-10 minutes) **3.1 If credential expired** -```text +```bash # Generate new token curl -X POST http://gitea:3000/api/v1/users/admin/tokens -H "Authorization: token OLD_TOKEN" @@ -1156,7 +1156,7 @@ cargo run --release -p extension-registry & ``` **3.2 If auth backend down** -```text +```bash # Check if Gitea is running pgrep gitea || echo "Gitea not running" systemctl status gitea @@ -1175,7 +1175,7 @@ cargo run --release -p
extension-registry & ``` **3.3 If auth service misconfigured** -```text +```bash # Check auth configuration cat /etc/gitea/app.ini | grep -A 5 "\[auth" @@ -1193,7 +1193,7 @@ cargo run --release -p extension-registry & ### Phase 4: Verification (10-15 minutes) **4.1 Confirm auth working** -```text +```bash # Test login curl -X POST http://localhost:8081/api/auth/login -H "Content-Type: application/json" @@ -1209,7 +1209,7 @@ curl -H "Authorization: Bearer $TOKEN" http://localhost:8081/api/extensions ``` **4.2 Check alert cleared** -```text +```bash # Verify no more auth errors curl -s 'http://localhost:9090/api/v1/query?query=rate(registry_auth_failures_total[5m])' | jq . @@ -1234,7 +1234,7 @@ curl -s http://localhost:9093/api/v1/alerts | jq '.data.alerts[] | select(.label ### Phase 1: Immediate Response (0-2 minutes) **1.1 Isolate affected service** -```text +```bash # Stop affected service to prevent further corruption pkill -SIGTERM affected-service sleep 5 @@ -1245,7 +1245,7 @@ pgrep affected-service || echo "Stopped" ``` **1.2 Assess scope** -```text +```bash # What data is affected curl -s http://localhost:9090/api/v1/health | jq '.data_integrity' @@ -1260,7 +1260,7 @@ grep -i "corrupt\|checksum\|validation.*failed" /var/log/provisioning/*.log ### Phase 2: Diagnosis (2-10 minutes) **2.1 Determine extent of corruption** -```text +```bash # Run integrity check curl -X POST http://localhost:9090/api/v1/admin/integrity-check @@ -1272,7 +1272,7 @@ curl -s http://localhost:9090/api/v1/admin/integrity-check/errors | jq .
``` **2.2 Locate backup** -```text +```bash # Find most recent backup ls -lrt /backup/surrealdb*.sql | tail -5 ls -lrt /backup/provisioning*.tar.gz | tail -5 @@ -1283,7 +1283,7 @@ tar -tzf /backup/provisioning-latest.tar.gz | head -5 ``` **2.3 Determine restore point** -```text +```bash # When was corruption detected grep "corrupt\|checksum" /var/log/provisioning/*.log | head -1 @@ -1294,7 +1294,7 @@ grep "corrupt\|checksum" /var/log/provisioning/*.log | head -1 ### Phase 3: Remediation (10-30 minutes) **3.1 Restore from backup** -```text +```bash # Stop all services using the data pkill -SIGTERM -f "cargo run --release" sleep 5 @@ -1310,7 +1310,7 @@ ls -la /var/lib/provisioning/ ``` **3.2 If database restore needed** -```text +```bash # Backup current corrupted database mv /var/lib/surrealdb /var/lib/surrealdb.corrupted @@ -1328,7 +1328,7 @@ surreal sql --endpoint http://surrealdb:8000 --username root --password root ``` **3.3 Restart services and verify** -```text +```bash # Start services one by one cargo run --release -p vault-service & sleep 5 @@ -1347,7 +1347,7 @@ curl -X POST http://localhost:9090/api/v1/admin/integrity-check ### Phase 4: Post-Incident (After Incident) **4.1 Root cause analysis** -```text +```bash # What caused corruption # Check logs for: # - Crashes during writes @@ -1359,7 +1359,7 @@ grep -B 5 -A 5 "corrupt" /var/log/provisioning/*.log > /tmp/corruption-analysis. ``` **4.2 Prevent recurrence** -```text +```bash # Implement integrity checks # Add scheduled checksum validation # Add crash consistency checks @@ -1368,7 +1368,7 @@ grep -B 5 -A 5 "corrupt" /var/log/provisioning/*.log > /tmp/corruption-analysis. 
``` **4.3 Document incident** -```text +```bash # Create detailed incident report cat > /tmp/data-corruption-incident.md << EOF ## Data Corruption Incident @@ -1395,7 +1395,7 @@ EOF ### Phase 1: Immediate Response (0-5 minutes) **1.1 Assess situation** -```text +```bash # Check all services for port in 8200 8081 8083 8082 9090 8080; do curl -s http://localhost:$port/health || echo "Port $port DOWN" @@ -1412,7 +1412,7 @@ ps aux | head -20 ``` **1.2 Declare incident** -```text +```bash # Create war room # Notify all stakeholders # Post: "INCIDENT: Platform unavailable - full cluster failure - details TBD" @@ -1421,7 +1421,7 @@ ps aux | head -20 ``` **1.3 Check recent changes** -```text +```bash # What changed? git log --oneline | head -5 @@ -1435,7 +1435,7 @@ journalctl --since "30 min ago" | grep -i "update\|upgrade\|restart" ### Phase 2: Diagnosis (5-15 minutes) **2.1 Identify common cause** -```text +```bash # All services crashed? pgrep -a cargo | grep "release -p" @@ -1456,7 +1456,7 @@ dmesg | tail -20 | grep -i "killed\|oom" ``` **2.2 Check shared infrastructure** -```text +```bash # Load balancer status curl http://loadbalancer:8080/health @@ -1473,7 +1473,7 @@ systemctl status firewalld ``` **2.3 Determine if issue is infrastructure or application** -```text +```bash # Infrastructure signs: # - No network connectivity # - DNS failures @@ -1490,7 +1490,7 @@ systemctl status firewalld ### Phase 3: Recovery (15-60 minutes) **3.1 Infrastructure restart (if infrastructure issue)** -```text +```bash # Restart network systemctl restart networking @@ -1507,7 +1507,7 @@ nslookup google.com ``` **3.2 Restart all services (methodical order)** -```text +```bash # Stop everything pkill -9 -f "cargo run" @@ -1537,7 +1537,7 @@ curl http://localhost:8081/health ``` **3.3 Monitor recovery** -```text +```bash # Watch startup watch -n 5 'for port in 8200 8081 8083 8082 9090 8080; do curl -s http://localhost:$port/health | jq -r .status && echo "Port $port: OK" || echo "Port 
$port: DOWN" @@ -1553,7 +1553,7 @@ watch -n 5 'free -h && echo "---" && df -h' ### Phase 4: Verification (After Recovery) **4.1 System fully recovered** -```text +```bash # All services responding for port in 8200 8081 8083 8082 9090 8080; do status=$(curl -s http://localhost:$port/health | jq -r .status 2>/dev/null) @@ -1573,7 +1573,7 @@ df -h ``` **4.2 Data integrity verified** -```text +```bash # Run integrity check curl -X POST http://localhost:9090/api/v1/admin/integrity-check @@ -1584,7 +1584,7 @@ curl -s http://localhost:9090/api/v1/admin/integrity-check/status | jq '.corrupt ``` **4.3 Close incident** -```text +```bash # Resolve in incident tracker # Post: "✓ RESOLVED at HH:MM UTC - full cluster recovered" @@ -1625,7 +1625,7 @@ curl -s http://localhost:9090/api/v1/admin/integrity-check/status | jq '.corrupt ## Quick Reference: Critical Numbers -```text +```text Service Availability: Should be 99.9% or better Error Rate: Should be < 0.1% (< 1 error per 1000 requests) P95 Latency: Should be < 500 ms diff --git a/docs/src/operations/installer-system.md b/docs/src/operations/installer-system.md index 3c31914..3c9063a 100644 --- a/docs/src/operations/installer-system.md +++ b/docs/src/operations/installer-system.md @@ -11,7 +11,7 @@ and MCP integration. Beautiful terminal user interface with step-by-step guidance. -```text +```bash provisioning-installer ``` @@ -37,7 +37,7 @@ provisioning-installer CLI-only installation without interactive prompts, suitable for scripting. -```text +```bash provisioning-installer --headless --mode solo --yes ``` @@ -51,7 +51,7 @@ provisioning-installer --headless --mode solo --yes **Common Usage**: -```text +```bash # Solo deployment provisioning-installer --headless --mode solo --provider upcloud --yes @@ -66,7 +66,7 @@ provisioning-installer --headless --mode cicd --config ci-config.toml --yes Zero-interaction mode using pre-defined configuration files, ideal for infrastructure automation.
-```text +```bash provisioning-installer --unattended --config config.toml ``` @@ -95,7 +95,7 @@ Each mode configures resource allocation and features appropriately: Define installation parameters in TOML format for unattended mode: -```text +```toml [installation] mode = "solo" # solo, multiuser, cicd, enterprise provider = "upcloud" # upcloud, aws, etc. @@ -139,7 +139,7 @@ Model Context Protocol integration provides intelligent configuration: - Network configuration advisor - Monitoring setup assistant -```text +```bash # Use MCP for intelligent config suggestion provisioning-installer --unattended --mcp-suggest > config.toml ``` @@ -150,7 +150,7 @@ provisioning-installer --unattended --mcp-suggest > config.toml Complete deployment automation scripts for popular container runtimes: -```text +```bash # Docker deployment ./provisioning/platform/installer/deploy/docker.nu --config config.toml @@ -168,7 +168,7 @@ Complete deployment automation scripts for popular container runtimes: Infrastructure components can query MCP and install themselves: -```text +```bash # Taskservs auto-install with dependencies taskserv install-self kubernetes taskserv install-self prometheus @@ -177,7 +177,7 @@ taskserv install-self cilium ## Command Reference -```text +```bash # Show interactive installer provisioning-installer @@ -210,7 +210,7 @@ provisioning-installer --headless --mode solo --provider upcloud --cpu 2 --memor ### GitOps Workflow -```text +```bash # Define in Git cat > infrastructure/installer.toml << EOF [installation] @@ -228,7 +228,7 @@ provisioning-installer --unattended --config infrastructure/installer.toml ### Terraform Integration -```text +```hcl # Call installer as part of Terraform provisioning resource "null_resource" "provisioning_installer" { provisioner "local-exec" { @@ -239,7 +239,7 @@ resource "null_resource" "provisioning_installer" { ### Ansible Integration -```text +```yaml - name: Run provisioning installer shell: provisioning-installer
--unattended --config /tmp/config.toml vars: @@ -265,7 +265,7 @@ Pre-built templates available in `provisioning/config/installer-templates/`: ## Help and Support -```text +```bash # Show installer help provisioning-installer --help diff --git a/docs/src/operations/installer.md b/docs/src/operations/installer.md index 706d6d5..45da19a 100644 --- a/docs/src/operations/installer.md +++ b/docs/src/operations/installer.md @@ -16,7 +16,7 @@ Interactive Ratatui-based installer for the Provisioning Platform with Nushell f ## Installation -```text +```bash cd provisioning/platform/installer cargo build --release cargo install --path . @@ -26,7 +26,7 @@ cargo install --path . ### Interactive TUI (Default) -```text +```bash provisioning-installer ``` @@ -41,7 +41,7 @@ The TUI guides you through: ### Headless Mode (Automation) -```text +```bash # Quick deploy with auto-detection provisioning-installer --headless --mode solo --yes @@ -60,7 +60,7 @@ provisioning-installer --headless --config my-deployment.toml --yes ### Configuration Generation -```text +```bash # Generate config without deploying provisioning-installer --config-only @@ -72,7 +72,7 @@ provisioning-installer --headless --config ~/.provisioning/installer-config.toml ### Docker Compose -```text +```bash provisioning-installer --platform docker --mode solo ``` @@ -80,7 +80,7 @@ provisioning-installer --platform docker --mode solo ### OrbStack (macOS) -```text +```bash provisioning-installer --platform orbstack --mode solo ``` @@ -88,7 +88,7 @@ provisioning-installer --platform orbstack --mode solo ### Podman (Rootless) -```text +```bash provisioning-installer --platform podman --mode solo ``` @@ -96,7 +96,7 @@ provisioning-installer --platform podman --mode solo ### Kubernetes -```text +```bash provisioning-installer --platform kubernetes --mode enterprise ``` @@ -130,7 +130,7 @@ provisioning-installer --platform kubernetes --mode enterprise ## CLI Options -```text +```bash provisioning-installer [OPTIONS] OPTIONS:
@@ -150,7 +150,7 @@ OPTIONS: ### GitLab CI -```text +```yaml deploy_platform: stage: deploy script: @@ -161,7 +161,7 @@ deploy_platform: ### GitHub Actions -```text +```yaml - name: Deploy Provisioning Platform run: | provisioning-installer --headless --mode cicd --platform docker --yes @@ -171,7 +171,7 @@ deploy_platform: If the Rust binary is unavailable: -```text +```bash cd provisioning/platform/installer/scripts nu deploy.nu --mode solo --platform orbstack --yes ``` diff --git a/docs/src/operations/mfa-admin-setup-guide.md b/docs/src/operations/mfa-admin-setup-guide.md index e98782b..4073576 100644 --- a/docs/src/operations/mfa-admin-setup-guide.md +++ b/docs/src/operations/mfa-admin-setup-guide.md @@ -79,7 +79,7 @@ Administrators have elevated privileges including: ### Timeline for Rollout -```text +```text Week 1-2: Pilot Program ├─ Platform admins enable MFA ├─ Document issues and refine process @@ -102,7 +102,7 @@ Week 5+: Maintenance ### Step 1: Initial Login (Password Only) -```text +```bash # Login with username/password provisioning login --user admin@example.com --workspace production @@ -122,7 +122,7 @@ provisioning login --user admin@example.com --workspace production ### Step 2: Choose MFA Method -```text +```bash # Check available MFA methods provisioning mfa methods @@ -154,7 +154,7 @@ Choose one or both methods (TOTP + WebAuthn recommended): After enrollment, login again with MFA: -```text +```bash # Login (returns partial token) provisioning login --user admin@example.com --workspace production @@ -189,13 +189,13 @@ provisioning mfa verify 123456 #### 1. Initiate TOTP Enrollment -```text +```bash provisioning mfa totp enroll ``` **Output**: -```text +```text ╔════════════════════════════════════════════════════════════╗ ║ TOTP ENROLLMENT ║ ╚════════════════════════════════════════════════════════════╝ @@ -245,7 +245,7 @@ TOTP Configuration: #### 3.
Verify TOTP Code -```text +```bash # Get current code from authenticator app (6 digits, changes every 30s) # Example code: 123456 @@ -254,7 +254,7 @@ provisioning mfa totp verify 123456 **Success Response**: -```text +```text ✓ TOTP verified successfully! Backup Codes (SAVE THESE SECURELY): @@ -280,7 +280,7 @@ TOTP enrollment complete. MFA is now active for your account. **Critical**: Store backup codes in a secure location: -```text +```bash # Copy backup codes to password manager or encrypted file # NEVER store in plaintext, email, or cloud storage @@ -293,7 +293,7 @@ provisioning mfa backup-codes --show #### 5. Test TOTP Login -```text +```bash # Logout to test full login flow provisioning logout @@ -324,7 +324,7 @@ provisioning mfa verify 654321 #### 1. Check WebAuthn Support -```text +```bash # Verify WebAuthn support on your system provisioning mfa webauthn check @@ -337,13 +337,13 @@ WebAuthn Support: #### 2. Initiate WebAuthn Registration -```text +```bash provisioning mfa webauthn register --device-name "YubiKey-Admin-Primary" ``` **Output**: -```text +```text ╔════════════════════════════════════════════════════════════╗ ║ WEBAUTHN DEVICE REGISTRATION ║ ╚════════════════════════════════════════════════════════════╝ @@ -381,7 +381,7 @@ Waiting for device interaction... **Success Response**: -```text +```text ✓ WebAuthn device registered successfully! Device Details: @@ -398,7 +398,7 @@ You can now use this device for authentication. **Best Practice**: Register 2+ WebAuthn devices (primary + backup) -```text +```bash # Register backup YubiKey provisioning mfa webauthn register --device-name "YubiKey-Admin-Backup" @@ -408,7 +408,7 @@ provisioning mfa webauthn register --device-name "MacBook-TouchID" #### 5. List Registered Devices -```text +```bash provisioning mfa webauthn list # Output: @@ -431,7 +431,7 @@ Total: 3 devices #### 6.
Test WebAuthn Login -```text +```bash # Logout to test provisioning logout @@ -457,7 +457,7 @@ provisioning mfa webauthn verify **Location**: `provisioning/config/cedar-policies/production.cedar` -```text +```cedar // Production operations require MFA verification permit ( principal, @@ -498,7 +498,7 @@ permit ( **Location**: `provisioning/config/cedar-policies/development.cedar` -```text +```cedar // Development: MFA recommended but not enforced permit ( principal, @@ -522,7 +522,7 @@ permit ( ### Policy Deployment -```text +```bash # Validate Cedar policies provisioning cedar validate --policies config/cedar-policies/ @@ -539,7 +539,7 @@ provisioning cedar status production ### Testing MFA Enforcement -```text +```bash # Test 1: Production access WITHOUT MFA (should fail) provisioning login --user admin@example.com --workspace production provisioning server create web-01 --plan medium --check @@ -562,7 +562,7 @@ provisioning server create web-01 --plan medium --check Backup codes are automatically generated during first MFA enrollment: -```text +```bash # View existing backup codes (requires MFA verification) provisioning mfa backup-codes --show @@ -599,7 +599,7 @@ New Backup Codes: **Login with backup code**: -```text +```bash # Login (partial token) provisioning login --user admin@example.com --workspace production @@ -632,7 +632,7 @@ provisioning mfa verify-backup X7Y2-Z9A4-B6C1 **Example: Encrypted Storage**: -```text +```bash # Encrypt backup codes with Age provisioning mfa backup-codes --export | age -p -o ~/secure/mfa-backup-codes.age @@ -651,7 +651,7 @@ age -d ~/secure/mfa-backup-codes.age **Recovery Steps**: -```text +```bash # Step 1: Use backup code to login provisioning login --user admin@example.com --workspace production provisioning mfa verify-backup X7Y2-Z9A4-B6C1 @@ -674,7 +674,7 @@ provisioning mfa backup-codes --regenerate **Recovery Steps**: -```text +```bash # Step 1: Login with alternative method (TOTP or backup code) provisioning login --user
admin@example.com --workspace production provisioning mfa verify 123456 # TOTP from authenticator app @@ -701,7 +701,7 @@ provisioning mfa webauthn register --device-name "YubiKey-Admin-Replacement" **Recovery Steps** (Requires Admin Assistance): -```text +```bash # User contacts Security Team / Platform Admin # Admin performs MFA reset (requires 2+ admin approval) @@ -739,7 +739,7 @@ provisioning mfa webauthn register --device-name "YubiKey-New" **Recovery Steps**: -```text +```bash # Login with last backup code provisioning login --user admin@example.com --workspace production provisioning mfa verify-backup D9E2-F7G4-H6J1 @@ -763,7 +763,7 @@ provisioning mfa backup-codes --regenerate **Symptoms**: -```text +```bash provisioning mfa verify 123456 ✗ Error: Invalid TOTP code ``` @@ -776,7 +776,7 @@ provisioning mfa verify 123456 **Solutions**: -```text +```bash # Check time sync (device clock must be accurate) # macOS: sudo sntp -sS time.apple.com @@ -804,14 +804,14 @@ date && curl -s http://worldtimeapi.org/api/ip | grep datetime **Symptoms**: -```text +```bash provisioning mfa webauthn register ✗ Error: No WebAuthn authenticator detected ``` **Solutions**: -```text +```bash # Check USB connection (for hardware keys) # macOS: system_profiler SPUSBDataType | grep -i yubikey @@ -832,7 +832,7 @@ provisioning mfa webauthn check **Symptoms**: -```text +```bash provisioning server create web-01 ✗ Error: Authorization denied (MFA verification required) ``` @@ -841,7 +841,7 @@ provisioning server create web-01 **Solution**: -```text +```bash # Check token expiration provisioning auth status @@ -876,7 +876,7 @@ provisioning auth decode-token **Solutions**: -```text +```bash # Use manual entry instead provisioning mfa totp enroll --manual @@ -897,7 +897,7 @@ open ~/mfa-qr.png # View in image viewer **Symptoms**: -```text +```bash provisioning mfa verify-backup X7Y2-Z9A4-B6C1 ✗ Error: Invalid or already used backup code ``` @@ -910,7 +910,7 @@ provisioning mfa verify-backup 
X7Y2-Z9A4-B6C1 **Solutions**: -```text +```bash # Check backup code status (requires alternative login method) provisioning mfa backup-codes --status @@ -939,7 +939,7 @@ Backup Codes Status: - **Backup**: WebAuthn (YubiKey or Touch ID) - **Emergency**: Backup codes (stored securely) -```text +```bash # Enroll all three provisioning mfa totp enroll provisioning mfa webauthn register --device-name "YubiKey-Primary" @@ -948,7 +948,7 @@ provisioning mfa backup-codes --save-encrypted ~/secure/codes.enc #### 2. Secure Backup Code Storage -```text +```bash # Store in password manager (1Password example) provisioning mfa backup-codes --show | op item create --category "Secure Note" @@ -962,7 +962,7 @@ provisioning mfa backup-codes --export | #### 3. Regular Device Audits -```text +```bash # Monthly: Review registered devices provisioning mfa devices --all @@ -973,7 +973,7 @@ provisioning mfa totp remove "Old-Phone" #### 4. Test Recovery Procedures -```text +```bash # Quarterly: Test backup code login provisioning logout provisioning login --user admin@example.com --workspace dev @@ -987,7 +987,7 @@ cat ~/secure/mfa-backup-codes.enc | age -d #### 1. MFA Enrollment Verification -```text +```bash # Generate MFA enrollment report provisioning admin mfa-report --format csv > mfa-enrollment.csv @@ -999,7 +999,7 @@ provisioning admin mfa-report --format csv > mfa-enrollment.csv #### 2. Enforce MFA Deadlines -```text +```bash # Set MFA enrollment deadline provisioning admin mfa-deadline set 2025-11-01 --roles Admin,Developer @@ -1013,7 +1013,7 @@ provisioning admin mfa-remind #### 3. 
Monitor MFA Usage -```text +```bash # Audit: Find production logins without MFA provisioning audit query --action "auth:login" @@ -1037,7 +1037,7 @@ provisioning monitoring alert create - Time-limited reset window (24 hours) - Mandatory re-enrollment before production access -```text +```bash # MFA reset workflow provisioning admin mfa-reset create user@example.com --reason "Lost all devices" @@ -1052,7 +1052,7 @@ provisioning admin mfa-reset approve MFA-RESET-001 #### 1. Cedar Policy Best Practices -```text +```cedar // Require MFA for high-risk actions permit ( principal, @@ -1071,7 +1071,7 @@ permit ( #### 2. MFA Grace Periods (For Rollout) -```text +```bash # Development: No MFA required export PROVISIONING_MFA_REQUIRED=false @@ -1091,7 +1091,7 @@ export PROVISIONING_MFA_REQUIRED=true - Only used when primary admins locked out - Requires incident report after use -```text +```bash # Create emergency admin provisioning admin create emergency-admin@example.com --role EmergencyAdmin @@ -1111,7 +1111,7 @@ provisioning mfa backup-codes --show --user emergency-admin@example.com > emerge All MFA events are logged to the audit system: -```text +```bash # View MFA enrollment events provisioning audit query --action-type "mfa:*" @@ -1142,7 +1142,7 @@ provisioning audit query #### SOC2 Compliance (Access Control) -```text +```bash # Generate SOC2 access control report provisioning compliance report soc2 --control "CC6.1" @@ -1165,7 +1165,7 @@ Evidence: #### ISO 27001 Compliance (A.9.4.2 - Secure Log-on) -```text +```bash # ISO 27001 A.9.4.2 compliance report provisioning compliance report iso27001 --control "A.9.4.2" @@ -1182,7 +1182,7 @@ provisioning compliance report iso27001 #### GDPR Compliance (MFA Data Handling) -```text +```bash # GDPR data subject request (MFA data export) provisioning compliance gdpr export admin@example.com --include mfa @@ -1211,7 +1211,7 @@ provisioning compliance gdpr delete admin@example.com --include-mfa ### MFA Metrics Dashboard -```text
+```bash # Generate MFA metrics provisioning admin mfa-metrics --period 30d @@ -1249,7 +1249,7 @@ Incidents: ### Daily Admin Operations -```text +```bash # Login with MFA provisioning login --user admin@example.com --workspace production provisioning mfa verify 123456 @@ -1263,7 +1263,7 @@ provisioning mfa devices ### MFA Management -```text +```bash # TOTP provisioning mfa totp enroll # Enroll TOTP provisioning mfa totp verify 123456 # Verify TOTP code @@ -1282,7 +1282,7 @@ provisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Use backup code ### Emergency Procedures -```text +```bash # Lost device recovery (use backup code) provisioning login --user admin@example.com provisioning mfa verify-backup [code] @@ -1351,7 +1351,7 @@ provisioning admin mfa-report ### CLI Help -```text +```bash provisioning mfa help # MFA command help provisioning mfa totp --help # TOTP-specific help provisioning mfa webauthn --help # WebAuthn-specific help diff --git a/docs/src/operations/monitoring-alerting-setup.md b/docs/src/operations/monitoring-alerting-setup.md index 449a54b..63a5446 100644 --- a/docs/src/operations/monitoring-alerting-setup.md +++ b/docs/src/operations/monitoring-alerting-setup.md @@ -21,7 +21,7 @@ This guide provides complete setup instructions for monitoring and alerting on t ## Architecture -```text +```bash Services (metrics endpoints) ↓ Prometheus (scrapes every 30s) @@ -43,7 +43,7 @@ Dashboards & Visualization ### Software Requirements -```text +```bash # Prometheus (for metrics) wget https://github.com/prometheus/prometheus/releases/download/v2.48.0/prometheus-2.48.0.linux-amd64.tar.gz tar xvfz prometheus-2.48.0.linux-amd64.tar.gz @@ -80,7 +80,7 @@ sudo mv alertmanager-0.26.0.linux-amd64 /opt/alertmanager All platform services expose metrics on the `/metrics` endpoint: -```text +```bash # Health and metrics endpoints for each service curl http://localhost:8200/health # Vault health curl http://localhost:8200/metrics # Vault metrics (Prometheus format) @@ -110,7 
+110,7 @@ curl http://localhost:8084/metrics # MCP Server metrics ### 1. Create Prometheus Config -```text +```yaml # /etc/prometheus/prometheus.yml global: scrape_interval: 30s @@ -215,7 +215,7 @@ scrape_configs: ### 2. Start Prometheus -```text +```bash # Create necessary directories sudo mkdir -p /etc/prometheus /var/lib/prometheus sudo mkdir -p /etc/prometheus/rules @@ -255,7 +255,7 @@ sudo systemctl start prometheus ### 3. Verify Prometheus -```text +```bash # Check Prometheus is running curl -s http://localhost:9090/-/healthy @@ -272,7 +272,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=up' | jq . ### 1. Create Alert Rules -```text +```yaml # /etc/prometheus/rules/platform-alerts.yml groups: - name: platform_availability @@ -453,7 +453,7 @@ groups: ### 2. Validate Alert Rules -```text +```bash # Check rule syntax /opt/prometheus/promtool check rules /etc/prometheus/rules/platform-alerts.yml @@ -467,7 +467,7 @@ curl -X POST http://localhost:9090/-/reload ### 1. Create AlertManager Config -```text +```yaml # /etc/alertmanager/alertmanager.yml global: resolve_timeout: 5m @@ -562,7 +562,7 @@ inhibit_rules: ### 2. Start AlertManager -```text +```bash cd /opt/alertmanager sudo ./alertmanager --config.file=/etc/alertmanager/alertmanager.yml --storage.path=/var/lib/alertmanager @@ -595,7 +595,7 @@ sudo systemctl start alertmanager ### 3. Verify AlertManager -```text +```bash # Check AlertManager is running curl -s http://localhost:9093/-/healthy @@ -612,7 +612,7 @@ curl -s http://localhost:9093/api/v1/status | jq . ### 1. Install Grafana -```text +```bash # Install Grafana sudo apt-get install -y grafana-server @@ -626,7 +626,7 @@ sudo systemctl start grafana-server ### 2. Add Prometheus Data Source -```text +```bash # Via API curl -X POST http://localhost:3000/api/datasources -H "Content-Type: application/json" @@ -642,7 +642,7 @@ curl -X POST http://localhost:3000/api/datasources ### 3.
Create Platform Overview Dashboard -```text +```json { "dashboard": { "title": "Platform Overview", @@ -727,7 +727,7 @@ curl -X POST http://localhost:3000/api/datasources ### 4. Import Dashboard via API -```text +```bash # Save dashboard JSON to file cat > platform-overview.json << 'EOF' { @@ -748,7 +748,7 @@ curl -X POST http://localhost:3000/api/dashboards/db ### 1. Service Health Check Script -```text +```bash #!/bin/bash # scripts/check-service-health.sh @@ -788,7 +788,7 @@ exit 0 ### 2. Liveness Probe Configuration -```text +```yaml # For Kubernetes deployments apiVersion: v1 kind: Pod @@ -821,7 +821,7 @@ spec: ### 1. Elasticsearch Setup -```text +```bash # Install Elasticsearch wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.0-linux-x86_64.tar.gz tar xvfz elasticsearch-8.11.0-linux-x86_64.tar.gz @@ -831,7 +831,7 @@ cd elasticsearch-8.11.0/bin ### 2. Filebeat Configuration -```text +```yaml # /etc/filebeat/filebeat.yml filebeat.inputs: - type: log @@ -855,7 +855,7 @@ logging.files: ### 3. Kibana Dashboard -```text +```bash # Access at http://localhost:5601 # Create index pattern: provisioning-* # Create visualizations for: @@ -871,7 +871,7 @@ logging.files: ### Common Prometheus Queries -```text +```promql # Service availability (last hour) avg(increase(up[1h])) by (job) @@ -922,7 +922,7 @@ database_connection_pool_usage{job="orchestrator"} ### 1. Test Alert Firing -```text +```bash # Manually fire test alert curl -X POST http://localhost:9093/api/v1/alerts -H 'Content-Type: application/json' @@ -943,7 +943,7 @@ curl -X POST http://localhost:9093/api/v1/alerts ### 2. Stop Service to Trigger Alert -```text +```bash # Stop a service to trigger ServiceDown alert pkill -9 vault-service @@ -958,7 +958,7 @@ cargo run --release -p vault-service & ### 3.
Generate Load to Test Error Alerts -```text +```bash # Generate request load ab -n 10000 -c 100 http://localhost:9090/api/v1/health @@ -972,7 +972,7 @@ curl -s 'http://localhost:9090/api/v1/query?query=rate(http_requests_total{statu ### 1. Prometheus Data Backup -```text +```bash #!/bin/bash # scripts/backup-prometheus-data.sh @@ -997,7 +997,7 @@ find "$BACKUP_DIR" -mtime +$RETENTION_DAYS -delete ### 2. Prometheus Retention Configuration -```text +```bash # Keep metrics for 15 days /opt/prometheus/prometheus --storage.tsdb.retention.time=15d @@ -1012,7 +1012,7 @@ find "$BACKUP_DIR" -mtime +$RETENTION_DAYS -delete #### Prometheus Won't Scrape Service -```text +```bash # Check configuration /opt/prometheus/promtool check config /etc/prometheus/prometheus.yml @@ -1028,7 +1028,7 @@ curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | .last #### AlertManager Not Sending Notifications -```text +```bash # Verify AlertManager config /opt/alertmanager/amtool config routes @@ -1044,7 +1044,7 @@ curl -s http://localhost:9093/api/v1/receivers #### High Memory Usage -```text +```bash # Reduce Prometheus retention prometheus --storage.tsdb.retention.time=7d --storage.tsdb.max-block-duration=2h @@ -1079,7 +1079,7 @@ ps aux | grep prometheus | grep -v grep ## Quick Commands Reference -```text +```bash # Prometheus curl http://localhost:9090/api/v1/targets # List scrape targets curl 'http://localhost:9090/api/v1/query?query=up' # Query metric @@ -1106,7 +1106,7 @@ amtool config routes ### Sample Runbook: Service Down -```text +```markdown # Service Down Alert ## Detection diff --git a/docs/src/operations/orchestrator-system.md b/docs/src/operations/orchestrator-system.md index 7f60c28..057b966 100644 --- a/docs/src/operations/orchestrator-system.md +++ b/docs/src/operations/orchestrator-system.md @@ -14,7 +14,7 @@ A production-ready hybrid Rust/Nushell orchestrator has been implemented to solv ## Orchestrator Management -```text +```bash # Start orchestrator in
background cd provisioning/platform/orchestrator ./scripts/start-orchestrator.nu --background --provisioning-path "/usr/local/bin/provisioning" @@ -35,7 +35,7 @@ The orchestrator provides comprehensive workflow management: ### Server Workflows -```text +```bash # Submit server creation workflow nu -c "use core/nulib/workflows/server_create.nu *; server_create_workflow 'wuji' '' [] --check" @@ -45,7 +45,7 @@ provisioning servers create --orchestrated --check ### Taskserv Workflows -```text +```bash # Create taskserv workflow nu -c "use core/nulib/workflows/taskserv.nu *; taskserv create 'kubernetes' 'wuji' --check" @@ -57,7 +57,7 @@ nu -c "use core/nulib/workflows/taskserv.nu *; taskserv check-updates" ### Cluster Workflows -```text +```bash # Create cluster workflow nu -c "use core/nulib/workflows/cluster.nu *; cluster create 'buildkit' 'wuji' --check" @@ -67,7 +67,7 @@ nu -c "use core/nulib/workflows/cluster.nu *; cluster delete 'buildkit' 'wuji' - ### Workflow Management -```text +```bash # List all workflows nu -c "use core/nulib/workflows/management.nu *; workflow list" diff --git a/docs/src/operations/orchestrator.md b/docs/src/operations/orchestrator.md index 95a0a0a..2278b6a 100644 --- a/docs/src/operations/orchestrator.md +++ b/docs/src/operations/orchestrator.md @@ -34,7 +34,7 @@ The orchestrator implements a hybrid multi-storage approach: **Default Build (Filesystem Only)**: -```text +```bash cd provisioning/platform/orchestrator cargo build --release cargo run -- --port 8080 --data-dir ./data @@ -42,7 +42,7 @@ cargo run -- --port 8080 --data-dir ./data **With SurrealDB Support**: -```text +```bash cargo build --release --features surrealdb # Run with SurrealDB embedded @@ -56,7 +56,7 @@ cargo run --features surrealdb -- --storage-type surrealdb-server ### Submit Workflow -```text +```bash curl -X POST http://localhost:8080/workflows/servers/create -H "Content-Type: application/json" -d '{ @@ -111,7 +111,7 @@ Test multi-node cluster configurations 
(Kubernetes, etcd, etc.). ### Nushell CLI Integration -```text +```nushell # Quick test provisioning test quick kubernetes diff --git a/docs/src/operations/platform.md b/docs/src/operations/platform.md index beb085b..a329d24 100644 --- a/docs/src/operations/platform.md +++ b/docs/src/operations/platform.md @@ -145,7 +145,7 @@ Nushell-based CLI. ## Architecture -```text +```bash ┌─────────────────────────────────────────────────────────────┐ │ Provisioning Platform │ ├─────────────────────────────────────────────────────────────┤ @@ -171,7 +171,7 @@ Nushell-based CLI. ### Starting All Services -```text +```bash # Using platform installer (recommended) provisioning-installer --headless --mode solo --yes @@ -188,7 +188,7 @@ provisioning platform start api-server ### Checking Service Status -```text +```bash # Check all services provisioning platform status @@ -203,7 +203,7 @@ provisioning platform logs orchestrator --tail 100 --follow Each service exposes a health endpoint: -```text +```bash # Orchestrator curl http://localhost:8080/health @@ -225,7 +225,7 @@ curl http://localhost:5000/v2/ ## Service Dependencies -```text +```bash Orchestrator └── Nushell CLI @@ -252,7 +252,7 @@ OCI Registry Each service uses TOML-based configuration: -```text +```toml provisioning/ ├── config/ │ ├── orchestrator.toml @@ -269,7 +269,7 @@ provisioning/ Services expose Prometheus metrics: -```text +```bash # prometheus.yml scrape_configs: - job_name: 'orchestrator' @@ -289,7 +289,7 @@ scrape_configs: All services use structured logging: -```text +```bash # View aggregated logs provisioning platform logs --all @@ -324,7 +324,7 @@ provisioning platform logs --export /tmp/platform-logs.json ### Service Won't Start -```text +```bash # Check logs provisioning platform logs --tail 100 @@ -337,7 +337,7 @@ lsof -i : ### Service Unhealthy -```text +```bash # Check dependencies provisioning platform deps @@ -350,7 +350,7 @@ provisioning platform restart --clean ### High Resource Usage -```text 
+```bash # Check resource usage provisioning platform resources diff --git a/docs/src/operations/production-readiness-checklist.md b/docs/src/operations/production-readiness-checklist.md index b9e70fb..21aea86 100644 --- a/docs/src/operations/production-readiness-checklist.md +++ b/docs/src/operations/production-readiness-checklist.md @@ -113,7 +113,7 @@ production standards. ### Phase 1: Installation (30 minutes) -```text +```bash # 1. Run installation script ./scripts/install-provisioning.sh @@ -126,7 +126,7 @@ nu scripts/health-check.nu ### Phase 2: Initial Configuration (15 minutes) -```text +```bash # 1. Run setup wizard provisioning setup system --interactive @@ -139,7 +139,7 @@ provisioning platform health ### Phase 3: Workspace Setup (10 minutes) -```text +```bash # 1. Create production workspace provisioning setup workspace production @@ -152,7 +152,7 @@ provisioning setup validate ### Phase 4: Verification (10 minutes) -```text +```bash # 1. Run comprehensive health check provisioning setup validate --verbose @@ -206,7 +206,7 @@ provisioning server create --check **Solution**: -```text +```bash # Check Nushell installation nu --version @@ -218,7 +218,7 @@ provisioning -x setup system --interactive **Solution**: -```text +```bash # Check configuration provisioning setup validate --verbose @@ -234,7 +234,7 @@ provisioning setup system --interactive **Solution**: -```text +```bash # Run detailed health check nu scripts/health-check.nu @@ -249,7 +249,7 @@ provisioning platform restart **Solution**: -```text +```bash # Dry-run to see what would happen provisioning server create --check @@ -305,7 +305,7 @@ Expected performance on modern hardware (4+ cores, 8+ GB RAM): If issues occur post-deployment: -```text +```bash # 1.
Take backup of current configuration provisioning setup backup --path rollback-$(date +%Y%m%d-%H%M%S).tar.gz diff --git a/docs/src/operations/provisioning-server.md b/docs/src/operations/provisioning-server.md index df58b4d..b8e4700 100644 --- a/docs/src/operations/provisioning-server.md +++ b/docs/src/operations/provisioning-server.md @@ -18,7 +18,7 @@ A comprehensive REST API server for remote provisioning operations, enabling thi ## Architecture -```text +```bash ┌─────────────────┐ │ REST Client │ │ (curl, CI/CD) │ └────────┬────────┘ @@ -49,7 +49,7 @@ A comprehensive REST API server for remote provisioning operations, enabling thi ## Installation -```text +```bash cd provisioning/platform/provisioning-server cargo build --release ``` @@ -58,7 +58,7 @@ cargo build --release Create `config.toml`: -```text +```toml [server] host = "0.0.0.0" port = 8083 @@ -83,7 +83,7 @@ json_format = false ### Starting the Server -```text +```bash # Using config file provisioning-server --config config.toml @@ -100,7 +100,7 @@ provisioning-server #### Login -```text +```bash curl -X POST http://localhost:8083/v1/auth/login -H "Content-Type: application/json" -d '{ @@ -111,7 +111,7 @@ curl -X POST http://localhost:8083/v1/auth/login Response: -```text +```json { "token": "eyJhbGc...", "refresh_token": "eyJhbGc...", @@ -121,7 +121,7 @@ Response: #### Using Token -```text +```bash export TOKEN="eyJhbGc..." curl -X GET http://localhost:8083/v1/servers @@ -200,7 +200,7 @@ Read-only access to all resources and status information. ### GitHub Actions -```text +```yaml - name: Deploy Infrastructure run: | TOKEN=$(curl -X POST https://api.example.com/v1/auth/login diff --git a/docs/src/operations/service-management-guide.md b/docs/src/operations/service-management-guide.md index 116c7bb..21095b7 100644 --- a/docs/src/operations/service-management-guide.md +++ b/docs/src/operations/service-management-guide.md @@ -50,7 +50,7 @@ registry, MCP server, API gateway).
### System Architecture -```text +```bash ┌─────────────────────────────────────────┐ │ Service Management CLI │ │ (platform/services commands) │ @@ -121,7 +121,7 @@ registry, MCP server, API gateway). ### Service Definition Structure -```text +```toml [services.] name = "" type = "platform" | "infrastructure" | "utility" @@ -162,7 +162,7 @@ max_restarts = 3 ### Example: Orchestrator Service -```text +```toml [services.orchestrator] name = "orchestrator" type = "platform" @@ -200,7 +200,7 @@ Platform commands manage all services as a cohesive system. Start all auto-start services or specific services: -```text +```bash # Start all auto-start services provisioning platform start @@ -223,7 +223,7 @@ provisioning platform start --force orchestrator Stop all running services or specific services: -```text +```bash # Stop all running services provisioning platform stop @@ -245,7 +245,7 @@ provisioning platform stop --force orchestrator Restart running services: -```text +```bash # Restart all running services provisioning platform restart @@ -257,13 +257,13 @@ provisioning platform restart orchestrator Show status of all services: -```text +```bash provisioning platform status ``` **Output**: -```text +```bash Platform Services Status Running: 3/7 @@ -292,13 +292,13 @@ Running: 3/7 Check health of all running services: -```text +```bash provisioning platform health ``` **Output**: -```text +```bash Platform Health Check ✅ orchestrator: Healthy - HTTP health check passed @@ -313,7 +313,7 @@ Summary: 3 healthy, 0 unhealthy, 4 not running View service logs: -```text +```bash # View last 50 lines provisioning platform logs orchestrator @@ -332,7 +332,7 @@ Individual service management commands. 
### List Services -```text +```bash # List all services provisioning services list @@ -345,7 +345,7 @@ provisioning services list --category orchestration **Output**: -```text +```bash name type category status deployment_mode auto_start orchestrator platform orchestration running binary true control-center platform ui stopped binary false @@ -356,13 +356,13 @@ coredns infrastructure dns stopped docker false Get detailed status of a service: -```text +```bash provisioning services status orchestrator ``` **Output**: -```text +```bash Service: orchestrator Type: platform Category: orchestration @@ -377,7 +377,7 @@ Dependencies: [] ### Start Service -```text +```bash # Start service (with pre-flight checks) provisioning services start orchestrator @@ -394,7 +394,7 @@ provisioning services start orchestrator --force ### Stop Service -```text +```bash # Stop service (with dependency check) provisioning services stop orchestrator @@ -404,7 +404,7 @@ provisioning services stop orchestrator --force ### Restart Service -```text +```bash provisioning services restart orchestrator ``` @@ -412,13 +412,13 @@ provisioning services restart orchestrator Check service health: -```text +```bash provisioning services health orchestrator ``` **Output**: -```text +```bash Service: orchestrator Status: healthy Healthy: true @@ -429,7 +429,7 @@ Check duration: 15 ms ### Service Logs -```text +```bash # View logs provisioning services logs orchestrator @@ -444,13 +444,13 @@ provisioning services logs orchestrator --lines 200 Check which services are required for an operation: -```text +```bash provisioning services check server ``` **Output**: -```text +```bash Operation: server Required services: orchestrator All running: true @@ -460,7 +460,7 @@ All running: true View dependency graph: -```text +```bash # View all dependencies provisioning services dependencies @@ -472,13 +472,13 @@ provisioning services dependencies control-center Validate all service configurations: -```text +```bash
provisioning services validate ``` **Output**: -```text +```bash Total services: 7 Valid: 6 Invalid: 1 @@ -492,13 +492,13 @@ Invalid services: Get platform readiness report: -```text +```bash provisioning services readiness ``` **Output**: -```text +```bash Platform Readiness Report Total services: 7 @@ -517,7 +517,7 @@ Services: Continuous health monitoring: -```text +```bash # Monitor with default interval (30s) provisioning services monitor orchestrator @@ -535,7 +535,7 @@ Run services as native binaries. **Configuration**: -```text +```toml [services.orchestrator.deployment] mode = "binary" @@ -558,7 +558,7 @@ Run services as Docker containers. **Configuration**: -```text +```toml [services.coredns.deployment] mode = "docker" @@ -581,7 +581,7 @@ Run services via Docker Compose. **Configuration**: -```text +```toml [services.platform.deployment] mode = "docker-compose" @@ -599,7 +599,7 @@ Run services on Kubernetes. **Configuration**: -```text +```toml [services.orchestrator.deployment] mode = "kubernetes" @@ -620,7 +620,7 @@ Connect to remotely-running services. 
**Configuration**: -```text +```toml [services.orchestrator.deployment] mode = "remote" @@ -638,7 +638,7 @@ auth_token_path = "${HOME}/.provisioning/tokens/orchestrator.token" #### HTTP Health Check -```text +```toml [services.orchestrator.health_check] type = "http" @@ -650,7 +650,7 @@ method = "GET" #### TCP Health Check -```text +```toml [services.coredns.health_check] type = "tcp" @@ -661,7 +661,7 @@ port = 5353 #### Command Health Check -```text +```toml [services.custom.health_check] type = "command" @@ -672,7 +672,7 @@ expected_exit_code = 0 #### File Health Check -```text +```toml [services.custom.health_check] type = "file" @@ -689,13 +689,13 @@ must_exist = true ### Continuous Monitoring -```text +```bash provisioning services monitor orchestrator --interval 30 ``` **Output**: -```text +```bash Starting health monitoring for orchestrator (interval: 30s) Press Ctrl+C to stop 2025-10-06 14:30:00 ✅ orchestrator: HTTP health check passed @@ -711,7 +711,7 @@ Press Ctrl+C to stop Services can depend on other services: -```text +```toml [services.control-center] dependencies = ["orchestrator"] @@ -723,7 +723,7 @@ dependencies = ["orchestrator", "control-center", "mcp-server"] Services start in topological order: -```text +```bash orchestrator (order: 10) └─> control-center (order: 20) └─> api-gateway (order: 45) @@ -733,14 +733,14 @@ orchestrator (order: 10) Automatic dependency resolution when starting services: -```text +```bash # Starting control-center automatically starts orchestrator first provisioning services start control-center ``` **Output**: -```text +```bash Starting dependency: orchestrator ✅ Started orchestrator with PID 12345 Waiting for orchestrator to become healthy... 
@@ -754,20 +754,20 @@ Starting service: control-center Services can conflict with each other: -```text +```toml [services.coredns] conflicts = ["dnsmasq", "systemd-resolved"] ``` Attempting to start a conflicting service will fail: -```text +```bash provisioning services start coredns ``` **Output**: -```text +```bash ❌ Pre-flight check failed: conflicts Conflicting services running: dnsmasq ``` @@ -776,13 +776,13 @@ Conflicting services running: dnsmasq Check which services depend on a service: -```text +```bash provisioning services dependencies orchestrator ``` **Output**: -```text +```bash ## orchestrator - Type: platform - Category: orchestration @@ -796,13 +796,13 @@ provisioning services dependencies orchestrator System prevents stopping services with running dependents: -```text +```bash provisioning services stop orchestrator ``` **Output**: -```text +```bash ❌ Cannot stop orchestrator: Dependent services running: control-center, mcp-server, api-gateway Use --force to stop anyway @@ -826,13 +826,13 @@ Pre-flight checks ensure services can start successfully before attempting to st Pre-flight checks run automatically when starting services: -```text +```bash provisioning services start orchestrator ``` **Check Process**: -```text +```bash Running pre-flight checks for orchestrator... ✅ Binary found: /Users/user/.provisioning/bin/provisioning-orchestrator ✅ No conflicts detected @@ -844,13 +844,13 @@ Starting service: orchestrator Validate all services: -```text +```bash provisioning services validate ``` Validate specific service: -```text +```bash provisioning services status orchestrator ``` @@ -858,14 +858,14 @@ provisioning services status orchestrator Services with `auto_start = true` can be started automatically when needed: -```text +```bash # Orchestrator auto-starts if needed for server operations provisioning server create ``` **Output**: -```text +```bash Starting required services... ✅ Orchestrator started Creating server... 
@@ -879,7 +879,7 @@ Creating server... **Check prerequisites**: -```text +```bash provisioning services validate provisioning services status ``` @@ -895,13 +895,13 @@ provisioning services status **View health status**: -```text +```bash provisioning services health ``` **Check logs**: -```text +```bash provisioning services logs --follow ``` @@ -915,19 +915,19 @@ provisioning services logs --follow **View dependency tree**: -```text +```bash provisioning services dependencies ``` **Check dependency status**: -```text +```bash provisioning services status ``` **Start with dependencies**: -```text +```bash provisioning platform start ``` @@ -935,7 +935,7 @@ provisioning platform start **Validate dependency graph**: -```text +```bash # This is done automatically but you can check manually nu -c "use lib_provisioning/services/mod.nu *; validate-dependency-graph" ``` @@ -944,7 +944,7 @@ nu -c "use lib_provisioning/services/mod.nu *; validate-dependency-graph" If service reports running but isn't: -```text +```bash # Manual cleanup rm ~/.provisioning/services/pids/.pid @@ -956,13 +956,13 @@ provisioning services restart **Find process using port**: -```text +```bash lsof -i :9090 ``` **Kill conflicting process**: -```text +```bash kill ``` @@ -970,20 +970,20 @@ kill **Check Docker status**: -```text +```bash docker ps docker info ``` **View container logs**: -```text +```bash docker logs provisioning- ``` **Restart Docker daemon**: -```text +```bash # macOS killall Docker && open /Applications/Docker.app @@ -995,13 +995,13 @@ systemctl restart docker **View recent logs**: -```text +```bash tail -f ~/.provisioning/services/logs/.log ``` **Search logs**: -```text +```bash grep "ERROR" ~/.provisioning/services/logs/.log ``` @@ -1017,14 +1017,14 @@ Add custom services by editing `provisioning/config/services.toml`. 
Services automatically start when required by workflows: -```text +```bash # Orchestrator starts automatically if not running provisioning workflow submit my-workflow ``` ### CI/CD Integration -```text +```bash # GitLab CI before_script: - provisioning platform start orchestrator @@ -1055,7 +1055,7 @@ Services can integrate with monitoring systems via health endpoints. ### Platform Commands (Manage All Services) -```text +```bash # Start all auto-start services provisioning platform start @@ -1085,7 +1085,7 @@ provisioning platform logs orchestrator --follow ### Service Commands (Individual Services) -```text +```bash # List all services provisioning services list @@ -1127,7 +1127,7 @@ provisioning services monitor orchestrator --interval 30 ### Dependency & Validation -```text +```bash # View dependency graph provisioning services dependencies @@ -1162,7 +1162,7 @@ provisioning services check server ### Docker Compose -```text +```bash # Start all services cd provisioning/platform docker-compose up -d @@ -1187,7 +1187,7 @@ docker-compose down -v ### Service State Directories -```text +```bash ~/.provisioning/services/ ├── pids/ # Process ID files ├── state/ # Service state (JSON) @@ -1214,7 +1214,7 @@ docker-compose down -v #### Start Platform for Development -```text +```bash # Start core services provisioning platform start orchestrator @@ -1227,7 +1227,7 @@ provisioning platform health #### Start Full Platform Stack -```text +```bash # Use Docker Compose cd provisioning/platform docker-compose up -d @@ -1239,7 +1239,7 @@ provisioning platform health #### Debug Service Issues -```text +```bash # Check service status provisioning services status @@ -1258,7 +1258,7 @@ provisioning services restart #### Safe Service Shutdown -```text +```bash # Check dependents nu -c "use lib_provisioning/services/mod.nu *; can-stop-service orchestrator" @@ -1275,7 +1275,7 @@ provisioning services stop orchestrator --force #### Service Won't Start -```text +```bash # 1. 
Check prerequisites
provisioning services validate

@@ -1292,7 +1292,7 @@ docker images | grep

#### Health Check Failing

-```text
+```bash
# Check endpoint manually
curl http://localhost:9090/health

@@ -1305,7 +1305,7 @@ provisioning services monitor --interval 10

#### PID File Stale

-```text
+```bash
# Remove stale PID file
rm ~/.provisioning/services/pids/.pid

@@ -1315,7 +1315,7 @@ provisioning services restart

#### Port Already in Use

-```text
+```bash
# Find process using port
lsof -i :9090

@@ -1332,7 +1332,7 @@ provisioning services start

#### Server Operations

-```text
+```bash
# Orchestrator auto-starts if needed
provisioning server create

@@ -1342,7 +1342,7 @@ provisioning services check server

#### Workflow Operations

-```text
+```bash
# Orchestrator auto-starts
provisioning workflow submit my-workflow

@@ -1352,7 +1352,7 @@ provisioning services status orchestrator

#### Test Operations

-```text
+```bash
# Orchestrator required for test environments
provisioning test quick kubernetes

@@ -1375,7 +1375,7 @@ Services start based on:

Edit `provisioning/config/services.toml`:

-```text
+```toml
[services..startup]
auto_start = true # Enable auto-start
start_timeout = 30 # Timeout in seconds
start_order = 10 # Startup priority
```

@@ -1384,7 +1384,7 @@

#### Health Check Configuration

-```text
+```toml
[services..health_check]
type = "http" # http, tcp, command, file
interval = 10 # Seconds between checks
@@ -1409,7 +1409,7 @@ expected_status = 200

### Getting Help

-```text
+```bash
# View documentation
cat docs/user/SERVICE_MANAGEMENT_GUIDE.md | less

diff --git a/docs/src/quick-reference/general.md b/docs/src/quick-reference/general.md
index 6f91819..a5669c5 100644
--- a/docs/src/quick-reference/general.md
+++ b/docs/src/quick-reference/general.md
@@ -20,7 +20,7 @@

### Key Files

-```text
+```plaintext
provisioning/platform/rag/src/
├── agent.rs - RAG orchestration
├── llm.rs - Claude API client

@@ -37,20 +37,20 @@ provisioning/platform/rag/src/

### Build & Test

-```text
+```bash
cd /Users/Akasha/project-provisioning/provisioning/platform
cargo test -p provisioning-rag
```

### Run Example

-```text
+```bash
cargo run --example rag_agent
```

### Check Tests

-```text
+```bash
cargo test -p provisioning-rag --lib
# Result: test result: ok. 22 passed; 0 failed
```
@@ -74,7 +74,7 @@ cargo test -p provisioning-rag --lib

### Environment Variables

-```text
+```bash
# Required for Claude integration
export ANTHROPIC_API_KEY="sk-..."

@@ -98,21 +98,21 @@ export OPENAI_API_KEY="sk-..."

### 1. Ask Questions

-```text
+```rust
let response = agent.ask("How do I deploy?").await?;
// Returns: answer + sources + confidence
```

### 2. Semantic Search

-```text
+```rust
let results = retriever.search("deployment", Some(5)).await?;
// Returns: top-5 similar documents
```

### 3. Workspace Awareness

-```text
+```rust
let context = workspace.enrich_query("deploy");
// Automatically includes: taskservs, providers, infrastructure
```
@@ -190,7 +190,7 @@ Coming soon (next phase):

### As a Library

-```text
+```rust
use provisioning_rag::{RagAgent, DbConnection, RetrieverEngine};

// Initialize
@@ -204,7 +204,7 @@ let response = agent.ask("question").await?;

### Via MCP Server (When Enabled)

-```text
+```http
POST /tools/rag_answer_question
{
"question": "How do I deploy?"
@@ -213,7 +213,7 @@ POST /tools/rag_answer_question

### From CLI (via example)

-```text
+```bash
cargo run --example rag_agent
```

@@ -298,7 +298,7 @@ None - System is production ready

## 🎓 Architecture Overview

-```text
+```plaintext
User Question
↓
Query Enrichment (Workspace context)
diff --git a/docs/src/quick-reference/justfile-recipes.md b/docs/src/quick-reference/justfile-recipes.md
index b5f88d1..3dce8dd 100644
--- a/docs/src/quick-reference/justfile-recipes.md
+++ b/docs/src/quick-reference/justfile-recipes.md
@@ -2,7 +2,7 @@

## Authentication (auth.just)

-```text
+```bash
# Login & Logout
just auth-login # Login to platform
just auth-logout # Logout current session
@@ -28,7 +28,7 @@ just auth-help # Complete authentication guide

## KMS (kms.just)

-```text
+```bash
# Encryption
just kms-encrypt # Encrypt file with RustyVault
just kms-decrypt # Decrypt file
@@ -60,7 +60,7 @@ just kms-help # Complete KMS guide

## Orchestrator (orchestrator.just)

-```text
+```bash
# Status
just orch-status # Show orchestrator status
just orch-health # Health check
@@ -105,7 +105,7 @@ just orch-help # Complete orchestrator guide

## Plugin Testing

-```text
+```bash
just test-plugins # Test all plugins
just test-plugin-auth # Test auth plugin
just test-plugin-kms # Test KMS plugin
@@ -117,7 +117,7 @@ just list-plugins # List installed plugins

### Complete Authentication Setup

-```text
+```bash
just auth-login alice
just mfa-enroll-totp
just auth-status
@@ -125,7 +125,7 @@ just auth-status

### Production Deployment Workflow

-```text
+```bash
# Login with MFA
just auth-login-prod alice

@@ -140,7 +140,7 @@ just batch-monitor

### KMS Setup and Testing

-```text
+```bash
# Setup KMS
just kms-setup

@@ -153,7 +153,7 @@ just encrypt-configs config/

### Monitoring Operations

-```text
+```bash
# Check orchestrator health
just orch-health

@@ -169,7 +169,7 @@ just orch-metrics

### Cleanup Operations

-```text
+```bash
# Cleanup old workflows
just workflow-cleanup-old 30

diff --git
a/docs/src/quick-reference/oci.md b/docs/src/quick-reference/oci.md index 0493e10..24eed88 100644 --- a/docs/src/quick-reference/oci.md +++ b/docs/src/quick-reference/oci.md @@ -6,7 +6,7 @@ ## Prerequisites -```text +```bash # Install OCI tool (choose one) brew install oras # Recommended brew install skopeo # Alternative @@ -17,7 +17,7 @@ go install github.com/google/go-containerregistry/cmd/crane@latest # Alternativ ## Quick Start (5 Minutes) -```text +```bash # 1. Start local OCI registry provisioning oci-registry start @@ -41,7 +41,7 @@ provisioning oci list ### Extension Discovery -```text +```bash # List all extensions provisioning oci list @@ -57,7 +57,7 @@ provisioning oci inspect kubernetes:1.28.0 ### Extension Installation -```text +```bash # Pull specific version provisioning oci pull kubernetes:1.28.0 @@ -72,7 +72,7 @@ provisioning oci pull postgres:15.0 ### Extension Publishing -```text +```bash # Login (one-time) provisioning oci login localhost:5000 @@ -88,7 +88,7 @@ provisioning oci tags redis ### Dependency Management -```text +```bash # Resolve all dependencies provisioning dep resolve @@ -113,7 +113,7 @@ provisioning dep validate **File**: `workspace/config/provisioning.yaml` -```text +```yaml dependencies: extensions: source_type: "oci" @@ -140,7 +140,7 @@ dependencies: **File**: `extensions/{type}/{name}/manifest.yaml` -```text +```yaml name: redis type: taskserv version: 1.0.0 @@ -165,7 +165,7 @@ min_provisioning_version: "3.0.0" ## Extension Development Workflow -```text +```bash # 1. 
Create extension
provisioning generate extension taskserv redis

@@ -195,7 +195,7 @@ provisioning oci inspect redis:1.0.0

### Local Registry (Development)

-```text
+```bash
# Start
provisioning oci-registry start

@@ -211,7 +211,7 @@ provisioning oci-registry status

### Remote Registry (Production)

-```text
+```bash
# Login to Harbor
provisioning oci login harbor.company.com --username admin

@@ -228,7 +228,7 @@ provisioning oci login harbor.company.com --username admin

## Migration from Monorepo

-```text
+```bash
# 1. Dry-run migration (preview)
provisioning migrate-to-oci workspace_dev --dry-run

@@ -251,7 +251,7 @@ provisioning rollback-migration workspace_dev

### Registry Not Running

-```text
+```bash
# Check if registry is running
curl http://localhost:5000/v2/_catalog

@@ -261,7 +261,7 @@ provisioning oci-registry start

### Authentication Failed

-```text
+```bash
# Login again
provisioning oci login localhost:5000

@@ -271,7 +271,7 @@ echo "your-token" > ~/.provisioning/tokens/oci

### Extension Not Found

-```text
+```bash
# Check registry connection
provisioning oci config

@@ -284,7 +284,7 @@ provisioning oci list --namespace provisioning-extensions

### Dependency Resolution Failed

-```text
+```bash
# Validate dependencies
provisioning dep validate

@@ -303,13 +303,13 @@ provisioning dep check-updates

✅ **DO**: Use semantic versioning (MAJOR.MINOR.PATCH)

-```text
+```yaml
version: 1.2.3
```

❌ **DON'T**: Use arbitrary versions

-```text
+```yaml
version: latest # Unpredictable
```
@@ -317,7 +317,7 @@ version: latest # Unpredictable

✅ **DO**: Specify version constraints

-```text
+```yaml
dependencies:
  containerd: ">=1.7.0"
  etcd: "^3.5.0"
@@ -325,7 +325,7 @@ dependencies:

❌ **DON'T**: Use wildcards

-```text
+```yaml
dependencies:
  containerd: "*" # Too permissive
```
@@ -349,7 +349,7 @@ dependencies:

### Pull and Install

-```text
+```bash
# Pull extension
provisioning oci pull kubernetes:1.28.0

@@ -362,7 +362,7 @@ provisioning taskserv create kubernetes

### Update
Extensions

-```text
+```bash
# Check for updates
provisioning dep check-updates

@@ -375,7 +375,7 @@ provisioning dep resolve --update

### Copy Between Registries

-```text
+```bash
# Copy from local to production
provisioning oci copy localhost:5000/provisioning-extensions/kubernetes:1.28.0

@@ -384,7 +384,7 @@ provisioning oci copy

### Publish Multiple Extensions

-```text
+```bash
# Publish all taskservs
for dir in (ls extensions/taskservs); do
provisioning oci push $dir.name $dir.name 1.0.0
done

@@ -395,7 +395,7 @@

## Environment Variables

-```text
+```bash
# Override registry
export PROVISIONING_OCI_REGISTRY="harbor.company.com"

@@ -410,7 +410,7 @@ export PROVISIONING_OCI_TOKEN="your-token-here"

## File Locations

-```text
+```plaintext
~/.provisioning/
├── oci-cache/ # OCI artifact cache
├── oci-registry/ # Local Zot registry data
diff --git a/docs/src/quick-reference/platform-operations-cheatsheet.md b/docs/src/quick-reference/platform-operations-cheatsheet.md
index fd1a705..e8a6367 100644
--- a/docs/src/quick-reference/platform-operations-cheatsheet.md
+++ b/docs/src/quick-reference/platform-operations-cheatsheet.md
@@ -6,7 +6,7 @@

## Mode Selection (One Command)

-```text
+```bash
# Development/Testing
export VAULT_MODE=solo REGISTRY_MODE=solo RAG_MODE=solo AI_SERVICE_MODE=solo DAEMON_MODE=solo

@@ -39,7 +39,7 @@ export VAULT_MODE=enterprise REGISTRY_MODE=enterprise RAG_MODE=enterprise AI_SER

## Service Startup (Order Matters)

-```text
+```bash
# Build everything first
cargo build --release

@@ -74,7 +74,7 @@ cargo run --release -p installer &

## Quick Checks (All Services)

-```text
+```bash
# Check all services running
pgrep -a cargo | grep "release -p"

@@ -96,7 +96,7 @@ ps aux | grep "cargo run --release" | grep -v grep

### View Config Files

-```text
+```bash
# List all available schemas
ls -la provisioning/schemas/platform/schemas/

@@ -109,7 +109,7 @@ nickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl

### Apply Config Changes

-```text
+```bash
# 1. Update schema or defaults
vim provisioning/schemas/platform/schemas/vault-service.ncl
# Or update defaults:
@@ -137,7 +137,7 @@ curl http://localhost:8200/api/config | jq .

### Stop Services

-```text
+```bash
# Stop all gracefully
pkill -SIGTERM -f "cargo run --release"

@@ -153,7 +153,7 @@ pkill -9 -f "cargo run --release"

### Restart Services

-```text
+```bash
# Single service
pkill -SIGTERM vault-service && sleep 2 && cargo run --release -p vault-service &

@@ -166,7 +166,7 @@ cargo build --release

### Check Logs

-```text
+```bash
# Follow service logs (if using journalctl)
journalctl -fu provisioning-vault
journalctl -fu provisioning-orchestrator

@@ -184,7 +184,7 @@ grep -i error /var/log/provisioning/*.log

### SurrealDB (Multiuser/Enterprise)

-```text
+```bash
# Check SurrealDB status
curl -s http://surrealdb:8000/health | jq .

@@ -206,7 +206,7 @@ surreal import --endpoint http://surrealdb:8000

### Etcd (Enterprise HA)

-```text
+```bash
# Check Etcd cluster health
etcdctl --endpoints=http://etcd:2379 endpoint health

@@ -232,7 +232,7 @@ etcdctl --endpoints=http://etcd:2379 snapshot restore backup.db

### Override Individual Settings

-```text
+```bash
# Vault overrides
export VAULT_SERVER_URL=http://vault-custom:8200
export VAULT_STORAGE_BACKEND=etcd

@@ -268,7 +268,7 @@ export DAEMON_LOGGING_LEVEL=info

### Quick Status (30 seconds)

-```text
+```bash
# Test all services with visual status
curl -s http://localhost:8200/health && echo "✓ Vault" || echo "✗ Vault"
curl -s http://localhost:8081/health && echo "✓ Registry" || echo "✗ Registry"

@@ -280,7 +280,7 @@ curl -s http://localhost:8080/health && echo "✓ Control Center" || echo "✗ C

### Detailed Status

-```text
+```bash
# Orchestrator cluster status
curl -s http://localhost:9090/api/v1/cluster/status | jq .

@@ -303,7 +303,7 @@ curl -s http://localhost:9090/api/v1/tasks?limit=10 | jq .
### System Resources

-```text
+```bash
# Memory usage
free -h

@@ -325,7 +325,7 @@ watch -n 1 'free -h && echo "---" && df -h'

### Service Performance

-```text
+```bash
# Monitor service memory usage
ps aux | grep "cargo run" | awk '{print $2, $6}' | while read pid mem; do
echo "$pid: $(bc <<< "$mem / 1024")MB"
done

@@ -344,7 +344,7 @@ curl -s http://localhost:9090/api/v1/metrics/errors | jq .

### Service Won't Start

-```text
+```bash
# Check port in use
lsof -i :8200
ss -tlnp | grep 8200

@@ -364,7 +364,7 @@ ls -la provisioning/schemas/platform/defaults/deployment/$VAULT_MODE-defaults.nc

### High Memory Usage

-```text
+```bash
# Identify top memory consumers
ps aux --sort=-%mem | head -10

@@ -380,7 +380,7 @@ valgrind --leak-check=full target/release/vault-service

### Database Connection Error

-```text
+```bash
# Test database connectivity
curl http://surrealdb:8000/health
etcdctl --endpoints=http://etcd:2379 endpoint health

@@ -400,7 +400,7 @@ grep -i "connection" /var/log/provisioning/*.log

### Services Not Communicating

-```text
+```bash
# Test inter-service connectivity
curl http://localhost:8200/health
curl http://localhost:8081/health

@@ -420,7 +420,7 @@ echo "127.0.0.1 vault.internal" >> /etc/hosts

### Full Service Recovery

-```text
+```bash
# 1. Stop everything
pkill -9 -f "cargo run"

@@ -444,7 +444,7 @@ curl http://localhost:8081/health

### Rollback to Previous Configuration

-```text
+```bash
# 1. Stop affected service
pkill -SIGTERM vault-service

@@ -467,7 +467,7 @@ curl http://localhost:8200/api/config | jq .
### Data Recovery

-```text
+```bash
# Restore SurrealDB from backup
surreal import --endpoint http://surrealdb:8000 --username root --password root < /backup/surreal-20260105.sql

@@ -484,7 +484,7 @@ chmod -R 755 /tmp/provisioning-solo/vault/

## File Locations

-```text
+```bash
# Configuration files (PUBLIC - version controlled)
provisioning/schemas/platform/ # Nickel schemas & defaults
provisioning/.typedialog/platform/ # Forms & generation scripts

@@ -536,7 +536,7 @@ target/release/provisioning-daemon

### Deploy Mode Change

-```text
+```bash
# Migrate solo to multiuser
pkill -SIGTERM -f "cargo run"
sleep 5

@@ -549,7 +549,7 @@ cargo run --release -p extension-registry &

### Restart Single Service Without Downtime

-```text
+```bash
# For load-balanced deployments:
# 1. Remove from load balancer
# 2. Graceful shutdown

@@ -565,7 +565,7 @@ curl http://localhost:8200/health

### Scale Workers for Load

-```text
+```bash
# Increase workers when under load
export VAULT_SERVER_WORKERS=16
pkill -SIGTERM vault-service

@@ -586,7 +586,7 @@ cargo run --release -p vault-service &

## Diagnostic Bundle

-```text
+```bash
# Generate complete diagnostics for support
echo "=== Processes ===" && pgrep -a cargo
echo "=== Listening Ports ===" && ss -tlnp
diff --git a/docs/src/quick-reference/sudo-password-handling.md b/docs/src/quick-reference/sudo-password-handling.md
index abac429..44ddf7f 100644
--- a/docs/src/quick-reference/sudo-password-handling.md
+++ b/docs/src/quick-reference/sudo-password-handling.md
@@ -11,7 +11,7 @@ Sudo password is needed when `fix_local_hosts: true` in your server configuratio

### ✅ Best: Cache Credentials First

-```text
+```bash
sudo -v && provisioning -c server create
```

@@ -19,7 +19,7 @@ Credentials cached for 5 minutes, no prompts during operation.

### ✅ Alternative: Disable Host Fixing

-```text
+```nickel
# In your settings.ncl or server config
fix_local_hosts = false
```

@@ -28,7 +28,7 @@ No sudo required, manual `/etc/hosts` management.
### ✅ Manual: Enter Password When Prompted

-```text
+```bash
provisioning -c server create
# Enter password when prompted
# Or press CTRL-C to cancel

@@ -43,7 +43,7 @@ behavior** and cannot be caught by Nushell.

When you press CTRL-C at the password prompt:

-```text
+```plaintext
Password: [CTRL-C]

Error: nu::shell::error
@@ -59,7 +59,7 @@ The system **does** handle these cases gracefully:

**No password provided** (just press Enter):

-```text
+```plaintext
Password: [Enter]

⚠ Operation cancelled - sudo password required but not provided
@@ -68,7 +68,7 @@ Password: [Enter]

**Wrong password 3 times**:

-```text
+```plaintext
Password: [wrong]
Password: [wrong]
Password: [wrong]

@@ -81,7 +81,7 @@ Password: [wrong]

To avoid password prompts entirely:

-```text
+```bash
# Best: Pre-cache credentials (lasts 5 minutes)
sudo -v && provisioning -c server create

@@ -91,7 +91,7 @@ sudo -v && provisioning -c server create

## Common Commands

-```text
+```bash
# Cache sudo for 5 minutes
sudo -v

@@ -119,19 +119,19 @@ prvng -c server create

### Development (Local)

-```text
+```nickel
fix_local_hosts = true # Convenient for local testing
```

### CI/CD (Automation)

-```text
+```nickel
fix_local_hosts = false # No interactive prompts
```

### Production (Servers)

-```text
+```nickel
fix_local_hosts = false # Managed by configuration management
```
diff --git a/docs/src/roadmap/README.md b/docs/src/roadmap/README.md
index 454c3d6..75c128b 100644
--- a/docs/src/roadmap/README.md
+++ b/docs/src/roadmap/README.md
@@ -53,7 +53,7 @@ Full Rust implementations with graceful HTTP fallback:

- Auth verification: 5x faster (10ms vs 50ms)

**Status**: Source code complete with comprehensive tests.
Binaries NOT YET BUILT - requires: -```text +```bash cargo build --release -p nu_plugin_auth cargo build --release -p nu_plugin_kms cargo build --release -p nu_plugin_orchestrator @@ -89,28 +89,28 @@ Type-safe infrastructure orchestration with 275+ schema files: ## Using These Features **AI Integration**: -```text +```bash provisioning ai template --prompt "describe infrastructure" provisioning ai query --prompt "configuration question" provisioning ai chat # Interactive mode ``` **Workflows**: -```text +```bash batch submit workflow.ncl --name "deployment" --wait batch monitor batch status ``` **Plugins** (when built): -```text +```bash provisioning auth verify-token $token provisioning kms encrypt "secret" provisioning orch tasks ``` **Help**: -```text +```bash provisioning help ai provisioning help plugins provisioning help workflows diff --git a/docs/src/roadmap/ai-integration.md b/docs/src/roadmap/ai-integration.md index 7fd791f..e822741 100644 --- a/docs/src/roadmap/ai-integration.md +++ b/docs/src/roadmap/ai-integration.md @@ -26,7 +26,7 @@ decisions. 
- Interactive refinement of configurations

**Example** (future):
-```text
+```plaintext
User: "I need a Kubernetes cluster with 3 worker nodes, PostgreSQL database, and Redis cache"
AI: → Generates provisioning/workspace/config/cluster.ncl + database.ncl + cache.ncl
```
diff --git a/docs/src/roadmap/native-plugins.md b/docs/src/roadmap/native-plugins.md
index f34aeb8..c83226e 100644
--- a/docs/src/roadmap/native-plugins.md
+++ b/docs/src/roadmap/native-plugins.md
@@ -19,7 +19,7 @@ This document describes the complete Nushell plugin system with all core plugins

- Dynamic configuration generation

**Usage**:
-```text
+```nushell
use provisioning/core/plugins/nushell-plugins/nu_plugin_tera
template render "config.j2" $variables
```

@@ -39,7 +39,7 @@ template render "config.j2" $variables

- ✅ Multi-factor authentication

**Usage**:
-```text
+```bash
provisioning auth verify-token $token
provisioning auth generate-jwt --user alice
provisioning auth enable-mfa --type totp
@@ -58,7 +58,7 @@ provisioning auth enable-mfa --type totp

- ✅ Hardware security module (HSM) support

**Usage**:
-```text
+```bash
provisioning kms encrypt --key primary "secret data"
provisioning kms decrypt "encrypted:..."
provisioning kms rotate --key primary
@@ -83,7 +83,7 @@ provisioning kms rotate --key primary

- ✅ Progress monitoring

**Usage**:
-```text
+```bash
provisioning orchestrator status
provisioning workflow execute deployment.nu
provisioning workflow list
@@ -158,7 +158,7 @@ Fallback implementations allow core functionality without native plugins.
### Available

-```text
+```bash
# Template rendering (nu_plugin_tera)
provisioning config generate --template workspace.j2

@@ -168,7 +168,7 @@ provisioning help plugins

### Fallback (HTTP-based)

-```text
+```bash
# Authentication (HTTP fallback)
provisioning auth verify-token $token

@@ -181,7 +181,7 @@ provisioning orchestrator status

### Manual Nushell Workflows

-```text
+```nushell
# Use Nushell workflows instead of plugins
provisioning workflow list
provisioning workflow execute deployment.nu
diff --git a/docs/src/roadmap/nickel-workflows.md b/docs/src/roadmap/nickel-workflows.md
index 81e8bda..81b02b5 100644
--- a/docs/src/roadmap/nickel-workflows.md
+++ b/docs/src/roadmap/nickel-workflows.md
@@ -20,7 +20,7 @@ This document describes the complete Nickel workflow system. Both Nushell and Ni

- Logging and debugging

**Usage**:
-```text
+```bash
# List available workflows
provisioning workflow list

@@ -48,7 +48,7 @@ Nickel workflows provide type-safe, validated workflow definitions with:

#### Type-Safe Workflow Definitions

-```text
+```nickel
# Example (future)
let workflow = {
name = "multi-provider-deployment",
@@ -198,7 +198,7 @@ When Nickel workflows become available:

## Example: Future Nickel Workflow

-```text
+```nickel
# Future example (not yet working)
let deployment_workflow = {
metadata = {
diff --git a/docs/src/security/authentication-layer-guide.md b/docs/src/security/authentication-layer-guide.md
index 0d93e2e..a298f3f 100644
--- a/docs/src/security/authentication-layer-guide.md
+++ b/docs/src/security/authentication-layer-guide.md
@@ -54,7 +54,7 @@ MFA support, providing enterprise-grade security with graceful user experience.

### 1. Login to Platform

-```text
+```bash
# Interactive login (password prompt)
provisioning auth login

@@ -67,7 +67,7 @@ provisioning auth login admin --url http://control.example.com:9080

### 2.
Enroll MFA (First Time) -```text +```bash # Enroll TOTP (Google Authenticator) provisioning auth mfa enroll totp @@ -77,14 +77,14 @@ provisioning auth mfa enroll totp ### 3. Verify MFA (For Sensitive Operations) -```text +```bash # Get 6-digit code from authenticator app provisioning auth mfa verify --code 123456 ``` ### 4. Check Authentication Status -```text +```bash # View current authentication status provisioning auth status @@ -98,7 +98,7 @@ provisioning auth verify ### Server Operations -```text +```bash # ✅ CREATE - Requires auth (prod: +MFA) provisioning server create web-01 # Auth required provisioning server create web-01 --check # Auth skipped (check mode) @@ -114,7 +114,7 @@ provisioning server ssh web-01 # No auth required ### Task Service Operations -```text +```bash # ✅ CREATE - Requires auth (prod: +MFA) provisioning taskserv create kubernetes # Auth required provisioning taskserv create kubernetes --check # Auth skipped @@ -128,7 +128,7 @@ provisioning taskserv list # No auth required ### Cluster Operations -```text +```bash # ✅ CREATE - Requires auth (prod: +MFA) provisioning cluster create buildkit # Auth required provisioning cluster create buildkit --check # Auth skipped @@ -139,7 +139,7 @@ provisioning cluster delete buildkit # Auth + MFA required ### Batch Workflows -```text +```bash # ✅ SUBMIT - Requires auth (prod: +MFA) provisioning batch submit workflow.ncl # Auth required provisioning batch submit workflow.ncl --skip-auth # Auth skipped (if allowed) @@ -155,7 +155,7 @@ provisioning batch status # No auth required ### Security Settings (`config.defaults.toml`) -```text +```toml [security] require_auth = true # Enable authentication system require_mfa_for_production = true # MFA for prod environment @@ -175,7 +175,7 @@ url = "http://localhost:9080" # Control center URL ### Environment-Specific Configuration -```text +```toml # Development [environments.dev] security.bypass.allow_skip_auth = true # Allow auth bypass in dev @@ -192,7 +192,7 
@@ security.require_mfa_for_production = true

### Environment Variable Method

-```text
+```bash
# Export environment variable (dev/test only)
export PROVISIONING_SKIP_AUTH=true

@@ -205,14 +205,14 @@ unset PROVISIONING_SKIP_AUTH

### Per-Command Flag

-```text
+```bash
# Some commands support --skip-auth flag
provisioning batch submit workflow.ncl --skip-auth
```

### Check Mode (Always Bypasses Auth)

-```text
+```bash
# Check mode is always allowed without auth
provisioning server create web-01 --check
provisioning taskserv create kubernetes --check
@@ -227,7 +227,7 @@ provisioning taskserv create kubernetes --check

### Not Authenticated

-```text
+```plaintext
❌ Authentication Required

Operation: server create web-01
@@ -245,7 +245,7 @@ Note: Your credentials will be securely stored in the system keyring.

### MFA Required

-```text
+```plaintext
❌ MFA Verification Required

Operation: server delete web-01
@@ -265,7 +265,7 @@ Don't have MFA set up?

### Token Expired

-```text
+```plaintext
❌ Authentication Required

Operation: server create web-02
@@ -282,7 +282,7 @@ Error: Token verification failed

All authenticated operations are logged to the audit log file with the following information:

-```text
+```json
{
"timestamp": "2025-10-09 14:32:15",
"user": "admin",
@@ -299,7 +299,7 @@ All authenticated operations are logged to the audit log file with the following

### Viewing Audit Logs

-```text
+```bash
# View raw audit log
cat provisioning/logs/audit.log

@@ -328,7 +328,7 @@ The authentication system integrates with the provisioning platform's control ce

### Starting Control Center

-```text
+```bash
# Start control center (required for authentication)
cd provisioning/platform/control-center
cargo run --release
@@ -336,7 +336,7 @@ cargo run --release

Or use the orchestrator which includes control center:

-```text
+```bash
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
```

@@ -347,7 +347,7 @@ cd provisioning/platform/orchestrator

### Manual Testing
-```text
+```bash
# 1. Start control center
cd provisioning/platform/control-center
cargo run --release &

@@ -367,7 +367,7 @@ provisioning server create test-server --check

### Automated Testing

-```text
+```bash
# Run authentication tests
nu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu
```
@@ -430,7 +430,7 @@ nu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu

### Authentication Flow

-```text
+```plaintext
┌─────────────┐
│ User Command│
└──────┬──────┘
@@ -489,7 +489,7 @@ nu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu

### File Structure

-```text
+```plaintext
provisioning/
├── config/
│ └── config.defaults.toml # Security configuration
@@ -585,28 +585,28 @@ MIT License - See LICENSE file for details

#### Login

-```text
+```bash
provisioning auth login # Interactive password
provisioning auth login --save # Save to keyring
```

#### MFA

-```text
+```bash
provisioning auth mfa enroll totp # Enroll TOTP
provisioning auth mfa verify --code 123456 # Verify code
```

#### Status

-```text
+```bash
provisioning auth status # Show auth status
provisioning auth verify # Verify token
```

#### Logout

-```text
+```bash
provisioning auth logout # Logout current session
provisioning auth logout --all # Logout all sessions
```
@@ -632,7 +632,7 @@ provisioning auth logout --all # Logout all sessions

#### Environment Variable

-```text
+```bash
export PROVISIONING_SKIP_AUTH=true
provisioning server create test
unset PROVISIONING_SKIP_AUTH
@@ -640,14 +640,14 @@ unset PROVISIONING_SKIP_AUTH

#### Check Mode (Always Allowed)

-```text
+```bash
provisioning server create prod --check
provisioning taskserv delete k8s --check
```

#### Config Flag

-```text
+```toml
[security.bypass]
allow_skip_auth = true # Only in dev/test
```

@@ -658,7 +658,7 @@ allow_skip_auth = true # Only in dev/test

#### Security Settings

-```text
+```toml
[security]
require_auth = true
require_mfa_for_production = true

@@ -681,7 +681,7 @@ url = "http://localhost:3000"

####
Not Authenticated

-```text
+```plaintext
❌ Authentication Required
Operation: server create web-01
To login: provisioning auth login
@@ -691,7 +691,7 @@ To login: provisioning auth login

#### MFA Required

-```text
+```plaintext
❌ MFA Verification Required
Operation: server delete web-01
Reason: destructive operation
@@ -701,7 +701,7 @@ Reason: destructive operation

#### Token Expired

-```text
+```plaintext
Error: Token verification failed
```

@@ -723,7 +723,7 @@ Error: Token verification failed

### Audit Logs

-```text
+```bash
# View audit log
cat provisioning/logs/audit.log

@@ -740,20 +740,20 @@ cat provisioning/logs/audit.log | jq '. | select(.operation == "server_create")'

#### Option 1: Skip Auth (Dev/Test Only)

-```text
+```bash
export PROVISIONING_SKIP_AUTH=true
provisioning server create ci-server
```

#### Option 2: Check Mode

-```text
+```bash
provisioning server create ci-server --check
```

#### Option 3: Service Account (Future)

-```text
+```bash
export PROVISIONING_AUTH_TOKEN=""
provisioning server create ci-server
```
@@ -794,7 +794,7 @@ provisioning server create ci-server

Current Settings (from your config)

-```text
+```toml
[security]
require_auth = true # ✅ Auth is REQUIRED
allow_skip_auth = false # ❌ Cannot skip with env var

@@ -808,7 +808,7 @@ url = "http://localhost:3000" # Control Center endpoint

The Control Center is the authentication backend:

-```text
+```bash
# Check if it's already running
curl http://localhost:3000/health

@@ -823,7 +823,7 @@ curl http://localhost:3000/health

Expected Output:

-```text
+```json
{"status": "healthy"}
```

@@ -831,7 +831,7 @@ Expected Output:

Check for default user setup:

-```text
+```bash
# Look for initialization scripts
ls -la /Users/Akasha/project-provisioning/provisioning/platform/control-center/

@@ -846,7 +846,7 @@ cat /Users/Akasha/project-provisioning/provisioning/platform/control-center/conf

Once you have credentials (usually admin / password from setup):

-```text
+```bash
# Interactive login - will
prompt for password
provisioning auth login

@@ -859,7 +859,7 @@ provisioning auth status

Expected Success Output:

-```text
+```plaintext
✓ Login successful!

User: admin
@@ -874,7 +874,7 @@ Session active and ready

Once authenticated:

-```text
+```bash
# Try server creation again
provisioning server create sgoyol --check

@@ -888,13 +888,13 @@ If you want to bypass authentication temporarily for testing:

#### Option A: Edit config to allow skip

-```text
+```toml
# You would need to parse and modify TOML - easier to do next option
```

#### Option B: Use environment variable (if allowed by config)

-```text
+```bash
export PROVISIONING_SKIP_AUTH=true
provisioning server create sgoyol
unset PROVISIONING_SKIP_AUTH
@@ -902,7 +902,7 @@ unset PROVISIONING_SKIP_AUTH

#### Option C: Use check mode (always works, no auth needed)

-```text
+```bash
provisioning server create sgoyol --check
```

@@ -912,7 +912,7 @@ Edit: `provisioning/config/config.defaults.toml`

Change line 193 to:

-```text
+```toml
allow_skip_auth = true
```
diff --git a/docs/src/security/config-encryption-guide.md b/docs/src/security/config-encryption-guide.md
index 6046333..3849181 100644
--- a/docs/src/security/config-encryption-guide.md
+++ b/docs/src/security/config-encryption-guide.md
@@ -61,7 +61,7 @@ The Provisioning Platform includes a comprehensive configuration encryption syst

### Verify Installation

-```text
+```bash
# Check SOPS
sops --version

@@ -80,7 +80,7 @@ aws --version

Generate Age keys and create SOPS configuration:

-```text
+```bash
provisioning config init-encryption --kms age
```

@@ -94,7 +94,7 @@ This will:

Add to your shell profile (`~/.zshrc` or `~/.bashrc`):

-```text
+```bash
# Age encryption
export SOPS_AGE_RECIPIENTS="age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p"
export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt"

@@ -104,13 +104,13 @@ Replace the recipient with your actual public key.

### 3.
Validate Setup -```text +```bash provisioning config validate-encryption ``` Expected output: -```text +```bash ✅ Encryption configuration is valid SOPS installed: true Age backend: true @@ -121,7 +121,7 @@ Expected output: ### 4. Encrypt Your First Config -```text +```bash # Create a config with sensitive data cat > workspace/config/secure.yaml < edit -> re-encrypt) provisioning config edit-secure workspace/config/secure.enc.yaml ``` @@ -221,7 +221,7 @@ This will: ### Check Encryption Status -```text +```bash # Check if file is encrypted provisioning config is-encrypted workspace/config/secure.yaml @@ -244,7 +244,7 @@ provisioning config encryption-info workspace/config/secure.yaml **Setup**: -```text +```bash # Initialize provisioning config init-encryption --kms age @@ -255,7 +255,7 @@ export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt" **Encrypt/Decrypt**: -```text +```bash provisioning config encrypt secrets.yaml --kms age provisioning config decrypt secrets.enc.yaml ``` @@ -288,7 +288,7 @@ provisioning config decrypt secrets.enc.yaml **Encrypt/Decrypt**: -```text +```bash provisioning config encrypt secrets.yaml --kms aws-kms provisioning config decrypt secrets.enc.yaml ``` @@ -325,7 +325,7 @@ provisioning config decrypt secrets.enc.yaml **Encrypt/Decrypt**: -```text +```bash provisioning config encrypt secrets.yaml --kms vault provisioning config decrypt secrets.enc.yaml ``` @@ -357,7 +357,7 @@ provisioning config decrypt secrets.enc.yaml **Encrypt/Decrypt**: -```text +```bash provisioning config encrypt secrets.yaml --kms cosmian provisioning config decrypt secrets.enc.yaml ``` @@ -383,7 +383,7 @@ provisioning config decrypt secrets.enc.yaml ### Examples -```text +```bash # Encrypt workspace config provisioning config encrypt workspace/config/secure.yaml --in-place @@ -414,7 +414,7 @@ provisioning config validate-encryption The config loader automatically detects and decrypts encrypted files: -```text +```nushell # Load encrypted config (automatically
decrypted in memory) use lib_provisioning/config/loader.nu @@ -430,7 +430,7 @@ let config = (load-provisioning-config --debug) ### Manual Loading -```text +```bash use lib_provisioning/config/encryption.nu # Load encrypted config @@ -444,7 +444,7 @@ let decrypted_content = (decrypt-config-memory "workspace/config/secure.enc.yaml The system supports encrypted files at any level: -```text +```bash 1. workspace/{name}/config/provisioning.yaml ← Can be encrypted 2. workspace/{name}/config/providers/*.toml ← Can be encrypted 3. workspace/{name}/config/platform/*.toml ← Can be encrypted @@ -469,7 +469,7 @@ The system supports encrypted files at any level: **Scan for unencrypted sensitive data**: -```text +```bash provisioning config scan-sensitive workspace --recursive ``` @@ -507,7 +507,7 @@ provisioning config scan-sensitive workspace --recursive ### 4. File Organization -```text +```bash workspace/ └── config/ ├── provisioning.yaml # Plain (no secrets) @@ -523,7 +523,7 @@ workspace/ **Add to `.gitignore`**: -```text +```bash # Unencrypted sensitive files **/secrets.yaml **/credentials.yaml @@ -537,7 +537,7 @@ workspace/ **Commit encrypted files**: -```text +```bash # Encrypted files are safe to commit git add workspace/config/secure.enc.yaml git commit -m "Add encrypted configuration" @@ -547,7 +547,7 @@ git commit -m "Add encrypted configuration" **Regular Key Rotation**: -```text +```bash # Generate new Age key age-keygen -o ~/.config/sops/age/keys-new.txt @@ -567,7 +567,7 @@ provisioning config rotate-keys workspace/config/secure.yaml **Track encryption status**: -```text +```bash # Regular scans provisioning config scan-sensitive workspace --recursive @@ -589,13 +589,13 @@ provisioning config validate-encryption **Error**: -```text +```bash SOPS binary not found ``` **Solution**: -```text +```bash # Install SOPS brew install sops @@ -607,13 +607,13 @@ sops --version **Error**: -```text +```bash Age key file not found: ~/.config/sops/age/keys.txt ``` **Solution**: 
-```text +```bash # Generate new key mkdir -p ~/.config/sops/age age-keygen -o ~/.config/sops/age/keys.txt @@ -626,13 +626,13 @@ export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt" **Error**: -```text +```bash no AGE_RECIPIENTS for file.yaml ``` **Solution**: -```text +```bash # Extract public key from private key grep "public key:" ~/.config/sops/age/keys.txt @@ -644,7 +644,7 @@ export SOPS_AGE_RECIPIENTS="age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sf **Error**: -```text +```bash Failed to decrypt configuration file ``` @@ -675,13 +675,13 @@ Failed to decrypt configuration file **Error**: -```text +```bash AccessDeniedException: User is not authorized to perform: kms:Decrypt ``` **Solution**: -```text +```bash # Check AWS credentials aws sts get-caller-identity @@ -693,13 +693,13 @@ aws kms describe-key --key-id **Error**: -```text +```bash Vault encryption failed: connection refused ``` **Solution**: -```text +```bash # Verify Vault address echo $VAULT_ADDR @@ -765,7 +765,7 @@ For issues or questions: ### Setup (One-time) -```text +```bash # 1. 
Initialize encryption provisioning config init-encryption --kms age @@ -802,7 +802,7 @@ Automatically encrypted by SOPS: ### Quick Workflow -```text +```bash # Create config with secrets cat > workspace/config/secure.yaml < #### Scan and Encrypt All -```text +```bash # Find all unencrypted sensitive configs provisioning config scan-sensitive workspace --recursive diff --git a/docs/src/security/kms-service.md b/docs/src/security/kms-service.md index da71ffc..8ae7384 100644 --- a/docs/src/security/kms-service.md +++ b/docs/src/security/kms-service.md @@ -14,7 +14,7 @@ A unified Key Management Service for the Provisioning platform with support for ## Architecture -```text +```bash ┌─────────────────────────────────────────────────────────┐ │ KMS Service │ ├─────────────────────────────────────────────────────────┤ @@ -38,7 +38,7 @@ A unified Key Management Service for the Provisioning platform with support for ### Development Setup (Age) -```text +```bash # 1. Generate Age keys mkdir -p ~/.config/provisioning/age age-keygen -o ~/.config/provisioning/age/private_key.txt @@ -54,7 +54,7 @@ cargo run --bin kms-service ### Production Setup (Cosmian) -```text +```bash # Set environment variables export PROVISIONING_ENV=prod export COSMIAN_KMS_URL=https://your-kms.example.com @@ -68,7 +68,7 @@ cargo run --bin kms-service ### Encrypt Data -```text +```bash curl -X POST http://localhost:8082/api/v1/kms/encrypt -H "Content-Type: application/json" -d '{ @@ -79,7 +79,7 @@ curl -X POST http://localhost:8082/api/v1/kms/encrypt ### Decrypt Data -```text +```bash curl -X POST http://localhost:8082/api/v1/kms/decrypt -H "Content-Type: application/json" -d '{ @@ -90,7 +90,7 @@ curl -X POST http://localhost:8082/api/v1/kms/decrypt ## Nushell CLI Integration -```text +```nushell # Encrypt data "secret-data" | kms encrypt "api-key" | kms encrypt --context "env=prod,service=api" @@ -137,7 +137,7 @@ kms decrypt-file config.yaml.enc ### Docker -```text +```dockerfile FROM rust:1.70 as builder
WORKDIR /app COPY . . @@ -153,7 +153,7 @@ ENTRYPOINT ["kms-service"] ### Kubernetes -```text +```yaml apiVersion: apps/v1 kind: Deployment metadata: diff --git a/docs/src/security/nushell-plugins-guide.md b/docs/src/security/nushell-plugins-guide.md index fe4acad..289476f 100644 --- a/docs/src/security/nushell-plugins-guide.md +++ b/docs/src/security/nushell-plugins-guide.md @@ -38,7 +38,7 @@ Three native Nushell plugins provide high-performance integration with the provi ### Build from Source -```text +```nushell cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins # Build all plugins @@ -54,7 +54,7 @@ cargo build --release -p nu_plugin_orchestrator ### Register with Nushell -```text +```nushell # Register all plugins plugin add target/release/nu_plugin_auth plugin add target/release/nu_plugin_kms @@ -66,7 +66,7 @@ plugin list | where name =~ "provisioning" ### Verify Installation -```text +```nushell # Test auth commands auth --help @@ -101,7 +101,7 @@ Login to provisioning platform and store JWT tokens securely. **Examples**: -```text +```nushell # Interactive password prompt (recommended) auth login admin @@ -124,7 +124,7 @@ Tokens are stored securely in OS-native keyring: **Success Output**: -```text +```nushell ✓ Login successful User: admin Role: Admin @@ -139,7 +139,7 @@ Logout from current session and remove stored tokens. **Examples**: -```text +```nushell # Simple logout auth logout @@ -149,7 +149,7 @@ if (auth verify | get active) { auth logout } **Success Output**: -```text +```nushell ✓ Logged out successfully ``` @@ -161,7 +161,7 @@ Verify current session and check token validity. **Examples**: -```text +```nushell # Check session status auth verify @@ -171,7 +171,7 @@ auth verify | if $in.active { echo "Session valid" } else { echo "Session expire **Success Output**: -```text +```json { "active": true, "user": "admin", @@ -189,7 +189,7 @@ List all active sessions for current user. 
**Examples**: -```text +```nushell # List sessions auth sessions @@ -199,7 +199,7 @@ auth sessions | where created_at > (date now | date to-timezone UTC | into strin **Output Format**: -```text +```nushell [ { "session_id": "sess_abc123", @@ -223,7 +223,7 @@ Enroll in MFA (TOTP or WebAuthn). **Examples**: -```text +```nushell # Enroll TOTP (Google Authenticator, Authy) auth mfa enroll totp @@ -233,7 +233,7 @@ auth mfa enroll webauthn **TOTP Enrollment Output**: -```text +```nushell ✓ TOTP enrollment initiated Scan this QR code with your authenticator app: @@ -265,7 +265,7 @@ Verify MFA code (TOTP or backup code). **Examples**: -```text +```nushell # Verify TOTP code auth mfa verify --code 123456 @@ -275,7 +275,7 @@ auth mfa verify --code ABCD-EFGH-IJKL **Success Output**: -```text +```nushell ✓ MFA verification successful ``` @@ -294,7 +294,7 @@ auth mfa verify --code ABCD-EFGH-IJKL **Common Errors**: -```text +```nushell # "No active session" Error: No active session found → Run: auth login @@ -354,7 +354,7 @@ Encrypt data using KMS. **Examples**: -```text +```nushell # Auto-detect backend from environment kms encrypt "secret data" @@ -373,7 +373,7 @@ kms encrypt "data" --backend rustyvault --key provisioning-main --context "user= **Output Format**: -```text +```nushell vault:v1:abc123def456... ``` @@ -394,7 +394,7 @@ Decrypt KMS-encrypted data. **Examples**: -```text +```nushell # Auto-detect backend kms decrypt "vault:v1:abc123def456..." @@ -410,7 +410,7 @@ kms decrypt "vault:v1:abc123..." --backend rustyvault --context "user=admin" **Output**: -```text +```nushell secret data ``` @@ -427,7 +427,7 @@ Generate data encryption key (DEK) using KMS. **Examples**: -```text +```nushell # Generate AES-256 key kms generate-key @@ -440,7 +440,7 @@ kms generate-key --backend rustyvault **Output Format**: -```text +```json { "plaintext": "base64-encoded-key", "ciphertext": "vault:v1:encrypted-key", @@ -456,7 +456,7 @@ Show KMS backend status and configuration. 
**Examples**: -```text +```nushell # Show status kms status @@ -466,7 +466,7 @@ kms status | where backend == "rustyvault" **Output Format**: -```text +```json { "backend": "rustyvault", "status": "healthy", @@ -482,7 +482,7 @@ kms status | where backend == "rustyvault" **RustyVault Backend**: -```text +```bash export RUSTYVAULT_ADDR="http://localhost:8200" export RUSTYVAULT_TOKEN="your-token-here" export RUSTYVAULT_MOUNT="transit" @@ -490,21 +490,21 @@ export RUSTYVAULT_MOUNT="transit" **Age Backend**: -```text +```bash export AGE_RECIPIENT="age1xxxxxxxxx" export AGE_IDENTITY="/path/to/key.txt" ``` **HTTP Backend (Cosmian)**: -```text +```bash export KMS_HTTP_URL="http://localhost:9998" export KMS_HTTP_BACKEND="cosmian" ``` **AWS KMS**: -```text +```bash export AWS_REGION="us-east-1" export AWS_ACCESS_KEY_ID="..." export AWS_SECRET_ACCESS_KEY="..." @@ -540,7 +540,7 @@ Get orchestrator status from local files (no HTTP). **Examples**: -```text +```nushell # Default data dir orch status @@ -553,7 +553,7 @@ orch status | if $in.active_tasks > 0 { echo "Tasks running" } **Output Format**: -```text +```json { "active_tasks": 5, "completed_tasks": 120, @@ -580,7 +580,7 @@ Validate workflow Nickel file. **Examples**: -```text +```nushell # Basic validation orch validate workflows/deploy.ncl @@ -593,7 +593,7 @@ ls workflows/*.ncl | each { |file| orch validate $file.name } **Output Format**: -```text +```json { "valid": true, "workflow": { @@ -628,7 +628,7 @@ List orchestrator tasks.
**Examples**: -```text +```nushell # All tasks orch tasks @@ -644,7 +644,7 @@ orch tasks --status failed | each { |task| echo $"Failed: ($task.name)" } **Output Format**: -```text +```nushell [ { "task_id": "task_abc123", @@ -682,7 +682,7 @@ orch tasks --status failed | each { |task| echo $"Failed: ($task.name)" } ### Authentication Flow -```text +```nushell # Login and verify in one pipeline auth login admin | if $in.success { auth verify } @@ -691,7 +691,7 @@ auth login admin ### KMS Operations -```text +```nushell # Encrypt multiple secrets ["secret1", "secret2", "secret3"] | each { |data| kms encrypt $data --backend rustyvault } @@ -705,7 +705,7 @@ open encrypted_secrets.json ### Orchestrator Monitoring -```text +```nushell # Monitor running tasks while true { orch tasks --status running @@ -716,7 +716,7 @@ while true { ### Combined Workflow -```text +```nushell # Complete deployment workflow auth login admin | auth mfa verify --code (input "MFA: ") @@ -736,7 +736,7 @@ auth login admin **"No active session"**: -```text +```nushell auth login ``` @@ -747,7 +747,7 @@ auth login **"Keyring error" (Linux)**: -```text +```nushell # Install keyring service sudo apt install gnome-keyring # Ubuntu/Debian sudo dnf install gnome-keyring # Fedora @@ -768,7 +768,7 @@ sudo apt install kwalletmanager **"RustyVault connection failed"**: -```text +```bash # Check RustyVault running curl http://localhost:8200/v1/sys/health @@ -779,7 +779,7 @@ export RUSTYVAULT_TOKEN="your-token" **"Age encryption failed"**: -```text +```nushell # Check Age keys ls -la ~/.age/ @@ -793,7 +793,7 @@ export AGE_IDENTITY="$HOME/.age/key.txt" **"AWS KMS access denied"**: -```text +```nushell # Check AWS credentials aws sts get-caller-identity @@ -807,7 +807,7 @@ aws kms describe-key --key-id alias/provisioning **"Failed to read status"**: -```text +```nushell # Check data directory exists ls provisioning/platform/orchestrator/data/ @@ -817,14 +817,14 @@ mkdir -p provisioning/platform/orchestrator/data
**"Workflow validation failed"**: -```text +```nushell # Use strict mode for detailed errors orch validate workflows/deploy.ncl --strict ``` **"No tasks found"**: -```text +```nushell # Check orchestrator running ps aux | grep orchestrator @@ -839,7 +839,7 @@ cd provisioning/platform/orchestrator ### Building from Source -```text +```nushell cd provisioning/core/plugins/nushell-plugins # Clean build @@ -861,7 +861,7 @@ cargo test --all ### Adding to CI/CD -```text +```nushell name: Build Nushell Plugins on: [push, pull_request] @@ -902,7 +902,7 @@ jobs: Create `~/.config/nushell/plugin_config.nu`: -```text +```nushell # Auth plugin defaults $env.CONTROL_CENTER_URL = "https://control-center.example.com" @@ -918,7 +918,7 @@ $env.ORCHESTRATOR_DATA_DIR = "/opt/orchestrator/data" Add to `~/.config/nushell/config.nu`: -```text +```nushell # Auth shortcuts alias login = auth login alias logout = auth logout diff --git a/docs/src/security/nushell-plugins-system.md b/docs/src/security/nushell-plugins-system.md index 7fe8af0..f2f6c6d 100644 --- a/docs/src/security/nushell-plugins-system.md +++ b/docs/src/security/nushell-plugins-system.md @@ -38,7 +38,7 @@ Native Nushell plugins eliminate HTTP overhead and provide direct Rust-to-Nushel ## Quick Commands -```text +```nushell # Authentication auth login admin auth verify @@ -56,7 +56,7 @@ orch tasks --status running ## Installation -```text +```nushell cd provisioning/core/plugins/nushell-plugins cargo build --release --all diff --git a/docs/src/security/plugin-integration-guide.md b/docs/src/security/plugin-integration-guide.md index ae112db..c26e5bf 100644 --- a/docs/src/security/plugin-integration-guide.md +++ b/docs/src/security/plugin-integration-guide.md @@ -39,7 +39,7 @@ API calls: ### Architecture Benefits -```text +```bash Traditional HTTP Flow: User Command → HTTP Request → Network → Server Processing → Response → Parse JSON Total: ~50-100 ms per operation @@ -83,7 +83,7 @@ Real-world benchmarks from production 
workload: **Scenario**: Encrypt 100 configuration files -```text +```nushell # HTTP API approach ls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file) } @@ -102,7 +102,7 @@ ls configs/*.yaml | each { |file| **1. Native Nushell Integration** -```text +```nushell # HTTP: Parse JSON, check status codes let result = http post http://localhost:9998/encrypt { data: "secret" } if $result.status == "success" { @@ -118,7 +118,7 @@ kms encrypt "secret" **2. Pipeline Friendly** -```text +```bash # HTTP: Requires wrapping, JSON parsing ["secret1", "secret2"] | each { |s| (http post http://localhost:9998/encrypt { data: $s }).encrypted @@ -130,7 +130,7 @@ kms encrypt "secret" **3. Tab Completion** -```text +```bash # All plugin commands have full tab completion kms # → encrypt, decrypt, generate-key, status, backends @@ -175,13 +175,13 @@ kms encrypt -- ### Step 1: Clone or Navigate to Plugin Directory -```text +```bash cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins ``` ### Step 2: Build All Plugins -```text +```bash # Build in release mode (optimized for performance) cargo build --release --all @@ -193,8 +193,7 @@ cargo build --release -p nu_plugin_orchestrator **Expected output:** -```text - Compiling nu_plugin_auth v0.1.0 +``` Compiling nu_plugin_auth v0.1.0 Compiling nu_plugin_kms v0.1.0 Compiling nu_plugin_orchestrator v0.1.0 Finished release [optimized] target(s) in 2m 15s @@ -202,7 +201,7 @@ cargo build --release -p nu_plugin_orchestrator ### Step 3: Register Plugins with Nushell -```text +```nushell # Register all three plugins plugin add target/release/nu_plugin_auth plugin add target/release/nu_plugin_kms plugin add target/release/nu_plugin_orchestrator @@ -216,7 +215,7 @@ plugin add $PWD/target/release/nu_plugin_orchestrator ### Step 4: Verify Installation -```text +```bash # List registered plugins plugin list | where name =~ "auth|kms|orch" @@ -228,7 +227,7 @@ orch --help **Expected output:** -```text +```bash
╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮ │ # │ name │ version │ filename │ ├───┼─────────────────────────┼─────────┼───────────────────────────────────┤ @@ -240,7 +239,7 @@ orch --help ### Step 5: Configure Environment (Optional) -```text +```nushell # Add to ~/.config/nushell/env.nu $env.RUSTYVAULT_ADDR = "http://localhost:8200" $env.RUSTYVAULT_TOKEN = "your-vault-token" @@ -254,7 +253,7 @@ $env.ORCHESTRATOR_DATA_DIR = "/opt/orchestrator/data" ### 1. Authentication Workflow -```text +```bash # Login (password prompted securely) auth login admin # ✓ Login successful @@ -286,7 +285,7 @@ auth logout ### 2. KMS Operations -```text +```bash # Encrypt data kms encrypt "my secret data" # vault:v1:8GawgGuP... @@ -309,7 +308,7 @@ kms encrypt "data" --backend age --key age1xxxxxxx ### 3. Orchestrator Operations -```text +```bash # Check orchestrator status (no HTTP call) orch status # { @@ -332,7 +331,7 @@ orch tasks --status running ### 4. Combined Workflow -```text +```bash # Complete authenticated deployment pipeline auth login admin | if $in.success { auth verify } @@ -381,7 +380,7 @@ Login to provisioning platform and store JWT tokens securely in OS keyring. **Examples:** -```text +```bash # Interactive password prompt (recommended) auth login admin # Password: •••••••• @@ -419,7 +418,7 @@ Logout from current session and remove stored tokens from keyring. **Examples:** -```text +```bash # Simple logout auth logout # ✓ Logged out successfully @@ -450,7 +449,7 @@ Verify current session status and check token validity. **Examples:** -```text +```bash # Check if logged in auth verify # { @@ -482,7 +481,7 @@ List all active sessions for current user. **Examples:** -```text +```bash # List all sessions auth sessions # [ @@ -515,7 +514,7 @@ Enroll in Multi-Factor Authentication (TOTP or WebAuthn).
**TOTP Enrollment:** -```text +```bash auth mfa enroll totp # ✓ TOTP enrollment initiated # @@ -539,7 +538,7 @@ auth mfa enroll totp **WebAuthn Enrollment:** -```text +```bash auth mfa enroll webauthn # ✓ WebAuthn enrollment initiated # @@ -577,7 +576,7 @@ Verify MFA code (TOTP or backup code). **Examples:** -```text +```bash # Verify TOTP code auth mfa verify --code 123456 # ✓ MFA verification successful @@ -594,7 +593,7 @@ auth mfa verify --code $code **Error Cases:** -```text +```bash # Invalid code auth mfa verify --code 999999 # Error: Invalid MFA code @@ -623,14 +622,14 @@ auth mfa verify --code 123456 **"No active session"** -```text +```bash # Solution: Login first auth login ``` **"Keyring error" (macOS)** -```text +```bash # Check Keychain Access permissions # System Preferences → Security & Privacy → Privacy → Full Disk Access # Add: /Applications/Nushell.app (or /usr/local/bin/nu) @@ -641,7 +640,7 @@ security unlock-keychain ~/Library/Keychains/login.keychain-db **"Keyring error" (Linux)** -```text +```bash # Install keyring service sudo apt install gnome-keyring # Ubuntu/Debian sudo dnf install gnome-keyring # Fedora @@ -657,7 +656,7 @@ export $(gnome-keyring-daemon --start --components=secrets) **"MFA verification failed"** -```text +```bash # Check time synchronization (TOTP requires accurate time) # macOS: sudo sntp -sS time.apple.com @@ -750,7 +749,7 @@ Encrypt data using specified KMS backend. **Examples:** -```text +```bash # Auto-detect backend from environment kms encrypt "secret configuration data" # vault:v1:8GawgGuP+emDKX5q... @@ -804,7 +803,7 @@ Decrypt KMS-encrypted data. **Examples:** -```text +```bash # Auto-detect backend from format kms decrypt "vault:v1:8GawgGuP..." # secret configuration data @@ -838,7 +837,7 @@ open secrets.json **Error Cases:** -```text +```bash # Invalid ciphertext kms decrypt "invalid_data" # Error: Invalid ciphertext format @@ -866,7 +865,7 @@ Generate data encryption key (DEK) using KMS envelope encryption. 
**Examples:** -```text +```bash # Generate AES-256 key kms generate-key # { @@ -904,7 +903,7 @@ Show KMS backend status, configuration, and health. **Examples:** -```text +```bash # Show current backend status kms status # { @@ -939,7 +938,7 @@ if (kms status | get status) == "healthy" { #### RustyVault Backend -```text +```bash # Environment variables export RUSTYVAULT_ADDR="http://localhost:8200" export RUSTYVAULT_TOKEN="hvs.xxxxxxxxxxxxx" @@ -947,14 +946,14 @@ export RUSTYVAULT_MOUNT="transit" # Transit engine mount point export RUSTYVAULT_KEY="provisioning-main" # Default key name ``` -```text +```bash # Usage kms encrypt "data" --backend rustyvault --key provisioning-main ``` **Setup RustyVault:** -```text +```bash # Start RustyVault rustyvault server -dev @@ -967,7 +966,7 @@ rustyvault write -f transit/keys/provisioning-main #### Age Backend -```text +```bash # Generate Age keypair age-keygen -o ~/.age/key.txt @@ -976,7 +975,7 @@ export AGE_IDENTITY="$HOME/.age/key.txt" # Private key export AGE_RECIPIENT="age1xxxxxxxxx" # Public key (from key.txt) ``` -```text +```bash # Usage kms encrypt "data" --backend age kms decrypt (open file.enc) --backend age @@ -984,7 +983,7 @@ kms decrypt (open file.enc) --backend age #### AWS KMS Backend -```text +```bash # AWS credentials export AWS_REGION="us-east-1" export AWS_ACCESS_KEY_ID="AKIAXXXXX" @@ -994,14 +993,14 @@ export AWS_SECRET_ACCESS_KEY="xxxxx" export AWS_KMS_KEY_ID="alias/provisioning" ``` -```text +```bash # Usage kms encrypt "data" --backend aws --key alias/provisioning ``` **Setup AWS KMS:** -```text +```bash # Create KMS key aws kms create-key --description "Provisioning Platform" @@ -1015,21 +1014,21 @@ aws kms create-grant --key-id --grantee-principal #### Cosmian Backend -```text +```bash # Cosmian KMS configuration export KMS_HTTP_URL="http://localhost:9998" export KMS_HTTP_BACKEND="cosmian" export COSMIAN_API_KEY="your-api-key" ``` -```text +```bash # Usage kms encrypt "data" --backend cosmian ``` #### 
Vault Backend (HashiCorp) -```text +```bash # Vault configuration export VAULT_ADDR="https://vault.example.com:8200" export VAULT_TOKEN="hvs.xxxxxxxxxxxxx" @@ -1037,7 +1036,7 @@ export VAULT_MOUNT="transit" export VAULT_KEY="provisioning" ``` -```text +```bash # Usage kms encrypt "data" --backend vault --key provisioning ``` @@ -1063,7 +1062,7 @@ kms encrypt "data" --backend vault --key provisioning **Scaling Test (1000 operations):** -```text +```bash # RustyVault: ~5 seconds 0..1000 | each { |_| kms encrypt "data" --backend rustyvault } | length # Age: ~3 seconds @@ -1074,7 +1073,7 @@ kms encrypt "data" --backend vault --key provisioning **"RustyVault connection failed"** -```text +```bash # Check RustyVault is running curl http://localhost:8200/v1/sys/health # Expected: { "initialized": true, "sealed": false } @@ -1089,7 +1088,7 @@ curl -H "X-Vault-Token: $RUSTYVAULT_TOKEN" $RUSTYVAULT_ADDR/v1/sys/health **"Age encryption failed"** -```text +```bash # Check Age keys exist ls -la ~/.age/ # Expected: key.txt @@ -1107,7 +1106,7 @@ echo $AGE_RECIPIENT **"AWS KMS access denied"** -```text +```bash # Verify AWS credentials aws sts get-caller-identity # Expected: Account, UserId, Arn @@ -1145,7 +1144,7 @@ Get orchestrator status from local files (no HTTP, ~1 ms latency). **Examples:** -```text +```bash # Default data directory orch status # { @@ -1187,7 +1186,7 @@ Validate workflow Nickel file syntax and structure. **Examples:** -```text +```bash # Basic validation orch validate workflows/deploy.ncl # { @@ -1250,7 +1249,7 @@ List orchestrator tasks from local state.
**Examples:** -```text +```bash # All tasks (last 100) orch tasks # [ @@ -1302,7 +1301,7 @@ orch tasks | group-by status | each { |group| **Use Case: CI/CD Pipeline** -```text +```bash # HTTP approach (slow) http get http://localhost:9090/tasks --status running | each { |task| http get $"http://localhost:9090/tasks/($task.id)" } @@ -1318,7 +1317,7 @@ orch tasks --status running **"Failed to read status"** -```text +```bash # Check data directory exists ls -la provisioning/platform/orchestrator/data/ @@ -1331,7 +1330,7 @@ chmod 755 provisioning/platform/orchestrator/data **"Workflow validation failed"** -```text +```bash # Use strict mode for detailed errors orch validate workflows/deploy.ncl --strict @@ -1342,7 +1341,7 @@ nickel eval workflows/deploy.ncl **"No tasks found"** -```text +```bash # Check orchestrator running ps aux | grep orchestrator @@ -1362,7 +1361,7 @@ ls provisioning/platform/orchestrator/data/tasks/ Full workflow with authentication, secrets, and deployment: -```text +```bash # Step 1: Login with MFA auth login admin auth mfa verify --code (input "MFA code: ") @@ -1401,7 +1400,7 @@ echo "✓ Deployment complete" Rotate all secrets in multiple environments: -```text +```bash # Rotate database passwords ["dev", "staging", "production"] | each { |env| # Generate new password @@ -1425,7 +1424,7 @@ Rotate all secrets in multiple environments: Deploy to multiple environments with validation: -```text +```bash # Define environments let environments = [ { name: "dev", validate: "basic" }, @@ -1470,7 +1469,7 @@ $environments | each { |env| Backup configuration files with encryption: -```text +```nushell # Backup script let backup_dir = $"backups/(date now | format date "%Y%m%d-%H%M%S")" mkdir $backup_dir @@ -1497,7 +1496,7 @@ echo $"✓ Backup complete: ($backup_dir)" Real-time health monitoring: -```text +```bash # Health dashboard while true { clear @@ -1557,7 +1556,7 @@ while true { **1.
Batch Operations** -```text +```bash # ❌ Slow: Individual HTTP calls in loop ls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file.name) } @@ -1573,7 +1572,7 @@ ls configs/*.yaml | each { |file| **2. Parallel Processing** -```text +```bash # Process multiple operations in parallel ls configs/*.yaml | par-each { |file| @@ -1583,7 +1582,7 @@ ls configs/*.yaml **3. Caching Session State** -```text +```bash # Cache auth verification let $auth_cache = auth verify if $auth_cache.active { @@ -1596,7 +1595,7 @@ if $auth_cache.active { **Graceful Degradation:** -```text +```bash # Try plugin, fallback to HTTP if unavailable def kms_encrypt [data: string] { try { @@ -1609,7 +1608,7 @@ def kms_encrypt [data: string] { **Comprehensive Error Handling:** -```text +```bash # Handle all error cases def safe_deployment [] { # Check authentication @@ -1647,7 +1646,7 @@ def safe_deployment [] { **1. Never Log Decrypted Data** -```text +```bash # ❌ BAD: Logs plaintext password let password = kms decrypt $encrypted_password echo $"Password: ($password)" # Visible in logs! @@ -1659,7 +1658,7 @@ psql --dbname mydb --password $password # Not logged **2. Use Context (AAD) for Critical Data** -```text +```bash # Encrypt with context let context = $"user=(whoami),env=production,date=(date now | format date "%Y-%m-%d")" kms encrypt $sensitive_data --context $context @@ -1670,7 +1669,7 @@ kms decrypt $encrypted --context $context **3. Rotate Backup Codes** -```text +```bash # After using backup code, generate new set auth mfa verify --code ABCD-EFGH-IJKL # Warning: Backup code used @@ -1680,7 +1679,7 @@ auth mfa regenerate-backups **4. 
Limit Token Lifetime** -```text +```bash # Check token expiration before long operations let session = auth verify let expires_in = (($session.expires_at | into datetime) - (date now)) @@ -1698,7 +1697,7 @@ if $expires_in < 5 min { **"Plugin not found"** -```text +```bash # Check plugin registration plugin list | where name =~ "auth|kms|orch" @@ -1715,7 +1714,7 @@ nu **"Plugin command failed"** -```text +```bash # Enable debug mode $env.RUST_LOG = "debug" @@ -1728,7 +1727,7 @@ plugin list | where name =~ "kms" | select name version **"Permission denied"** -```text +```bash # Check plugin executable permissions ls -l provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_* # Should show: -rwxr-xr-x @@ -1741,7 +1740,7 @@ chmod +x provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_* **macOS Issues:** -```text +```bash # "cannot be opened because the developer cannot be verified" xattr -d com.apple.quarantine target/release/nu_plugin_auth xattr -d com.apple.quarantine target/release/nu_plugin_kms @@ -1754,7 +1753,7 @@ xattr -d com.apple.quarantine target/release/nu_plugin_orchestrator **Linux Issues:** -```text +```bash # Keyring service not running systemctl --user status gnome-keyring-daemon systemctl --user start gnome-keyring-daemon @@ -1766,7 +1765,7 @@ sudo dnf install openssl-devel # Fedora **Windows Issues:** -```text +```bash # Credential Manager access denied # Control Panel → User Accounts → Credential Manager # Ensure Windows Credential Manager service is running @@ -1779,7 +1778,7 @@ sudo dnf install openssl-devel # Fedora **Enable Verbose Logging:** -```text +```bash # Set log level $env.RUST_LOG = "debug,nu_plugin_auth=trace" @@ -1791,7 +1790,7 @@ auth login admin **Test Plugin Directly:** -```text +```bash # Test plugin communication (advanced) echo '{"Call": [0, {"name": "auth", "call": "login", "args": ["admin", "password"]}]}' | target/release/nu_plugin_auth @@ -1799,7 +1798,7 @@ echo '{"Call": [0, {"name": "auth", "call": 
"login", "args": ["admin", "password **Check Plugin Health:** -```text +```bash # Test each plugin auth --help # Should show auth commands kms --help # Should show kms commands @@ -1819,7 +1818,7 @@ orch status # Should return orchestrator status **Phase 1: Install Plugins (No Breaking Changes)** -```text +```bash # Build and register plugins cd provisioning/core/plugins/nushell-plugins cargo build --release --all @@ -1833,7 +1832,7 @@ http get http://localhost:9090/health **Phase 2: Update Scripts Incrementally** -```text +```bash # Before (HTTP) def encrypt_config [file: string] { let data = open $file @@ -1856,7 +1855,7 @@ def encrypt_config [file: string] { **Phase 3: Test Migration** -```text +```bash # Run side-by-side comparison def test_migration [] { let test_data = "test secret data" @@ -1879,7 +1878,7 @@ def test_migration [] { **Phase 4: Gradual Rollout** -```text +```bash # Use feature flag for controlled rollout $env.USE_PLUGINS = true @@ -1894,7 +1893,7 @@ def encrypt_with_flag [data: string] { **Phase 5: Full Migration** -```text +```bash # Replace all HTTP calls with plugin calls # Remove fallback logic once stable def encrypt_config [file: string] { @@ -1905,7 +1904,7 @@ def encrypt_config [file: string] { ### Rollback Strategy -```text +```bash # If issues arise, quickly rollback def rollback_to_http [] { # Remove plugin registrations @@ -1924,7 +1923,7 @@ def rollback_to_http [] { ### Custom Plugin Paths -```text +```bash # ~/.config/nushell/config.nu $env.PLUGIN_PATH = "/opt/provisioning/plugins" @@ -1936,7 +1935,7 @@ plugin add $"($env.PLUGIN_PATH)/nu_plugin_orchestrator" ### Environment-Specific Configuration -```text +```toml # ~/.config/nushell/env.nu # Development environment @@ -1960,7 +1959,7 @@ if ($env.ENV? 
== "prod") { ### Plugin Aliases -```text +```nushell # ~/.config/nushell/config.nu # Auth shortcuts @@ -1980,7 +1979,7 @@ alias validate = orch validate ### Custom Commands -```text +```nushell # ~/.config/nushell/custom_commands.nu # Encrypt all files in directory @@ -2035,7 +2034,7 @@ def watch-deployments [] { **1. Verify Plugin Integrity** -```text +```bash # Check plugin signatures (if available) sha256sum target/release/nu_plugin_auth # Compare with published checksums @@ -2048,7 +2047,7 @@ cargo build --release --all **2. Restrict Plugin Access** -```text +```bash # Set plugin permissions (only owner can execute) chmod 700 target/release/nu_plugin_* @@ -2061,7 +2060,7 @@ mv target/release/nu_plugin_* /opt/provisioning/plugins/ **3. Audit Plugin Usage** -```text +```nushell # Log plugin calls (for compliance) def logged_encrypt [data: string] { let timestamp = date now @@ -2073,7 +2072,7 @@ def logged_encrypt [data: string] { **4. Rotate Credentials Regularly** -```text +```nushell # Weekly credential rotation script def rotate_credentials [] { # Re-authenticate @@ -2104,7 +2103,7 @@ dev). A: Yes, plugins work great in CI/CD. For headless environments (no keyring), use environment variables for auth or file-based tokens. -```text +```bash # CI/CD example export CONTROL_CENTER_TOKEN="jwt-token-here" kms encrypt "data" --backend age @@ -2114,7 +2113,7 @@ kms encrypt "data" --backend age A: Rebuild and re-register: -```text +```bash cd provisioning/core/plugins/nushell-plugins git pull cargo build --release --all @@ -2127,7 +2126,7 @@ plugin add --force target/release/nu_plugin_orchestrator A: Yes, specify `--backend` for each operation: -```text +```bash kms encrypt "data1" --backend rustyvault kms encrypt "data2" --backend age kms encrypt "data3" --backend aws @@ -2145,7 +2144,7 @@ A: Plugins require Nushell 0.107.1+. For older versions, use HTTP API. A: Save backup codes securely (password manager, encrypted file). QR code can be re-scanned from the same secret.
-```text +```nushell # Save backup codes auth mfa enroll totp | save mfa-backup-codes.txt kms encrypt (open mfa-backup-codes.txt) | save mfa-backup-codes.enc @@ -2165,7 +2164,7 @@ A: Partially: A: Use Nushell's timing: -```text +```nushell timeit { kms encrypt "data" } # 5 ms 123μs 456 ns diff --git a/docs/src/security/plugin-usage-guide.md b/docs/src/security/plugin-usage-guide.md index 1eddcaa..7d480a1 100644 --- a/docs/src/security/plugin-usage-guide.md +++ b/docs/src/security/plugin-usage-guide.md @@ -20,7 +20,7 @@ HTTP-based operations: Run the installation script in a new Nushell session: -```text +```nushell nu provisioning/core/plugins/install-and-register.nu ``` @@ -34,7 +34,7 @@ This will: If the script doesn't work, run these commands: -```text +```bash # Copy plugins cp provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/nu_plugin_auth ~/.local/share/nushell/plugins/ cp provisioning/core/plugins/nushell-plugins/nu_plugin_kms/target/release/nu_plugin_kms ~/.local/share/nushell/plugins/ @@ -56,7 +56,7 @@ plugin add ~/.local/share/nushell/plugins/nu_plugin_orchestrator #### Login -```text +```bash provisioning auth login [password] # Examples @@ -67,7 +67,7 @@ provisioning auth login --url http://localhost:8081 admin #### Verify Token -```text +```bash provisioning auth verify [--local] # Examples @@ -77,7 +77,7 @@ provisioning auth verify --local #### Logout -```text +```bash provisioning auth logout # Example @@ -86,7 +86,7 @@ provisioning auth logout #### List Sessions -```text +```bash provisioning auth sessions [--active] # Examples @@ -102,7 +102,7 @@ Supports multiple backends: RustyVault, Age, AWS KMS, HashiCorp Vault, Cosmian #### Encrypt Data -```text +```bash provisioning kms encrypt [--backend ] [--key ] # Examples @@ -113,7 +113,7 @@ provisioning kms encrypt "secret" --backend rustyvault --key my-key #### Decrypt Data -```text +```bash provisioning kms decrypt [--backend ] [--key ] # Examples @@ -123,7 +123,7 @@ provisioning
kms decrypt $encrypted --backend age #### KMS Status -```text +```bash provisioning kms status # Output shows current backend and availability @@ -131,7 +131,7 @@ provisioning kms status #### List Backends -```text +```bash provisioning kms list-backends # Shows all available KMS backends @@ -145,7 +145,7 @@ Local file-based orchestration without network overhead. #### Check Status -```text +```bash provisioning orch status [--data-dir ] # Examples @@ -155,7 +155,7 @@ provisioning orch status --data-dir /custom/data #### List Tasks -```text +```bash provisioning orch tasks [--status ] [--limit ] [--data-dir ] # Examples @@ -166,7 +166,7 @@ provisioning orch tasks --status running --limit 10 #### Validate Workflow -```text +```bash provisioning orch validate [--strict] # Examples @@ -176,7 +176,7 @@ provisioning orch validate workflows/deployment.ncl --strict #### Submit Workflow -```text +```bash provisioning orch submit [--priority <0-100>] [--check] # Examples @@ -187,7 +187,7 @@ provisioning orch submit workflows/test.ncl --check #### Monitor Task -```text +```bash provisioning orch monitor [--once] [--interval ] [--timeout ] # Examples @@ -200,7 +200,7 @@ provisioning orch monitor task-456 --interval 5000 --timeout 600 Check which plugins are installed: -```text +```bash provisioning plugin status # Output: @@ -215,7 +215,7 @@ provisioning plugin status ### Testing Plugins -```text +```bash provisioning plugin test # Runs quick tests on all installed plugins @@ -224,7 +224,7 @@ provisioning plugin test ### List Registered Plugins -```text +```bash provisioning plugin list # Shows all provisioning plugins registered with Nushell @@ -245,7 +245,7 @@ provisioning plugin list If plugins are not installed or fail to load, all commands automatically fall back to HTTP-based operations: -```text +```bash # With plugins installed (fast) $ provisioning auth verify Token is valid @@ -272,13 +272,13 @@ Make sure you: If you see "command not found" when running 
`provisioning auth login`, the auth plugin is not loaded. Run: -```text +```bash plugin list | grep nu_plugin ``` If you don't see the plugins, register them: -```text +```bash plugin add ~/.local/share/nushell/plugins/nu_plugin_auth plugin add ~/.local/share/nushell/plugins/nu_plugin_kms plugin add ~/.local/share/nushell/plugins/nu_plugin_orchestrator @@ -288,7 +288,7 @@ plugin add ~/.local/share/nushell/plugins/nu_plugin_orchestrator Check the plugin logs: -```text +```bash provisioning plugin test ``` @@ -298,7 +298,7 @@ If a plugin fails, the system will automatically fall back to HTTP mode. All plugin commands are integrated into the main provisioning CLI: -```text +```bash # Shortcuts available provisioning auth login admin # Full command provisioning login admin # Alias @@ -316,7 +316,7 @@ provisioning orch-status # Alias For orchestrator operations, specify custom data directory: -```text +```bash provisioning orch status --data-dir /custom/orchestrator/data provisioning orch tasks --data-dir /custom/orchestrator/data ``` @@ -325,7 +325,7 @@ provisioning orch tasks --data-dir /custom/orchestrator/data For auth operations with custom endpoint: -```text +```bash provisioning auth login admin --url http://custom-auth-server:8081 provisioning auth verify --url http://custom-auth-server:8081 ``` @@ -334,7 +334,7 @@ provisioning auth verify --url http://custom-auth-server:8081 Specify which KMS backend to use: -```text +```bash # Use Age encryption provisioning kms encrypt "data" --backend age @@ -352,7 +352,7 @@ provisioning kms decrypt $encrypted --backend age If you need to rebuild plugins: -```text +```bash cd provisioning/core/plugins/nushell-plugins # Build auth plugin diff --git a/docs/src/security/rustyvault-kms-guide.md b/docs/src/security/rustyvault-kms-guide.md index 9761eee..f1d98dd 100644 --- a/docs/src/security/rustyvault-kms-guide.md +++ b/docs/src/security/rustyvault-kms-guide.md @@ -24,7 +24,7 @@ RustyVault as a KMS backend alongside Age, 
Cosmian, AWS KMS, and HashiCorp Vault ## Architecture Position -```text +```text KMS Service Backends: ├── Age (local development, file-based) ├── Cosmian (privacy-preserving, production) @@ -39,7 +39,7 @@ KMS Service Backends: ### Option 1: Standalone RustyVault Server -```text +```bash # Install RustyVault binary cargo install rusty_vault @@ -49,7 +49,7 @@ rustyvault server -config=/path/to/config.hcl ### Option 2: Docker Deployment -```text +```bash # Pull RustyVault image (if available) docker pull tongsuo/rustyvault:latest @@ -64,7 +64,7 @@ docker run -d ### Option 3: From Source -```text +```bash # Clone repository git clone https://github.com/Tongsuo-Project/RustyVault.git cd RustyVault @@ -82,7 +82,7 @@ cargo build --release Create `rustyvault-config.hcl`: -```text +```hcl # RustyVault Server Configuration storage "file" { @@ -104,7 +104,7 @@ max_lease_ttl = "720h" ### Initialize RustyVault -```text +```bash # Initialize (first time only) export VAULT_ADDR='http://127.0.0.1:8200' rustyvault operator init @@ -120,7 +120,7 @@ export RUSTYVAULT_TOKEN='' ### Enable Transit Engine -```text +```bash # Enable transit secrets engine rustyvault secrets enable transit @@ -137,7 +137,7 @@ rustyvault read transit/keys/provisioning-main ### Update `provisioning/config/kms.toml` -```text +```toml [kms] type = "rustyvault" server_url = "http://localhost:8200" @@ -157,7 +157,7 @@ enabled = false # Set true with HTTPS ### Environment Variables -```text +```bash # RustyVault connection export RUSTYVAULT_ADDR="http://localhost:8200" export RUSTYVAULT_TOKEN="s.xxxxxxxxxxxxxxxxxxxxxx" @@ -176,7 +176,7 @@ export KMS_BIND_ADDR="0.0.0.0:8081" ### Start KMS Service -```text +```bash # With RustyVault backend cd provisioning/platform/kms-service cargo run @@ -187,7 +187,7 @@ cargo run -- --config=/path/to/kms.toml ### CLI Operations -```text +```bash # Encrypt configuration file provisioning kms encrypt provisioning/config/secrets.yaml @@ -203,7 +203,7 @@ provisioning kms health
### REST API Usage -```text +```bash # Health check curl http://localhost:8081/health @@ -237,7 +237,7 @@ curl -X POST http://localhost:8081/datakey/generate Additional authenticated data binds encrypted data to specific contexts: -```text +```bash # Encrypt with context curl -X POST http://localhost:8081/encrypt -d '{ @@ -257,7 +257,7 @@ curl -X POST http://localhost:8081/decrypt For large files, use envelope encryption: -```text +```bash # 1. Generate data key DATA_KEY=$(curl -X POST http://localhost:8081/datakey/generate -d '{"key_spec": "AES_256"}' | jq -r '.plaintext') @@ -271,7 +271,7 @@ echo "vault:v1:..." > encrypted-data-key.txt ### Key Rotation -```text +```bash # Rotate encryption key in RustyVault rustyvault write -f transit/keys/provisioning-main/rotate @@ -291,7 +291,7 @@ curl -X POST http://localhost:8081/rewrap Deploy multiple RustyVault instances behind a load balancer: -```text +```yaml # docker-compose.yml version: '3.8' @@ -329,7 +329,7 @@ volumes: ### TLS Configuration -```text +```toml # kms.toml [kms] type = "rustyvault" @@ -346,7 +346,7 @@ ca_path = "/etc/kms/certs/ca.crt" ### Auto-Unseal (AWS KMS) -```text +```hcl # rustyvault-config.hcl seal "awskms" { region = "us-east-1" @@ -360,7 +360,7 @@ seal "awskms" { ### Health Checks -```text +```bash # RustyVault health curl http://localhost:8200/v1/sys/health @@ -375,7 +375,7 @@ curl http://localhost:8081/metrics Enable audit logging in RustyVault: -```text +```hcl # rustyvault-config.hcl audit { path = "/vault/logs/audit.log" @@ -391,7 +391,7 @@ audit { **1. Connection Refused** -```text +```bash # Check RustyVault is running curl http://localhost:8200/v1/sys/health @@ -402,7 +402,7 @@ rustyvault token lookup **2. Authentication Failed** -```text +```bash # Verify token in environment echo $RUSTYVAULT_TOKEN @@ -412,7 +412,7 @@ rustyvault token renew **3.
Key Not Found** -```text +```bash # List available keys rustyvault list transit/keys @@ -422,7 +422,7 @@ rustyvault write -f transit/keys/provisioning-main **4. TLS Verification Failed** -```text +```bash # Disable TLS verification (dev only) export RUSTYVAULT_TLS_VERIFY=false @@ -438,7 +438,7 @@ export RUSTYVAULT_CACERT=/path/to/ca.crt RustyVault is API-compatible, minimal changes required: -```text +```toml # Old config (Vault) [kms] type = "vault" @@ -456,7 +456,7 @@ token = "${RUSTYVAULT_TOKEN}" Re-encrypt existing encrypted files: -```text +```bash # 1. Decrypt with Age provisioning kms decrypt --backend age secrets.enc > secrets.plain @@ -481,7 +481,7 @@ provisioning kms encrypt --backend rustyvault secrets.plain > secrets.rustyvault Create restricted policy for KMS service: -```text +```hcl # kms-policy.hcl path "transit/encrypt/provisioning-main" { capabilities = ["update"] @@ -498,7 +498,7 @@ path "transit/datakey/plaintext/provisioning-main" { Apply policy: -```text +```bash rustyvault policy write kms-service kms-policy.hcl rustyvault token create -policy=kms-service ``` diff --git a/docs/src/security/secrets-management-guide.md b/docs/src/security/secrets-management-guide.md index 98bd06c..d840a78 100644 --- a/docs/src/security/secrets-management-guide.md +++ b/docs/src/security/secrets-management-guide.md @@ -30,7 +30,7 @@ Age-based encrypted secrets file with YAML structure. **Environment Variables**: -```text +```bash PROVISIONING_SECRET_SOURCE=sops PROVISIONING_SOPS_ENABLED=true PROVISIONING_SOPS_SECRETS_FILE=/path/to/secrets.enc.yaml @@ -39,7 +39,7 @@ PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning **Secrets File Structure** (provisioning/secrets.enc.yaml): -```text +```yaml # Encrypted with sops ssh: web-01: @@ -51,7 +51,7 @@ ssh: **Setup Instructions**: -```text +```bash # 1. Install sops and age brew install sops age @@ -98,7 +98,7 @@ AWS KMS or compatible key management service.
**Environment Variables**: -```text +```bash PROVISIONING_SECRET_SOURCE=kms PROVISIONING_KMS_ENABLED=true PROVISIONING_KMS_REGION=us-east-1 @@ -106,13 +106,13 @@ PROVISIONING_KMS_REGION=us-east-1 **Secret Storage Pattern**: -```text +```text provisioning/ssh-keys/{hostname}/{username} ``` **Setup Instructions**: -```text +```bash # 1. Create KMS key (one-time) aws kms create-key --description "Provisioning SSH Keys" @@ -154,7 +154,7 @@ Self-hosted or managed Vault instance for secrets. **Environment Variables**: -```text +```bash PROVISIONING_SECRET_SOURCE=vault PROVISIONING_VAULT_ENABLED=true PROVISIONING_VAULT_ADDRESS=http://localhost:8200 @@ -163,14 +163,14 @@ PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ... **Secret Storage Pattern**: -```text +```text GET /v1/secret/ssh-keys/{hostname}/{username} # Returns: {"key_content": "-----BEGIN OPENSSH PRIVATE KEY-----..."} ``` **Setup Instructions**: -```text +```bash # 1. Start Vault (if not already running) docker run -p 8200:8200 -e VAULT_DEV_ROOT_TOKEN_ID=provisioning @@ -215,7 +215,7 @@ Local filesystem SSH keys (development only). **Environment Variables**: -```text +```bash PROVISIONING_ENVIRONMENT=local-dev ``` @@ -232,7 +232,7 @@ Standard paths checked (in order): When `PROVISIONING_SECRET_SOURCE` is not explicitly set, the system auto-detects in this order: -```text +```text 1. PROVISIONING_SOPS_ENABLED=true or PROVISIONING_SOPS_SECRETS_FILE set? → Use SOPS 2. PROVISIONING_KMS_ENABLED=true or PROVISIONING_KMS_REGION set?
@@ -256,7 +256,7 @@ When `PROVISIONING_SECRET_SOURCE` is not explicitly set, the system auto-detects ### Minimal Setup (Single Source) -```text +```bash # Using Vault (recommended for self-hosted) export PROVISIONING_SECRET_SOURCE=vault export PROVISIONING_VAULT_ADDRESS=https://vault.example.com:8200 @@ -266,7 +266,7 @@ export PROVISIONING_ENVIRONMENT=production ### Enhanced Setup (Fallback Chain) -```text +```bash # Primary: Vault export PROVISIONING_VAULT_ADDRESS=https://vault.primary.com:8200 export PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ... @@ -282,7 +282,7 @@ export PROVISIONING_SECRET_SOURCE=vault # Explicit: use Vault first ### High-Availability Setup -```text +```bash # Use KMS (managed service) export PROVISIONING_SECRET_SOURCE=kms export PROVISIONING_KMS_REGION=us-east-1 @@ -298,7 +298,7 @@ export PROVISIONING_ENVIRONMENT=production ### Check Configuration -```text +```nushell # Nushell provisioning secrets status @@ -311,7 +311,7 @@ provisioning secrets diagnose ### Test SSH Key Retrieval -```text +```bash # Test specific host/user provisioning secrets get-key web-01 ubuntu @@ -326,7 +326,7 @@ provisioning ssh --test-key web-01 ubuntu ### From Local-Dev to SOPS -```text +```bash # 1. Create SOPS secrets file with existing keys cat > secrets.yaml << 'EOF' ssh: @@ -350,7 +350,7 @@ export PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning ### From SOPS to Vault -```text +```bash # 1. Decrypt SOPS file sops -d provisioning/secrets.enc.yaml > /tmp/secrets.yaml @@ -370,7 +370,7 @@ provisioning secrets validate-all ### 1. Never Commit Secrets -```text +```bash # Add to .gitignore echo "provisioning/secrets.enc.yaml" >> .gitignore echo ".age/provisioning" >> .gitignore @@ -379,7 +379,7 @@ echo ".vault-token" >> .gitignore ### 2. Rotate Keys Regularly -```text +```bash # SOPS: Rotate Age key age-keygen -o ~/.age/provisioning.new # Update all secrets with new key @@ -394,7 +394,7 @@ vault write -f secret/metadata/ssh-keys/web-01/ubuntu ### 3.
Restrict Access -```text +```bash # SOPS: Protect Age key chmod 600 ~/.age/provisioning @@ -411,7 +411,7 @@ vault write auth/approle/role/provisioning ### 4. Audit Logging -```text +```bash # KMS: Enable CloudTrail aws cloudtrail put-event-selectors --trail-name provisioning-trail @@ -428,7 +428,7 @@ git log -p provisioning/secrets.enc.yaml ### SOPS Issues -```text +```bash # Test Age decryption sops -d provisioning/secrets.enc.yaml @@ -442,7 +442,7 @@ age-keygen -o ~/.age/provisioning ### KMS Issues -```text +```bash # Test AWS credentials aws sts get-caller-identity @@ -455,7 +455,7 @@ aws secretsmanager list-secrets --filters Name=name,Values=provisioning ### Vault Issues -```text +```bash # Check Vault status vault status @@ -489,7 +489,7 @@ A: No - it's development only. Production requires SOPS/KMS/Vault. ## Architecture -```text +```text SSH Operation ↓ SecretsManager (Nushell/Rust) @@ -512,7 +512,7 @@ SSH Operation Completes SSH operations automatically use secrets manager: -```text +```nushell # Automatic secret retrieval ssh-cmd-smart $settings $server false "command" $ip # Internally: diff --git a/docs/src/security/secretumvault-kms-guide.md b/docs/src/security/secretumvault-kms-guide.md index daa18d6..c3a322b 100644 --- a/docs/src/security/secretumvault-kms-guide.md +++ b/docs/src/security/secretumvault-kms-guide.md @@ -34,7 +34,7 @@ SecretumVault provides: **Setup**: No separate service required **Best For**: Local development and testing -```text +```bash export PROVISIONING_ENV=dev export KMS_DEV_BACKEND=secretumvault provisioning kms encrypt config.yaml @@ -47,7 +47,7 @@ provisioning kms encrypt config.yaml **Setup**: Start SecretumVault service separately **Best For**: Team testing, staging environments -```text +```bash # Start SecretumVault service secretumvault server --storage-backend surrealdb @@ -66,7 +66,7 @@ provisioning kms encrypt config.yaml **Setup**: etcd cluster + SecretumVault service **Best For**: Production deployments with HA
requirements -```text +```bash # Setup etcd cluster (3 nodes minimum) etcd --name etcd1 --data-dir etcd1-data --advertise-client-urls http://localhost:2379 @@ -119,7 +119,7 @@ Edit these files to customize: ### Encrypt Data -```text +```bash # Encrypt a file provisioning kms encrypt config.yaml # Output: config.yaml.enc @@ -133,7 +133,7 @@ provisioning kms encrypt --sign config.yaml ### Decrypt Data -```text +```bash # Decrypt a file provisioning kms decrypt config.yaml.enc # Output: config.yaml @@ -147,7 +147,7 @@ provisioning kms decrypt --verify config.yaml.enc ### Generate Data Keys -```text +```bash # Generate AES-256 data key provisioning kms generate-key --spec AES256 @@ -160,7 +160,7 @@ provisioning kms generate-key --spec RSA4096 ### Health and Status -```text +```bash # Check KMS health provisioning kms health @@ -173,7 +173,7 @@ provisioning kms status ### Key Rotation -```text +```bash # Rotate encryption key provisioning kms rotate-key provisioning-master @@ -204,7 +204,7 @@ Local file-based storage with no external dependencies. **Configuration**: -```text +```toml [secretumvault.storage.filesystem] data_dir = "~/.config/provisioning/secretumvault/data" permissions = "0700" @@ -227,7 +227,7 @@ Embedded or standalone document database. **Configuration**: -```text +```toml [secretumvault.storage.surrealdb] connection_url = "ws://localhost:8000" namespace = "provisioning" @@ -255,7 +255,7 @@ Distributed key-value store for high availability. **Configuration**: -```text +```toml [secretumvault.storage.etcd] endpoints = ["http://etcd1:2379", "http://etcd2:2379", "http://etcd3:2379"] tls_enabled = true @@ -281,7 +281,7 @@ Relational database backend. 
**Configuration**: -```text +```toml [secretumvault.storage.postgresql] connection_url = "postgresql://user:pass@localhost:5432/secretumvault" max_connections = 10 @@ -346,7 +346,7 @@ ssl_mode = "require" **Solution**: Check directory permissions: -```text +```bash ls -la ~/.config/provisioning/secretumvault/ # Should be: drwx------ (0700) chmod 700 ~/.config/provisioning/secretumvault/data @@ -358,7 +358,7 @@ chmod 700 ~/.config/provisioning/secretumvault/data **Solution**: Start SurrealDB first: -```text +```bash surreal start --bind 0.0.0.0:8000 file://secretum.db ``` @@ -368,7 +368,7 @@ surreal start --bind 0.0.0.0:8000 file://secretum.db **Solution**: Check etcd cluster status: -```text +```bash etcdctl member list etcdctl endpoint health @@ -423,27 +423,27 @@ curl http://etcd3:2379/health **Enable debug logging**: -```text +```bash export RUST_LOG=debug provisioning kms encrypt config.yaml ``` **Check configuration**: -```text +```bash provisioning config show secretumvault provisioning config validate ``` **Test connectivity**: -```text +```bash provisioning kms health --verbose ``` **View audit logs**: -```text +```bash tail -f ~/.config/provisioning/logs/secretumvault-audit.log ``` @@ -492,7 +492,7 @@ tail -f ~/.config/provisioning/logs/secretumvault-audit.log ### From Age to SecretumVault -```text +```bash # Export all secrets encrypted with Age provisioning secrets export --backend age --output secrets.json @@ -505,7 +505,7 @@ find workspace/infra -name "*.enc" -exec provisioning kms reencrypt {} \; ### From RustyVault to SecretumVault -```text +```bash # Both use Vault-compatible APIs, so migration is simpler: # 1. Ensure SecretumVault keys are available # 2. Update KMS_PROD_BACKEND=secretumvault @@ -515,7 +515,7 @@ find workspace/infra -name "*.enc" -exec provisioning kms reencrypt {} \; ### From Cosmian to SecretumVault -```text +```bash # For production migration: # 1. Set up SecretumVault with etcd backend # 2.
Verify high availability is working @@ -530,7 +530,7 @@ find workspace/infra -name "*.enc" -exec provisioning kms reencrypt {} \; ### Development (Filesystem) -```text +```toml [secretumvault.performance] max_connections = 5 connection_timeout = 5 @@ -540,7 +540,7 @@ cache_ttl = 60 ### Staging (SurrealDB) -```text +```toml [secretumvault.performance] max_connections = 20 connection_timeout = 5 @@ -550,7 +550,7 @@ cache_ttl = 300 ### Production (etcd) -```text +```toml [secretumvault.performance] max_connections = 50 connection_timeout = 10 @@ -564,7 +564,7 @@ cache_ttl = 600 All operations are logged: -```text +```bash # View recent audit events provisioning kms audit --limit 100 @@ -577,7 +577,7 @@ provisioning kms audit --action encrypt --from 24h ### Compliance Reports -```text +```bash # Generate compliance report provisioning compliance report --backend secretumvault @@ -594,7 +594,7 @@ provisioning compliance soc2-export --output soc2-audit.json Enable fine-grained access control: -```text +```bash # Enable Cedar integration provisioning config set secretumvault.authorization.cedar_enabled true @@ -607,7 +607,7 @@ provisioning policy define-kms-access deployer@example.com deploy-only Configure master key settings: -```text +```bash # Set KEK rotation interval provisioning config set secretumvault.rotation.rotation_interval_days 90 @@ -622,7 +622,7 @@ provisioning config set secretumvault.rotation.retain_old_versions true For production deployments across regions: -```text +```bash # Region 1 export SECRETUMVAULT_URL=https://kms-us-east.example.com export SECRETUMVAULT_STORAGE=etcd diff --git a/docs/src/security/security-system.md b/docs/src/security/security-system.md index 017ebb2..27b0e09 100644 --- a/docs/src/security/security-system.md +++ b/docs/src/security/security-system.md @@ -160,7 +160,7 @@ Security policies and settings are defined in: ## Help Commands -```text +```bash # Show security help provisioning help security diff --git
a/docs/src/security/ssh-temporal-keys-user-guide.md b/docs/src/security/ssh-temporal-keys-user-guide.md index 9f58288..5ced639 100644 --- a/docs/src/security/ssh-temporal-keys-user-guide.md +++ b/docs/src/security/ssh-temporal-keys-user-guide.md @@ -6,7 +6,7 @@ The fastest way to use temporal SSH keys: -```text +```bash # Auto-generate, deploy, and connect (key auto-revoked after disconnect) ssh connect server.example.com @@ -21,7 +21,7 @@ ssh connect server.example.com --keep For more control over the key lifecycle: -```text +```bash # 1. Generate key ssh generate-key server.example.com --user root --ttl 1hr @@ -81,7 +81,7 @@ Choose the right key type for your use case: ### Development Workflow -```text +```bash # Quick SSH for debugging ssh connect dev-server.local --ttl 30 min @@ -93,7 +93,7 @@ ssh root@dev-server.local "systemctl status nginx" ### Production Deployment -```text +```bash # Generate key with longer TTL for deployment ssh generate-key prod-server.example.com --ttl 2hr @@ -109,7 +109,7 @@ ssh revoke-key ### Multi-Server Access -```text +```bash # Generate one key ssh generate-key server01.example.com --ttl 1hr @@ -125,7 +125,7 @@ Generate a new temporal SSH key. **Syntax**: -```text +```bash ssh generate-key [options] ``` @@ -139,7 +139,7 @@ ssh generate-key [options] **Examples**: -```text +```bash # Basic usage ssh generate-key server.example.com @@ -156,13 +156,13 @@ Deploy a generated key to the target server. **Syntax**: -```text +```bash ssh deploy-key ``` **Example**: -```text +```bash ssh deploy-key abc-123-def-456 ``` @@ -172,13 +172,13 @@ List all active SSH keys. **Syntax**: -```text +```bash ssh list-keys [--expired] ``` **Examples**: -```text +```bash # List active keys ssh list-keys @@ -195,13 +195,13 @@ Get detailed information about a specific key. 
**Syntax**: -```text +```bash ssh get-key ``` **Example**: -```text +```bash ssh get-key abc-123-def-456 ``` @@ -211,13 +211,13 @@ Immediately revoke a key (removes from server and tracking). **Syntax**: -```text +```bash ssh revoke-key ``` **Example**: -```text +```bash ssh revoke-key abc-123-def-456 ``` @@ -227,7 +227,7 @@ Auto-generate, deploy, connect, and revoke (all-in-one). **Syntax**: -```text +```bash ssh connect [options] ``` @@ -240,7 +240,7 @@ ssh connect [options] **Examples**: -```text +```bash # Quick connection ssh connect server.example.com @@ -257,13 +257,13 @@ Show SSH key statistics. **Syntax**: -```text +```bash ssh stats ``` **Example Output**: -```text +```text SSH Key Statistics: Total generated: 42 Active keys: 10 @@ -284,7 +284,7 @@ Manually trigger cleanup of expired keys. **Syntax**: -```text +```bash ssh cleanup ``` @@ -294,13 +294,13 @@ Run a quick test of the SSH key system. **Syntax**: -```text +```bash ssh test [--user ] ``` **Example**: -```text +```bash ssh test server.example.com --user root ``` @@ -310,7 +310,7 @@ Show help information.
**Syntax**: -```text +```bash ssh help ``` @@ -331,7 +331,7 @@ The `--ttl` option accepts various duration formats: When you generate a key, save the private key immediately: -```text +```nushell # Generate and save to file ssh generate-key server.example.com | get private_key | save -f ~/.ssh/temp_key chmod 600 ~/.ssh/temp_key @@ -347,7 +347,7 @@ rm ~/.ssh/temp_key Add the temporary key to your SSH agent: -```text +```nushell # Generate key and extract private key ssh generate-key server.example.com | get private_key | save -f /tmp/temp_key chmod 600 /tmp/temp_key @@ -495,7 +495,7 @@ If your organization uses HashiCorp Vault: #### CA Mode (Recommended) -```text +```bash # Generate CA-signed certificate ssh generate-key server.example.com --type ca --principal admin --ttl 1hr @@ -505,7 +505,7 @@ ssh generate-key server.example.com --type ca --principal admin --ttl 1hr **Setup** (one-time): -```text +```bash # On servers, add to /etc/ssh/sshd_config: TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem @@ -519,7 +519,7 @@ sudo systemctl restart sshd #### OTP Mode -```text +```bash # Generate one-time password ssh generate-key server.example.com --type otp --ip 192.168.1.100 @@ -530,7 +530,7 @@ ssh generate-key server.example.com --type otp --ip 192.168.1.100 Use in scripts for automated operations: -```text +```nushell # deploy.nu def deploy [target: string] { let key = (ssh generate-key $target --ttl 1hr) @@ -552,7 +552,7 @@ def deploy [target: string] { For programmatic access, use the REST API: -```text +```bash # Generate key curl -X POST http://localhost:9090/api/v1/ssh/generate -H "Content-Type: application/json" diff --git a/docs/src/testing/taskserv-validation-guide.md b/docs/src/testing/taskserv-validation-guide.md index 28901f2..6a2e1ab 100644 --- a/docs/src/testing/taskserv-validation-guide.md +++ b/docs/src/testing/taskserv-validation-guide.md @@ -26,7 +26,7 @@ Validates configuration files, templates, and scripts without requiring infrastr **Command:** -```text
+```bash provisioning taskserv validate kubernetes --level static ``` @@ -44,13 +44,13 @@ Checks taskserv dependencies, conflicts, and requirements. **Command:** -```text +```bash provisioning taskserv validate kubernetes --level dependencies ``` **Check against infrastructure:** -```text +```bash provisioning taskserv check-deps kubernetes --infra my-project ``` @@ -68,7 +68,7 @@ Enhanced check mode that performs validation and previews deployment without mak **Command:** -```text +```bash provisioning taskserv create kubernetes --check ``` @@ -85,7 +85,7 @@ Tests taskserv in isolated container environment before actual deployment. **Command:** -```text +```bash # Test with Docker provisioning taskserv test kubernetes --runtime docker @@ -102,7 +102,7 @@ provisioning taskserv test kubernetes --runtime docker --keep ### Recommended Validation Sequence -```text +```bash # 1. Static validation (fastest, no infrastructure needed) provisioning taskserv validate kubernetes --level static -v @@ -121,7 +121,7 @@ provisioning taskserv create kubernetes ### Quick Validation (All Levels) -```text +```bash # Run all validation levels provisioning taskserv validate kubernetes --level all -v ``` @@ -144,7 +144,7 @@ Multi-level validation framework. **Examples:** -```text +```bash # Complete validation provisioning taskserv validate kubernetes @@ -170,7 +170,7 @@ Check dependencies against infrastructure. **Examples:** -```text +```bash # Check dependencies provisioning taskserv check-deps kubernetes --infra my-project @@ -190,7 +190,7 @@ Enhanced check mode with full validation and preview. **Examples:** -```text +```bash # Check mode with verbose output provisioning taskserv create kubernetes --check -v @@ -212,7 +212,7 @@ Sandbox testing in isolated environment. 
**Examples:** -```text +```bash # Test with Docker provisioning taskserv test kubernetes --runtime docker @@ -232,7 +232,7 @@ docker exec -it taskserv-test-kubernetes bash ### Static Validation -```text +```text Taskserv Validation Taskserv: kubernetes Level: static @@ -262,7 +262,7 @@ Overall Status ### Dependency Validation -```text +```text Dependency Validation Report Taskserv: kubernetes @@ -284,7 +284,7 @@ Conflicts: ### Check Mode Output -```text +```text Check Mode: kubernetes on server-01 → Running static validation... @@ -312,7 +312,7 @@ Check Mode Summary ### Test Output -```text +```text Taskserv Sandbox Testing Taskserv: kubernetes Runtime: docker @@ -351,7 +351,7 @@ Detailed Results: ### GitLab CI Example -```text +```yaml validate-taskservs: stage: validate script: @@ -377,7 +377,7 @@ deploy-taskservs: ### GitHub Actions Example -```text +```yaml name: Taskserv Validation on: [push, pull_request] @@ -411,7 +411,7 @@ If shellcheck is not available, script validation will be skipped with a warning **Install shellcheck:** -```text +```bash # macOS brew install shellcheck @@ -428,7 +428,7 @@ Sandbox testing requires Docker or Podman.
**Check runtime:** -```text +```bash # Docker docker ps @@ -466,7 +466,7 @@ If conflicting taskservs are detected: You can create custom validation scripts by extending the validation framework: -```text +```nushell # custom_validation.nu use provisioning/core/nulib/taskservs/validate.nu * @@ -485,7 +485,7 @@ def custom-validate [taskserv: string] { Validate multiple taskservs: -```text +```nushell # Validate all taskservs in infrastructure for taskserv in (provisioning taskserv list | get name) { provisioning taskserv validate $taskserv @@ -496,7 +496,7 @@ for taskserv in (provisioning taskserv list | get name) { Create test suite for all taskservs: -```text +```nushell #!/usr/bin/env nu let taskservs = ["kubernetes", "containerd", "cilium", "etcd"] diff --git a/docs/src/testing/test-environment-guide.md b/docs/src/testing/test-environment-guide.md index d156149..27c7833 100644 --- a/docs/src/testing/test-environment-guide.md +++ b/docs/src/testing/test-environment-guide.md @@ -13,7 +13,7 @@ eliminates manual Docker management and provides realistic test scenarios. ## Architecture -```text +```text ┌─────────────────────────────────────────────────┐ │ Orchestrator (port 8080) │ │ ┌──────────────────────────────────────────┐ │ @@ -39,7 +39,7 @@ eliminates manual Docker management and provides realistic test scenarios. Test individual taskserv in isolated container. -```text +```bash # Basic test provisioning test env single kubernetes @@ -54,7 +54,7 @@ provisioning test quick postgres Simulate complete server with multiple taskservs. -```text +```bash # Server with taskservs provisioning test env server web-01 [containerd kubernetes cilium] @@ -66,7 +66,7 @@ provisioning test env server db-01 [postgres redis] --infra prod-stack Multi-node cluster simulation from templates.
-```text +```bash # 3-node Kubernetes cluster provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start @@ -93,7 +93,7 @@ provisioning test topology load etcd_cluster | test env cluster etcd ### Basic Workflow -```text +```bash # 1. Quick test (fastest) provisioning test quick kubernetes @@ -118,7 +118,7 @@ provisioning test env cleanup ### Available Templates -```text +```bash # List templates provisioning test topology list ``` @@ -133,7 +133,7 @@ provisioning test topology list ### Using Templates -```text +```bash # Load and use template provisioning test topology load kubernetes_3node | test env cluster kubernetes @@ -145,7 +145,7 @@ provisioning test topology load etcd_cluster Create `my-topology.toml`: -```text +```toml [my_cluster] name = "My Custom Cluster" cluster_type = "custom" @@ -174,7 +174,7 @@ subnet = "172.30.0.0/16" ### Environment Management -```text +```bash # Create from config provisioning test env create @@ -199,7 +199,7 @@ provisioning test env status ### Test Execution -```text +```bash # Run tests provisioning test env run [--tests [test1, test2]] @@ -212,7 +212,7 @@ provisioning test env cleanup ### Quick Test -```text +```bash # One-command test (create, run, cleanup) provisioning test quick [--infra NAME] ``` @@ -221,7 +221,7 @@ provisioning test quick [--infra NAME] ### Create Environment -```text +```bash curl -X POST http://localhost:9090/test/environments/create -H "Content-Type: application/json" -d '{ @@ -243,13 +243,13 @@ curl -X POST http://localhost:9090/test/environments/create ### List Environments -```text +```bash curl http://localhost:9090/test/environments ``` ### Run Tests -```text +```bash curl -X POST http://localhost:9090/test/environments/{id}/run -H "Content-Type: application/json" -d '{ @@ -260,7 +260,7 @@ curl -X POST http://localhost:9090/test/environments/{id}/run ### Cleanup -```text +```bash curl -X DELETE http://localhost:9090/test/environments/{id} ``` @@ -270,7 +270,7 @@ 
curl -X DELETE http://localhost:9090/test/environments/{id} Test taskserv before deployment: -```text +```bash # Test new taskserv version provisioning test env single my-taskserv --auto-start @@ -282,7 +282,7 @@ provisioning test env logs Test taskserv combinations: -```text +```bash # Test kubernetes + cilium + containerd provisioning test env server k8s-test [kubernetes cilium containerd] --auto-start ``` @@ -291,14 +291,14 @@ provisioning test env server k8s-test [kubernetes cilium containerd] --auto-star Test cluster configurations: -```text +```bash # Test 3-node etcd cluster provisioning test topology load etcd_cluster | test env cluster etcd --auto-start ``` ### 4. CI/CD Integration -```text +```yaml # .gitlab-ci.yml test-taskserv: stage: test @@ -312,7 +312,7 @@ test-taskserv: ### Resource Limits -```text +```bash # Custom CPU and memory provisioning test env single postgres --cpu 4000 @@ -329,7 +329,7 @@ Each environment gets isolated network: ### Auto-Cleanup -```text +```bash # Auto-cleanup after tests provisioning test env single redis --auto-start --auto-cleanup ``` @@ -338,7 +338,7 @@ provisioning test env single redis --auto-start --auto-cleanup Run tests in parallel: -```text +```bash # Create multiple environments provisioning test env single kubernetes --auto-start & provisioning test env single postgres --auto-start & @@ -354,13 +354,13 @@ provisioning test env list ### Docker not running -```text +```text Error: Failed to connect to Docker ``` **Solution:** -```text +```bash # Check Docker docker ps @@ -371,13 +371,13 @@ open -a Docker # macOS ### Orchestrator not running -```text +```text Error: Connection refused (port 8080) ``` **Solution:** -```text +```bash cd provisioning/platform/orchestrator ./scripts/start-orchestrator.nu --background ``` @@ -386,26 +386,26 @@ cd provisioning/platform/orchestrator Check logs: -```text +```bash provisioning test env logs ``` Check Docker: -```text +```bash docker ps -a docker logs ``` ### Out of
resources -```text +```text Error: Cannot allocate memory ``` **Solution:** -```text +```nushell # Cleanup old environments provisioning test env list | each {|env| provisioning test env cleanup $env.id } @@ -419,7 +419,7 @@ docker system prune -af Reuse topology templates instead of recreating: -```text +```bash provisioning test topology load kubernetes_3node | test env cluster kubernetes @@ -427,7 +427,7 @@ provisioning test topology load kubernetes_3node | test env cluster kubernetes Always use auto-cleanup in CI/CD: -```text +```bash provisioning test quick # Includes auto-cleanup ``` @@ -443,7 +443,7 @@ Adjust resources based on needs: Run independent tests in parallel: -```text +```nushell for taskserv in [kubernetes postgres redis] { provisioning test quick $taskserv & } @@ -461,7 +461,7 @@ wait ### Custom Config -```text +```bash # Override defaults provisioning test env single postgres --base-image debian:12 diff --git a/docs/src/testing/test-environment-system.md b/docs/src/testing/test-environment-system.md index cbd1b8b..cea83d2 100644 --- a/docs/src/testing/test-environment-system.md +++ b/docs/src/testing/test-environment-system.md @@ -22,7 +22,7 @@ servers, and multi-node clusters without manual Docker management.
Test individual taskserv in isolated container: -```text +```bash # Quick test (create, run, cleanup) provisioning test quick kubernetes @@ -37,7 +37,7 @@ provisioning test env single redis --infra my-project Test complete server configurations with multiple taskservs: -```text +```bash # Simulate web server provisioning test env server web-01 [containerd kubernetes cilium] --auto-start @@ -49,7 +49,7 @@ provisioning test env server db-01 [postgres redis] --infra prod-stack --auto-st Test complex cluster configurations before deployment: -```text +```bash # 3-node Kubernetes HA cluster provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start @@ -62,7 +62,7 @@ provisioning test topology load kubernetes_single | test env cluster kubernetes ## Test Environment Management -```text +```bash # List all test environments provisioning test env list @@ -119,7 +119,7 @@ The orchestrator exposes test environment endpoints: ## Architecture -```text +```text User Command (CLI/API) ↓ Test Orchestrator (Rust) @@ -152,7 +152,7 @@ Isolated Test Containers ## CI/CD Integration Example -```text +```yaml # GitLab CI test-infrastructure: stage: test diff --git a/docs/src/troubleshooting/troubleshooting-guide.md b/docs/src/troubleshooting/troubleshooting-guide.md index 7f7ab23..0312c71 100644 --- a/docs/src/troubleshooting/troubleshooting-guide.md +++ b/docs/src/troubleshooting/troubleshooting-guide.md @@ -15,7 +15,7 @@ This comprehensive troubleshooting guide helps you diagnose and resolve common i ### 1. Identify the Problem -```text +```bash # Check overall system status provisioning env provisioning validate config @@ -27,7 +27,7 @@ provisioning taskserv list --infra my-infra --installed ### 2. Gather Information -```text +```bash # Enable debug mode for detailed output provisioning --debug @@ -37,7 +37,7 @@ provisioning show logs --infra my-infra ### 3.
Use Diagnostic Commands -```text +```bash # Validate configuration provisioning validate config --detailed @@ -58,7 +58,7 @@ provisioning network test --infra my-infra **Diagnosis:** -```text +```bash # Check system requirements uname -a df -h @@ -73,7 +73,7 @@ sudo -l #### Permission Issues -```text +```bash # Run installer with sudo sudo ./install-provisioning @@ -84,7 +84,7 @@ export PATH="$HOME/provisioning/bin:$PATH" #### Missing Dependencies -```text +```bash # Ubuntu/Debian sudo apt update sudo apt install -y curl wget tar build-essential @@ -95,7 +95,7 @@ sudo dnf install -y curl wget tar gcc make #### Architecture Issues -```text +```bash # Check architecture uname -m @@ -109,13 +109,13 @@ wget https://releases.example.com/provisioning-linux-x86_64.tar.gz **Symptoms:** -```text +```text bash: provisioning: command not found ``` **Diagnosis:** -```text +```bash # Check if provisioning is installed which provisioning ls -la /usr/local/bin/provisioning @@ -126,7 +126,7 @@ echo $PATH **Solutions:** -```text +```bash # Add to PATH export PATH="/usr/local/bin:$PATH" @@ -142,14 +142,14 @@ sudo ln -sf /usr/local/provisioning/core/nulib/provisioning /usr/local/bin/provi **Symptoms:** -```text +```text Plugin not found: nu_plugin_kcl Plugin registration failed ``` **Diagnosis:** -```text +```bash # Check Nushell version nu --version @@ -162,7 +162,7 @@ nu -c "version | get installed_plugins" **Solutions:** -```text +```bash # Install KCL CLI (required for nu_plugin_kcl) # Download from: https://github.com/kcl-lang/cli/releases @@ -179,14 +179,14 @@ nu -c "plugin add /usr/local/provisioning/plugins/nu_plugin_tera" **Symptoms:** -```text +```text Configuration file not found Failed to load configuration ``` **Diagnosis:** -```text +```bash # Check configuration file locations provisioning env | grep config @@ -197,7 +197,7 @@ ls -la /usr/local/provisioning/config.defaults.toml **Solutions:** -```text +```bash # Initialize user configuration provisioning init config @@
-215,7 +215,7 @@ provisioning validate config **Symptoms:** -```text +```text Configuration validation failed Invalid configuration value Missing required field @@ -223,7 +223,7 @@ Missing required field **Diagnosis:** -```text +```bash # Detailed validation provisioning validate config --detailed @@ -236,7 +236,7 @@ provisioning config show --section providers #### Path Configuration Issues -```text +```toml # Check base path exists ls -la /path/to/provisioning @@ -250,7 +250,7 @@ base = "/correct/path/to/provisioning" #### Provider Configuration Issues -```text +```toml # Test provider connectivity provisioning provider test aws @@ -267,14 +267,14 @@ interface = "CLI" # or "API" **Symptoms:** -```text +```text Interpolation pattern not resolved: {{env.VARIABLE}} Template rendering failed ``` **Diagnosis:** -```text +```bash # Test interpolation provisioning validate interpolation test @@ -287,7 +287,7 @@ provisioning --debug validate interpolation validate **Solutions:** -```text +```bash # Set missing environment variables export MISSING_VARIABLE="value" @@ -305,7 +305,7 @@ config_value = "{{env.VARIABLE || 'default_value'}}" **Symptoms:** -```text +```text Failed to create server Provider API error Insufficient quota @@ -313,7 +313,7 @@ Insufficient quota **Diagnosis:** -```text +```bash # Check provider status provisioning provider status aws @@ -332,7 +332,7 @@ provisioning --debug server create web-01 --infra my-infra --check #### API Authentication Issues -```text +```bash # AWS aws configure list aws sts get-caller-identity @@ -348,7 +348,7 @@ export UPCLOUD_PASSWORD="your-password" #### Quota/Limit Issues -```text +```bash # Check current usage provisioning show costs --infra my-infra @@ -361,7 +361,7 @@ provisioning show costs --infra my-infra #### Network/Connectivity Issues -```text +```bash # Test network connectivity curl -v https://api.aws.amazon.com curl -v https://api.upcloud.com @@ -377,7 +377,7 @@ nslookup api.aws.amazon.com **Symptoms:**
-```text +```text Connection refused Permission denied Host key verification failed @@ -385,7 +385,7 @@ Host key verification failed **Diagnosis:** -```text +```bash # Check server status provisioning server list --infra my-infra @@ -400,7 +400,7 @@ provisioning show servers web-01 --infra my-infra #### Connection Issues -```text +```bash # Wait for server to be fully ready provisioning server list --infra my-infra --status @@ -413,7 +413,7 @@ provisioning show servers web-01 --infra my-infra | grep ip #### Authentication Issues -```text +```bash # Check SSH key ls -la ~/.ssh/ ssh-add -l @@ -427,7 +427,7 @@ provisioning server ssh web-01 --key ~/.ssh/provisioning_key --infra my-infra #### Host Key Issues -```text +```bash # Remove old host key ssh-keygen -R server-ip @@ -441,7 +441,7 @@ ssh -o StrictHostKeyChecking=accept-new user@server-ip **Symptoms:** -```text +```text Service installation failed Package not found Dependency conflicts @@ -449,7 +449,7 @@ Dependency conflicts **Diagnosis:** -```text +```bash # Check service prerequisites provisioning taskserv check kubernetes --infra my-infra @@ -464,7 +464,7 @@ provisioning server ssh web-01 --command "free -h && df -h" --infra my-infra #### Resource Issues -```text +```bash # Check available resources provisioning server ssh web-01 --command " echo 'Memory:' && free -h @@ -478,7 +478,7 @@ provisioning server resize web-01 --plan larger-plan --infra my-infra #### Package Repository Issues -```text +```bash # Update package lists provisioning server ssh web-01 --command " sudo apt update && sudo apt upgrade -y @@ -492,7 +492,7 @@ provisioning server ssh web-01 --command " #### Dependency Issues -```text +```bash # Install missing dependencies provisioning taskserv create containerd --infra my-infra @@ -504,7 +504,7 @@ provisioning taskserv create kubernetes --infra my-infra **Symptoms:** -```text +```text Service status: failed Service not responding Health check failures @@ -512,7 +512,7 @@ Health check failures
**Diagnosis:** -```text +```bash # Check service status provisioning taskserv status kubernetes --infra my-infra @@ -530,7 +530,7 @@ provisioning server ssh web-01 --command " #### Configuration Issues -```text +```bash # Reconfigure service provisioning taskserv configure kubernetes --infra my-infra @@ -540,7 +540,7 @@ provisioning taskserv reset kubernetes --infra my-infra #### Port Conflicts -```text +```bash # Check port usage provisioning server ssh web-01 --command " sudo netstat -tulpn | grep :6443 @@ -552,7 +552,7 @@ provisioning server ssh web-01 --command " #### Permission Issues -```text +```bash # Fix permissions provisioning server ssh web-01 --command " sudo chown -R kubernetes:kubernetes /var/lib/kubernetes @@ -566,7 +566,7 @@ provisioning server ssh web-01 --command " **Symptoms:** -```text +```text Cluster deployment failed Pod creation errors Service unavailable @@ -574,7 +574,7 @@ Service unavailable **Diagnosis:** -```text +```bash # Check cluster status provisioning cluster status web-cluster --infra my-infra @@ -592,7 +592,7 @@ provisioning cluster logs web-cluster --infra my-infra #### Node Issues -```text +```bash # Check node status provisioning server ssh master-01 --command " kubectl describe nodes @@ -610,7 +610,7 @@ provisioning taskserv configure kubernetes --infra my-infra --servers worker-01 #### Resource Constraints -```text +```bash # Check resource usage provisioning server ssh master-01 --command " kubectl top nodes @@ -624,7 +624,7 @@ provisioning server create worker-04 --infra my-infra #### Network Issues -```text +```bash # Check network plugin provisioning server ssh master-01 --command " kubectl get pods -n kube-system | grep cilium @@ -646,7 +646,7 @@ provisioning taskserv restart cilium --infra my-infra **Diagnosis:** -```text +```bash # Check system resources top htop @@ -665,7 +665,7 @@ time provisioning server list --infra my-infra #### Local System Issues -```text +```bash # Close unnecessary applications # Upgrade
system resources # Use SSD storage if available @@ -676,7 +676,7 @@ export PROVISIONING_TIMEOUT=600 # 10 minutes #### Network Issues -```text +```toml # Use region closer to your location [providers.aws] region = "us-west-1" # Closer region @@ -688,7 +688,7 @@ enabled = true #### Large Infrastructure Issues -```text +```bash # Use parallel operations provisioning server create --infra my-infra --parallel 4 @@ -706,7 +706,7 @@ provisioning server list --infra my-infra --filter "status == 'running'" **Diagnosis:** -```text +```bash # Check memory usage free -h ps aux --sort=-%mem | head @@ -717,7 +717,7 @@ valgrind provisioning server list --infra my-infra **Solutions:** -```text +```bash # Increase system memory # Close other applications # Use streaming operations for large datasets @@ -735,7 +735,7 @@ export PROVISIONING_MAX_PARALLEL=2 **Symptoms:** -```text +```text Connection timeout DNS resolution failed SSL certificate errors @@ -743,7 +743,7 @@ SSL certificate errors **Diagnosis:** -```text +```bash # Test basic connectivity ping 8.8.8.8 curl -I https://api.aws.amazon.com @@ -757,7 +757,7 @@ openssl s_client -connect api.aws.amazon.com:443 -servername api.aws.amazon.com #### DNS Issues -```text +```bash # Use alternative DNS echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf @@ -768,7 +768,7 @@ sudo dscacheutil -flushcache # macOS #### Proxy/Firewall Issues -```text +```bash # Configure proxy if needed export HTTP_PROXY=http://proxy.company.com:9090 export HTTPS_PROXY=http://proxy.company.com:9090 @@ -780,7 +780,7 @@ sudo firewall-cmd --list-all # RHEL/CentOS #### Certificate Issues -```text +```bash # Update CA certificates sudo apt update && sudo apt install ca-certificates # Ubuntu brew install ca-certificates # macOS @@ -795,7 +795,7 @@ export PROVISIONING_SKIP_SSL_VERIFY=true **Symptoms:** -```text +```text SOPS decryption failed Age key not found Invalid key format @@ -803,7 +803,7 @@ Invalid key format **Diagnosis:** -```text +```bash # Check SOPS
configuration provisioning sops config @@ -819,7 +819,7 @@ age-keygen -y ~/.config/sops/age/keys.txt #### Missing Keys -```text +```bash # Generate new Age key age-keygen -o ~/.config/sops/age/keys.txt @@ -829,7 +829,7 @@ provisioning sops config --key-file ~/.config/sops/age/keys.txt #### Key Permissions -```text +```bash # Fix key file permissions chmod 600 ~/.config/sops/age/keys.txt chown $(whoami) ~/.config/sops/age/keys.txt @@ -837,7 +837,7 @@ chown $(whoami) ~/.config/sops/age/keys.txt #### Configuration Issues -```text +```toml # Update SOPS configuration in ~/.config/provisioning/config.toml [sops] use_sops = true @@ -851,7 +851,7 @@ key_search_paths = [ **Symptoms:** -```text +```text Permission denied Access denied Insufficient privileges @@ -859,7 +859,7 @@ Insufficient privileges **Diagnosis:** -```text +```bash # Check user permissions id groups @@ -874,7 +874,7 @@ sudo provisioning env **Solutions:** -```text +```bash # Fix file ownership sudo chown -R $(whoami):$(whoami) ~/.config/provisioning/ @@ -892,7 +892,7 @@ sudo usermod -a -G docker $(whoami) # For Docker access **Symptoms:** -```text +```text No space left on device Write failed Disk full @@ -900,7 +900,7 @@ Disk full **Diagnosis:** -```text +```bash # Check disk usage df -h du -sh ~/.config/provisioning/ @@ -912,7 +912,7 @@ find /usr/local/provisioning -type f -size +100M **Solutions:** -```text +```bash # Clean up cache files rm -rf ~/.config/provisioning/cache/* rm -rf /usr/local/provisioning/.cache/* @@ -931,7 +931,7 @@ gzip ~/.config/provisioning/backups/*.yaml ### Configuration Recovery -```text +```bash # Restore from backup provisioning config restore --backup latest @@ -944,7 +944,7 @@ provisioning init config --force ### Infrastructure Recovery -```text +```bash # Check infrastructure status provisioning show servers --infra my-infra @@ -957,7 +957,7 @@ provisioning restore --backup latest --infra my-infra ### Service Recovery -```text +```bash # Restart failed services
provisioning taskserv restart kubernetes --infra my-infra @@ -970,7 +970,7 @@ provisioning taskserv create kubernetes --infra my-infra ### Regular Maintenance -```text +```bash # Weekly maintenance script #!/bin/bash @@ -992,7 +992,7 @@ provisioning backup create --name "weekly-$(date +%Y%m%d)" ### Monitoring Setup -```text +```bash # Set up health monitoring #!/bin/bash @@ -1029,7 +1029,7 @@ provisioning backup create --name "weekly-$(date +%Y%m%d)" ### Debug Information Collection -```text +```bash #!/bin/bash # Collect debug information diff --git a/docs/src/troubleshooting/troubleshooting/ctrl-c-sudo-handling.md b/docs/src/troubleshooting/troubleshooting/ctrl-c-sudo-handling.md index 7ed2176..5d29474 100644 --- a/docs/src/troubleshooting/troubleshooting/ctrl-c-sudo-handling.md +++ b/docs/src/troubleshooting/troubleshooting/ctrl-c-sudo-handling.md @@ -21,7 +21,7 @@ However, the system **does** gracefully handle these cancellation scenarios: When `fix_local_hosts` is enabled and sudo credentials are not cached, display a clear warning: -```text +```text ⚠ Sudo access required for --fix-local-hosts ℹ You will be prompted for your password, or press CTRL-C to cancel Tip: Run 'sudo -v' beforehand to cache credentials @@ -37,7 +37,7 @@ All sudo commands are wrapped with error handling that detects password cancella - Wrong password 3 times - Sudo timeout -```text +```nushell let result = (do --ignore-errors { ^sudo } | complete) # Check for cancellation: exit code 1 (password required) or 130 (timeout/failure) if ($result.exit_code == 1 and ($result.stderr | str contains "password is required")) or $result.exit_code == 130 { @@ -63,14 +63,14 @@ Added reusable helper functions for consistent handling: ### Option 1: Cache Credentials First (Recommended) -```text +```bash sudo -v # Cache credentials for 5 minutes provisioning -c server create ``` ### Option 2: Enter Password When Prompted -```text +```bash provisioning -c server create # Enter password when prompted
# Press CTRL-C to cancel if needed @@ -80,7 +80,7 @@ provisioning -c server create In your settings file, set: -```text +```toml fix_local_hosts = false ``` @@ -137,7 +137,7 @@ the exit code. This is fundamental Unix behavior and cannot be prevented. Test the CTRL-C handling: -```text +```bash # Test 1: Cancel at password prompt provisioning -c server create # When prompted for password, press CTRL-C @@ -179,7 +179,7 @@ provisioning -c server create ### Before (Cryptic) -```text +```text Password: sudo: a password is required Error: nu::shell::eval_block_with_input @@ -191,7 +191,7 @@ Error: nu::shell::non_zero_exit_code ### After (Clear) -```text +```text ⚠ Sudo access required for --fix-local-hosts ℹ You will be prompted for your password, or press CTRL-C to cancel Tip: Run 'sudo -v' beforehand to cache credentials