window.search = JSON.parse('{"doc_urls":["index.html#provisioning-platform-documentation","index.html#quick-navigation","index.html#-getting-started","index.html#-user-guides","index.html#-architecture","index.html#-architecture-decision-records-adrs","index.html#-api-documentation","index.html#-development","index.html#-troubleshooting","index.html#-how-to-guides","index.html#-configuration","index.html#-quick-references","index.html#documentation-structure","getting-started/installation-guide.html#installation-guide","getting-started/installation-guide.html#what-youll-learn","getting-started/installation-guide.html#system-requirements","getting-started/installation-guide.html#operating-system-support","getting-started/installation-guide.html#hardware-requirements","getting-started/installation-guide.html#architecture-support","getting-started/installation-guide.html#prerequisites","getting-started/installation-guide.html#pre-installation-checklist","getting-started/getting-started.html#getting-started-guide","getting-started/getting-started.html#what-youll-learn","getting-started/getting-started.html#prerequisites","getting-started/getting-started.html#essential-concepts","getting-started/getting-started.html#infrastructure-as-code-iac","getting-started/quickstart-cheatsheet.html#provisioning-platform-quick-reference","getting-started/quickstart-cheatsheet.html#quick-navigation","getting-started/quickstart-cheatsheet.html#plugin-commands","getting-started/quickstart-cheatsheet.html#authentication-plugin-nu_plugin_auth","getting-started/quickstart-cheatsheet.html#kms-plugin-nu_plugin_kms","getting-started/quickstart-cheatsheet.html#orchestrator-plugin-nu_plugin_orchestrator","getting-started/quickstart-cheatsheet.html#plugin-performance-comparison","getting-started/quickstart-cheatsheet.html#cli-shortcuts","getting-started/quickstart-cheatsheet.html#infrastructure-shortcuts","getting-started/quickstart-cheatsheet.html#orchestration-shortcuts","getting-started/quickstart-cheatsheet.html#development-shortcuts","getting-started/quickstart-cheatsheet.html#workspace-shortcuts","getting-started/quickstart-cheatsheet.html#configuration-shortcuts","getting-started/quickstart-cheatsheet.html#utility-shortcuts","getting-started/quickstart-cheatsheet.html#generation-shortcuts","getting-started/quickstart-cheatsheet.html#action-shortcuts","getting-started/quickstart-cheatsheet.html#infrastructure-commands","getting-started/quickstart-cheatsheet.html#server-management","getting-started/quickstart-cheatsheet.html#taskserv-management","getting-started/quickstart-cheatsheet.html#cluster-management","getting-started/quickstart-cheatsheet.html#orchestration-commands","getting-started/quickstart-cheatsheet.html#workflow-management","getting-started/quickstart-cheatsheet.html#batch-operations","getting-started/quickstart-cheatsheet.html#orchestrator-management","getting-started/quickstart-cheatsheet.html#configuration-commands","getting-started/quickstart-cheatsheet.html#environment-and-validation","getting-started/quickstart-cheatsheet.html#configuration-files","getting-started/quickstart-cheatsheet.html#http-configuration","getting-started/quickstart-cheatsheet.html#workspace-commands","getting-started/quickstart-cheatsheet.html#workspace-management","getting-started/quickstart-cheatsheet.html#user-preferences","getting-started/quickstart-cheatsheet.html#security-commands","getting-started/quickstart-cheatsheet.html#authentication-via-cli","getting-started/quickstart-cheatsheet.html#multi-factor-authenticat
ion-mfa","getting-started/quickstart-cheatsheet.html#secrets-management","getting-started/quickstart-cheatsheet.html#ssh-temporal-keys","getting-started/quickstart-cheatsheet.html#kms-operations-via-cli","getting-started/quickstart-cheatsheet.html#break-glass-emergency-access","getting-started/quickstart-cheatsheet.html#compliance-and-audit","getting-started/quickstart-cheatsheet.html#common-workflows","getting-started/quickstart-cheatsheet.html#complete-deployment-from-scratch","getting-started/quickstart-cheatsheet.html#multi-environment-deployment","getting-started/quickstart-cheatsheet.html#update-infrastructure","getting-started/quickstart-cheatsheet.html#encrypted-secrets-deployment","getting-started/quickstart-cheatsheet.html#debug-and-check-mode","getting-started/quickstart-cheatsheet.html#debug-mode","getting-started/quickstart-cheatsheet.html#check-mode-dry-run","getting-started/quickstart-cheatsheet.html#auto-confirm-mode","getting-started/quickstart-cheatsheet.html#wait-mode","getting-started/quickstart-cheatsheet.html#infrastructure-selection","getting-started/quickstart-cheatsheet.html#output-formats","getting-started/quickstart-cheatsheet.html#json-output","getting-started/quickstart-cheatsheet.html#yaml-output","getting-started/quickstart-cheatsheet.html#table-output-default","getting-started/quickstart-cheatsheet.html#text-output","getting-started/quickstart-cheatsheet.html#performance-tips","getting-started/quickstart-cheatsheet.html#use-plugins-for-frequent-operations","getting-started/quickstart-cheatsheet.html#batch-operations-1","getting-started/quickstart-cheatsheet.html#check-mode-for-testing","getting-started/quickstart-cheatsheet.html#help-system","getting-started/quickstart-cheatsheet.html#command-specific-help","getting-started/quickstart-cheatsheet.html#bi-directional-help","getting-started/quickstart-cheatsheet.html#general-help","getting-started/quickstart-cheatsheet.html#quick-reference-common-flags","getting-started/quickstart-cheatsheet.html#plugin-installation-quick-reference","getting-started/quickstart-cheatsheet.html#related-documentation","getting-started/setup-quickstart.html#setup-quick-start---5-minutes-to-deployment","getting-started/setup-quickstart.html#step-1-check-prerequisites-30-seconds","getting-started/setup-quickstart.html#step-2-install-provisioning-1-minute","getting-started/setup-quickstart.html#step-3-initialize-system-2-minutes","getting-started/setup-quickstart.html#step-4-create-your-first-workspace-1-minute","getting-started/setup-quickstart.html#step-5-deploy-your-first-server-1-minute","getting-started/setup-quickstart.html#verify-everything-works","getting-started/setup-quickstart.html#common-commands-cheat-sheet","getting-started/setup-quickstart.html#troubleshooting-quick-fixes","getting-started/setup-quickstart.html#whats-next","getting-started/setup-quickstart.html#need-help","getting-started/setup-quickstart.html#key-files","getting-started/setup-system-guide.html#provisioning-setup-system-guide","getting-started/setup-system-guide.html#quick-start","getting-started/setup-system-guide.html#prerequisites","getting-started/setup-system-guide.html#30-second-setup","getting-started/quickstart.html#quick-start","getting-started/quickstart.html#-navigate-to-quick-start-guide","getting-started/quickstart.html#quick-commands","getting-started/01-prerequisites.html#prerequisites","getting-started/01-prerequisites.html#hardware-requirements","getting-started/01-prerequisites.html#minimum-requirements-solo-mode","getting-started/01-pr
erequisites.html#recommended-requirements-multi-user-mode","getting-started/01-prerequisites.html#production-requirements-enterprise-mode","getting-started/01-prerequisites.html#operating-system","getting-started/01-prerequisites.html#supported-platforms","getting-started/01-prerequisites.html#platform-specific-notes","getting-started/01-prerequisites.html#required-software","getting-started/01-prerequisites.html#core-dependencies","getting-started/01-prerequisites.html#optional-dependencies","getting-started/01-prerequisites.html#installation-verification","getting-started/01-prerequisites.html#nushell","getting-started/01-prerequisites.html#kcl","getting-started/01-prerequisites.html#docker","getting-started/01-prerequisites.html#sops","getting-started/01-prerequisites.html#age","getting-started/01-prerequisites.html#installing-missing-dependencies","getting-started/01-prerequisites.html#macos-using-homebrew","getting-started/01-prerequisites.html#ubuntudebian","getting-started/01-prerequisites.html#fedorarhel","getting-started/01-prerequisites.html#network-requirements","getting-started/01-prerequisites.html#firewall-ports","getting-started/01-prerequisites.html#external-connectivity","getting-started/01-prerequisites.html#cloud-provider-credentials-optional","getting-started/01-prerequisites.html#aws","getting-started/01-prerequisites.html#upcloud","getting-started/01-prerequisites.html#next-steps","getting-started/02-installation.html#installation","getting-started/02-installation.html#overview","getting-started/02-installation.html#step-1-clone-the-repository","getting-started/02-installation.html#step-2-install-nushell-plugins","getting-started/02-installation.html#install-nu_plugin_tera-template-rendering","getting-started/02-installation.html#install-nu_plugin_kcl-optional-kcl-integration","getting-started/02-installation.html#verify-plugin-installation","getting-started/02-installation.html#step-3-add-cli-to-path","getting-started/02-installation.html#step-4-generate-age-encryption-keys","getting-started/02-installation.html#step-5-configure-environment","getting-started/02-installation.html#step-6-initialize-workspace","getting-started/02-installation.html#step-7-validate-installation","getting-started/02-installation.html#optional-install-platform-services","getting-started/02-installation.html#optional-install-platform-with-installer","getting-started/02-installation.html#troubleshooting","getting-started/02-installation.html#nushell-plugin-not-found","getting-started/02-installation.html#permission-denied","getting-started/02-installation.html#age-keys-not-found","getting-started/02-installation.html#next-steps","getting-started/02-installation.html#additional-resources","getting-started/03-first-deployment.html#first-deployment","getting-started/03-first-deployment.html#overview","getting-started/03-first-deployment.html#step-1-configure-infrastructure","getting-started/03-first-deployment.html#step-2-edit-configuration","getting-started/03-first-deployment.html#step-3-create-server-check-mode","getting-started/03-first-deployment.html#step-4-create-server-real","getting-started/03-first-deployment.html#step-5-verify-server","getting-started/03-first-deployment.html#step-6-install-kubernetes-check-mode","getting-started/03-first-deployment.html#step-7-install-kubernetes-real","getting-started/03-first-deployment.html#step-8-verify-installation","getting-started/03-first-deployment.html#common-deployment-patterns","getting-started/03-first-deployment.html#pattern-1-multiple-se
rvers","getting-started/03-first-deployment.html#pattern-2-server-with-multiple-task-services","getting-started/03-first-deployment.html#pattern-3-complete-cluster","getting-started/03-first-deployment.html#deployment-workflow","getting-started/03-first-deployment.html#troubleshooting","getting-started/03-first-deployment.html#server-creation-fails","getting-started/03-first-deployment.html#task-service-installation-fails","getting-started/03-first-deployment.html#ssh-connection-issues","getting-started/03-first-deployment.html#next-steps","getting-started/03-first-deployment.html#additional-resources","getting-started/04-verification.html#verification","getting-started/04-verification.html#overview","getting-started/04-verification.html#step-1-verify-configuration","getting-started/04-verification.html#step-2-verify-servers","getting-started/04-verification.html#step-3-verify-task-services","getting-started/04-verification.html#step-4-verify-kubernetes-if-installed","getting-started/04-verification.html#step-5-verify-platform-services-optional","getting-started/04-verification.html#orchestrator","getting-started/04-verification.html#control-center","getting-started/04-verification.html#kms-service","getting-started/04-verification.html#step-6-run-health-checks","getting-started/04-verification.html#step-7-verify-workflows","getting-started/04-verification.html#common-verification-checks","getting-started/04-verification.html#dns-resolution-if-coredns-installed","getting-started/04-verification.html#network-connectivity","getting-started/04-verification.html#storage-and-resources","getting-started/04-verification.html#troubleshooting-failed-verifications","getting-started/04-verification.html#configuration-validation-failed","getting-started/04-verification.html#server-unreachable","getting-started/04-verification.html#task-service-not-running","getting-started/04-verification.html#platform-service-down","getting-started/04-verification.html#performance-verification","getting-started/04-verification.html#response-time-tests","getting-started/04-verification.html#resource-usage","getting-started/04-verification.html#security-verification","getting-started/04-verification.html#encryption","getting-started/04-verification.html#authentication-if-enabled","getting-started/04-verification.html#verification-checklist","getting-started/04-verification.html#next-steps","getting-started/04-verification.html#additional-resources","getting-started/05-platform-configuration.html#platform-service-configuration","getting-started/05-platform-configuration.html#what-youll-learn","getting-started/05-platform-configuration.html#prerequisites","getting-started/05-platform-configuration.html#platform-services-overview","getting-started/05-platform-configuration.html#deployment-modes","getting-started/05-platform-configuration.html#step-1-initialize-configuration-script","getting-started/05-platform-configuration.html#step-2-choose-configuration-method","getting-started/05-platform-configuration.html#method-a-interactive-typedialog-configuration-recommended","getting-started/05-platform-configuration.html#method-b-quick-mode-configuration-fastest","getting-started/05-platform-configuration.html#method-c-manual-nickel-configuration","getting-started/05-platform-configuration.html#step-3-understand-configuration-layers","getting-started/05-platform-configuration.html#step-4-verify-generated-configuration","getting-started/05-platform-configuration.html#step-5-run-platform-services","getting-started/05-platform-con
figuration.html#running-a-single-service","getting-started/05-platform-configuration.html#running-multiple-services","getting-started/05-platform-configuration.html#docker-based-deployment","getting-started/05-platform-configuration.html#step-6-verify-services-are-running","getting-started/05-platform-configuration.html#customizing-configuration","getting-started/05-platform-configuration.html#scenario-change-deployment-mode","getting-started/05-platform-configuration.html#scenario-manual-configuration-edit","getting-started/05-platform-configuration.html#scenario-workspace-specific-overrides","getting-started/05-platform-configuration.html#available-configuration-commands","getting-started/05-platform-configuration.html#configuration-file-locations","getting-started/05-platform-configuration.html#public-definitions-part-of-repository","getting-started/05-platform-configuration.html#private-runtime-configs-gitignored","getting-started/05-platform-configuration.html#examples-reference","getting-started/05-platform-configuration.html#troubleshooting-configuration","getting-started/05-platform-configuration.html#issue-script-fails-with-nickel-not-found","getting-started/05-platform-configuration.html#issue-configuration-wont-generate-toml","getting-started/05-platform-configuration.html#issue-service-cant-read-configuration","getting-started/05-platform-configuration.html#issue-services-wont-start-after-config-change","getting-started/05-platform-configuration.html#important-notes","getting-started/05-platform-configuration.html#-runtime-configurations-are-private","getting-started/05-platform-configuration.html#-schemas-are-public","getting-started/05-platform-configuration.html#-configuration-is-idempotent","getting-started/05-platform-configuration.html#-installer-status","getting-started/05-platform-configuration.html#next-steps","getting-started/05-platform-configuration.html#additional-resources","architecture/system-overview.html#system-overview","architecture/system-overview.html#executive-summary","architecture/system-overview.html#high-level-architecture","architecture/system-overview.html#system-diagram","architecture/architecture-overview.html#provisioning-platform---architecture-overview","architecture/architecture-overview.html#table-of-contents","architecture/architecture-overview.html#executive-summary","architecture/architecture-overview.html#what-is-the-provisioning-platform","architecture/architecture-overview.html#key-characteristics","architecture/architecture-overview.html#architecture-at-a-glance","architecture/design-principles.html#design-principles","architecture/design-principles.html#overview","architecture/design-principles.html#core-architectural-principles","architecture/design-principles.html#1-project-architecture-principles-pap-compliance","architecture/integration-patterns.html#integration-patterns","architecture/integration-patterns.html#overview","architecture/integration-patterns.html#core-integration-patterns","architecture/integration-patterns.html#1-hybrid-language-integration","architecture/integration-patterns.html#2-provider-abstraction-pattern","architecture/integration-patterns.html#3-configuration-resolution-pattern","architecture/integration-patterns.html#4-workflow-orchestration-patterns","architecture/integration-patterns.html#5-state-management-patterns","architecture/integration-patterns.html#6-event-and-messaging-patterns","architecture/integration-patterns.html#7-extension-integration-patterns","architecture/integration-patterns.html#8-api-
design-patterns","architecture/integration-patterns.html#error-handling-patterns","architecture/integration-patterns.html#structured-error-pattern","architecture/integration-patterns.html#error-recovery-pattern","architecture/integration-patterns.html#performance-optimization-patterns","architecture/integration-patterns.html#caching-strategy-pattern","architecture/integration-patterns.html#streaming-pattern-for-large-data","architecture/integration-patterns.html#testing-integration-patterns","architecture/integration-patterns.html#integration-test-pattern","architecture/orchestrator-integration-model.html#orchestrator-integration-model---deep-dive","architecture/orchestrator-integration-model.html#executive-summary","architecture/orchestrator-integration-model.html#current-architecture-hybrid-orchestrator-v30","architecture/orchestrator-integration-model.html#the-problem-being-solved","architecture/orchestrator-integration-model.html#why-not-pure-rust","architecture/orchestrator-integration-model.html#multi-repo-integration-example","architecture/orchestrator-integration-model.html#installation","architecture/multi-repo-architecture.html#multi-repository-architecture-with-oci-registry-support","architecture/multi-repo-architecture.html#overview","architecture/multi-repo-architecture.html#architecture-goals","architecture/multi-repo-architecture.html#repository-structure","architecture/multi-repo-architecture.html#repository-1-provisioning-core","architecture/multi-repo-strategy.html#multi-repository-strategy-analysis","architecture/multi-repo-strategy.html#executive-summary","architecture/multi-repo-strategy.html#repository-architecture-options","architecture/multi-repo-strategy.html#option-a-pure-monorepo-original-recommendation","architecture/multi-repo-strategy.html#option-b-multi-repo-with-submodules--not-recommended","architecture/multi-repo-strategy.html#option-c-multi-repo-with-package-dependencies--recommended","architecture/multi-repo-strategy.html#recommended-multi-repo-architecture","architecture/multi-repo-strategy.html#repository-1-provisioning-core","architecture/database-and-config-architecture.html#database-and-configuration-architecture","architecture/database-and-config-architecture.html#control-center-database-dbs","architecture/database-and-config-architecture.html#database-type--surrealdb--in-memory-backend","architecture/database-and-config-architecture.html#database-configuration","architecture/database-and-config-architecture.html#orchestrator-database","architecture/database-and-config-architecture.html#storage-type--filesystem--file-based-queue","architecture/ecosystem-integration.html#prov-ecosystem--provctl-integration","architecture/ecosystem-integration.html#overview","architecture/ecosystem-integration.html#architecture","architecture/ecosystem-integration.html#three-layer-integration","architecture/package-and-loader-system.html#kcl-package-and-module-loader-system","architecture/package-and-loader-system.html#architecture-overview","architecture/package-and-loader-system.html#benefits","architecture/package-and-loader-system.html#components","architecture/package-and-loader-system.html#1-core-kcl-package-provisioningkcl","architecture/package-and-loader-system.html#2-module-discovery-system","architecture/nickel-vs-kcl-comparison.html#nickel-vs-kcl-comprehensive-comparison","architecture/nickel-vs-kcl-comparison.html#quick-decision-tree","architecture/nickel-vs-kcl-comparison.html#for-legacy-kcl-workspace-level","architecture/nickel-vs-kcl-comparison.html#9-
typedialog-integration","architecture/nickel-vs-kcl-comparison.html#what-is-typedialog","architecture/nickel-vs-kcl-comparison.html#workflow-nickel-schemas--interactive-uis--nickel-output","architecture/nickel-executable-examples.html#nickel-executable-examples--test-cases","architecture/nickel-executable-examples.html#setup-run-examples-locally","architecture/nickel-executable-examples.html#prerequisites","architecture/nickel-executable-examples.html#directory-structure-for-examples","architecture/nickel-executable-examples.html#example-1-simple-server-configuration-executable","architecture/nickel-executable-examples.html#step-1-create-contract-file","architecture/nickel-executable-examples.html#step-2-create-defaults-file","architecture/nickel-executable-examples.html#step-3-create-main-module-with-hybrid-interface","architecture/nickel-executable-examples.html#test-export-and-validate-json","architecture/nickel-executable-examples.html#usage-in-consumer-module","architecture/nickel-executable-examples.html#example-2-complex-provider-extension-production-pattern","architecture/nickel-executable-examples.html#create-provider-structure","architecture/nickel-executable-examples.html#provider-contracts","architecture/nickel-executable-examples.html#provider-defaults","architecture/nickel-executable-examples.html#provider-main-module","architecture/nickel-executable-examples.html#test-provider-configuration","architecture/nickel-executable-examples.html#consumer-using-provider","architecture/nickel-executable-examples.html#example-3-real-world-pattern---taskserv-configuration","architecture/nickel-executable-examples.html#taskserv-contracts-from-wuji","architecture/nickel-executable-examples.html#taskserv-defaults","architecture/nickel-executable-examples.html#taskserv-main","architecture/nickel-executable-examples.html#test-taskserv-setup","architecture/nickel-executable-examples.html#example-4-composition--extension-pattern","architecture/nickel-executable-examples.html#base-infrastructure","architecture/nickel-executable-examples.html#extending-infrastructure-nickel-advantage","architecture/nickel-executable-examples.html#example-5-validation--error-handling","architecture/nickel-executable-examples.html#validation-functions","architecture/nickel-executable-examples.html#using-validations","architecture/nickel-executable-examples.html#example-6-comparison-with-kcl-same-logic","architecture/nickel-executable-examples.html#kcl-version","architecture/nickel-executable-examples.html#nickel-version","architecture/nickel-executable-examples.html#difference-summary","architecture/nickel-executable-examples.html#test-suite-bash-script","architecture/nickel-executable-examples.html#run-all-examples","architecture/nickel-executable-examples.html#quick-commands-reference","architecture/nickel-executable-examples.html#common-nickel-operations","architecture/nickel-executable-examples.html#troubleshooting-examples","architecture/nickel-executable-examples.html#problem-unexpected-token-with-multiple-let","architecture/nickel-executable-examples.html#problem-function-serialization-fails","architecture/nickel-executable-examples.html#problem-null-values-cause-export-issues","architecture/nickel-executable-examples.html#summary","architecture/orchestrator_info.html#cli-code","architecture/orchestrator_info.html#returns-workflow_id--abc-123","architecture/orchestrator_info.html#serverscreatenu","architecture/orchestrator-auth-integration.html#orchestrator-authentication--authorization-integration","architec
ture/orchestrator-auth-integration.html#overview","architecture/orchestrator-auth-integration.html#architecture","architecture/orchestrator-auth-integration.html#security-middleware-chain","architecture/repo-dist-analysis.html#repository-and-distribution-architecture-analysis","architecture/repo-dist-analysis.html#executive-summary","architecture/repo-dist-analysis.html#current-state-analysis","architecture/repo-dist-analysis.html#strengths","architecture/repo-dist-analysis.html#critical-issues","architecture/repo-dist-analysis.html#recommended-architecture","architecture/repo-dist-analysis.html#1-monorepo-structure","architecture/typedialog-nickel-integration.html#typedialog--nickel-integration-guide","architecture/typedialog-nickel-integration.html#what-is-typedialog","architecture/adr/ADR-001-project-structure.html#adr-001-project-structure-decision","architecture/adr/ADR-001-project-structure.html#status","architecture/adr/ADR-001-project-structure.html#context","architecture/adr/ADR-001-project-structure.html#decision","architecture/adr/ADR-002-distribution-strategy.html#adr-002-distribution-strategy","architecture/adr/ADR-002-distribution-strategy.html#status","architecture/adr/ADR-002-distribution-strategy.html#context","architecture/adr/ADR-002-distribution-strategy.html#decision","architecture/adr/ADR-002-distribution-strategy.html#distribution-layers","architecture/adr/ADR-002-distribution-strategy.html#distribution-structure","architecture/adr/ADR-003-workspace-isolation.html#adr-003-workspace-isolation","architecture/adr/ADR-003-workspace-isolation.html#status","architecture/adr/ADR-003-workspace-isolation.html#context","architecture/adr/ADR-003-workspace-isolation.html#decision","architecture/adr/ADR-003-workspace-isolation.html#workspace-structure","architecture/adr/ADR-004-hybrid-architecture.html#adr-004-hybrid-architecture","architecture/adr/ADR-004-hybrid-architecture.html#status","architecture/adr/ADR-004-hybrid-architecture.html#context","architecture/adr/ADR-004-hybrid-architecture.html#decision","architecture/adr/ADR-004-hybrid-architecture.html#architecture-layers","architecture/adr/ADR-004-hybrid-architecture.html#integration-patterns","architecture/adr/ADR-004-hybrid-architecture.html#key-architectural-principles","architecture/adr/ADR-004-hybrid-architecture.html#consequences","architecture/adr/ADR-004-hybrid-architecture.html#positive","architecture/adr/ADR-004-hybrid-architecture.html#negative","architecture/adr/ADR-004-hybrid-architecture.html#neutral","architecture/adr/ADR-004-hybrid-architecture.html#alternatives-considered","architecture/adr/ADR-004-hybrid-architecture.html#alternative-1-pure-nushell-implementation","architecture/adr/ADR-004-hybrid-architecture.html#alternative-2-complete-rust-rewrite","architecture/adr/ADR-004-hybrid-architecture.html#alternative-3-pure-go-implementation","architecture/adr/ADR-004-hybrid-architecture.html#alternative-4-pythonshell-hybrid","architecture/adr/ADR-004-hybrid-architecture.html#alternative-5-container-based-separation","architecture/adr/ADR-004-hybrid-architecture.html#implementation-details","architecture/adr/ADR-004-hybrid-architecture.html#orchestrator-components","architecture/adr/ADR-004-hybrid-architecture.html#integration-protocols","architecture/adr/ADR-004-hybrid-architecture.html#development-workflow","architecture/adr/ADR-004-hybrid-architecture.html#monitoring-and-observability","architecture/adr/ADR-004-hybrid-architecture.html#migration-strategy","architecture/adr/ADR-004-hybrid-architecture.html#pha
se-1-core-infrastructure-completed","architecture/adr/ADR-004-hybrid-architecture.html#phase-2-workflow-integration-completed","architecture/adr/ADR-004-hybrid-architecture.html#phase-3-advanced-features-completed","architecture/adr/ADR-004-hybrid-architecture.html#references","architecture/adr/ADR-005-extension-framework.html#adr-005-extension-framework","architecture/adr/ADR-005-extension-framework.html#status","architecture/adr/ADR-005-extension-framework.html#context","architecture/adr/ADR-005-extension-framework.html#decision","architecture/adr/ADR-005-extension-framework.html#extension-architecture","architecture/adr/ADR-005-extension-framework.html#extension-structure","architecture/adr/ADR-006-provisioning-cli-refactoring.html#adr-006-provisioning-cli-refactoring-to-modular-architecture","architecture/adr/ADR-006-provisioning-cli-refactoring.html#context","architecture/adr/ADR-006-provisioning-cli-refactoring.html#problems-identified","architecture/adr/ADR-006-provisioning-cli-refactoring.html#decision","architecture/adr/ADR-007-kms-simplification.html#adr-007-kms-service-simplification-to-age-and-cosmian-backends","architecture/adr/ADR-007-kms-simplification.html#context","architecture/adr/ADR-007-kms-simplification.html#problems-with-4-backend-approach","architecture/adr/ADR-007-kms-simplification.html#key-insights","architecture/adr/ADR-007-kms-simplification.html#decision","architecture/adr/ADR-007-kms-simplification.html#consequences","architecture/adr/ADR-007-kms-simplification.html#positive","architecture/adr/ADR-007-kms-simplification.html#negative","architecture/adr/ADR-007-kms-simplification.html#neutral","architecture/adr/ADR-007-kms-simplification.html#implementation","architecture/adr/ADR-007-kms-simplification.html#files-created","architecture/adr/ADR-007-kms-simplification.html#files-modified","architecture/adr/ADR-007-kms-simplification.html#files-deleted","architecture/adr/ADR-007-kms-simplification.html#dependencies-changed","architecture/adr/ADR-007-kms-simplification.html#migration-path","architecture/adr/ADR-007-kms-simplification.html#for-development","architecture/adr/ADR-007-kms-simplification.html#for-production","architecture/adr/ADR-007-kms-simplification.html#alternatives-considered","architecture/adr/ADR-007-kms-simplification.html#alternative-1-keep-all-4-backends","architecture/adr/ADR-007-kms-simplification.html#alternative-2-only-cosmian-no-age","architecture/adr/ADR-007-kms-simplification.html#alternative-3-only-age-no-production-backend","architecture/adr/ADR-007-kms-simplification.html#alternative-4-age--hashicorp-vault","architecture/adr/ADR-007-kms-simplification.html#metrics","architecture/adr/ADR-007-kms-simplification.html#code-reduction","architecture/adr/ADR-007-kms-simplification.html#dependency-reduction","architecture/adr/ADR-007-kms-simplification.html#compilation-time","architecture/adr/ADR-007-kms-simplification.html#compliance","architecture/adr/ADR-007-kms-simplification.html#security-considerations","architecture/adr/ADR-007-kms-simplification.html#testing-requirements","architecture/adr/ADR-007-kms-simplification.html#references","architecture/adr/ADR-007-kms-simplification.html#notes","architecture/adr/ADR-008-cedar-authorization.html#adr-008-cedar-authorization-policy-engine-integration","architecture/adr/ADR-008-cedar-authorization.html#context-and-problem-statement","architecture/adr/ADR-008-cedar-authorization.html#decision-drivers","architecture/adr/ADR-008-cedar-authorization.html#considered-options","architecture/adr/ADR-0
08-cedar-authorization.html#option-1-code-based-authorization-current-state","architecture/adr/ADR-008-cedar-authorization.html#option-2-opa-open-policy-agent","architecture/adr/ADR-008-cedar-authorization.html#option-3-cedar-policy-engine-chosen","architecture/adr/ADR-008-cedar-authorization.html#option-4-casbin","architecture/adr/ADR-008-cedar-authorization.html#decision-outcome","architecture/adr/ADR-008-cedar-authorization.html#rationale","architecture/adr/ADR-008-cedar-authorization.html#implementation-details","architecture/adr/ADR-009-security-system-complete.html#adr-009-complete-security-system-implementation","architecture/adr/ADR-009-security-system-complete.html#context","architecture/adr/ADR-009-security-system-complete.html#decision","architecture/adr/ADR-009-security-system-complete.html#implementation-summary","architecture/adr/ADR-009-security-system-complete.html#total-implementation","architecture/adr/ADR-009-security-system-complete.html#architecture-components","architecture/adr/ADR-009-security-system-complete.html#group-1-foundation-13485-lines","architecture/adr/ADR-009-security-system-complete.html#group-2-kms-integration-9331-lines","architecture/adr/ADR-009-security-system-complete.html#group-3-security-features-8948-lines","architecture/adr/ADR-009-security-system-complete.html#group-4-advanced-features-7935-lines","architecture/adr/ADR-009-security-system-complete.html#security-architecture-flow","architecture/adr/ADR-009-security-system-complete.html#end-to-end-request-flow","architecture/adr/ADR-010-configuration-format-strategy.html#adr-010-configuration-file-format-strategy","architecture/adr/ADR-010-configuration-format-strategy.html#context","architecture/adr/ADR-010-configuration-format-strategy.html#decision","architecture/adr/ADR-010-configuration-format-strategy.html#implementation-strategy","architecture/adr/ADR-010-configuration-format-strategy.html#phase-1-documentation-complete","architecture/adr/ADR-010-configuration-format-strategy.html#phase-2-workspace-config-migration-in-progress","architecture/adr/ADR-010-configuration-format-strategy.html#phase-3-template-file-reorganization-in-progress","architecture/adr/ADR-010-configuration-format-strategy.html#toml-for-application-configuration","architecture/adr/ADR-010-configuration-format-strategy.html#yaml-for-metadata-and-kubernetes-resources","architecture/adr/ADR-010-configuration-format-strategy.html#configuration-hierarchy-priority","architecture/adr/ADR-010-configuration-format-strategy.html#migration-path","architecture/adr/ADR-010-configuration-format-strategy.html#for-existing-workspaces","architecture/adr/ADR-010-configuration-format-strategy.html#for-new-workspaces","architecture/adr/ADR-010-configuration-format-strategy.html#file-format-guidelines-for-developers","architecture/adr/ADR-010-configuration-format-strategy.html#when-to-use-each-format","architecture/adr/ADR-010-configuration-format-strategy.html#consequences","architecture/adr/ADR-010-configuration-format-strategy.html#benefits","architecture/adr/ADR-010-configuration-format-strategy.html#trade-offs","architecture/adr/ADR-010-configuration-format-strategy.html#risk-mitigation","architecture/adr/ADR-010-configuration-format-strategy.html#template-file-reorganization","architecture/adr/ADR-010-configuration-format-strategy.html#problem","architecture/adr/ADR-011-nickel-migration.html#adr-011-migration-from-kcl-to-nickel","architecture/adr/ADR-011-nickel-migration.html#context","architecture/adr/ADR-011-nickel-migration.html#prob
lems-with-kcl","architecture/adr/ADR-011-nickel-migration.html#project-needs","architecture/adr/ADR-011-nickel-migration.html#decision","architecture/adr/ADR-011-nickel-migration.html#key-changes","architecture/adr/ADR-011-nickel-migration.html#implementation-summary","architecture/adr/ADR-011-nickel-migration.html#migration-complete","architecture/adr/ADR-011-nickel-migration.html#platform-schemas-provisioningschemas","architecture/adr/ADR-011-nickel-migration.html#extensions-provisioningextensions","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#adr-014-nushell-nickel-plugin---cli-wrapper-architecture","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#status","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#context","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#system-requirements","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#documentation-gap","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#why-nickel-is-different-from-simple-use-cases","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#consequences","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#positive","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#negative","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#mitigation-strategies","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternatives-considered","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-1-pure-rust-with-nickel-lang-core","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-2-hybrid-pure-rust--cli-fallback","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-3-webassembly-version","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-4-use-nickel-lsp","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#implementation-details","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#command-set","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#critical-implementation-detail-command-syntax","api-reference/rest-api.html#rest-api-reference","api-reference/rest-api.html#overview","api-reference/rest-api.html#base-urls","api-reference/rest-api.html#authentication","api-reference/rest-api.html#jwt-authentication","api-reference/websocket.html#websocket-api-reference","api-reference/websocket.html#overview","api-reference/websocket.html#websocket-endpoints","api-reference/websocket.html#primary-websocket-endpoint","api-reference/websocket.html#specialized-websocket-endpoints","api-reference/websocket.html#authentication","api-reference/websocket.html#jwt-token-authentication","api-reference/websocket.html#connection-authentication-flow","api-reference/websocket.html#event-types-and-schemas","api-reference/websocket.html#core-event-types","api-reference/websocket.html#custom-event-types","api-reference/websocket.html#client-side-javascript-api","api-reference/websocket.html#connection-management","api-reference/websocket.html#real-time-dashboard-example","api-reference/websocket.html#server-side-implementation","api-reference/websocket.html#rust-websocket-handler","api-reference/websocket.html#event-filtering-and-subscriptions","api-reference/websocket.html#client-side-filtering","api-reference/websocket.html#server-side-event-filtering","api-reference/websocket.html#error-handling-and-reconnection","api-reference/websocket.html#connection-errors","api-reference/w
ebsocket.html#heartbeat-and-keep-alive","api-reference/websocket.html#performance-considerations","api-reference/websocket.html#message-batching","api-reference/websocket.html#compression","api-reference/websocket.html#rate-limiting","api-reference/websocket.html#security-considerations","api-reference/websocket.html#authentication-and-authorization","api-reference/websocket.html#message-validation","api-reference/websocket.html#data-sanitization","api-reference/extensions.html#extension-development-api","api-reference/extensions.html#overview","api-reference/extensions.html#extension-structure","api-reference/extensions.html#standard-directory-layout","api-reference/sdks.html#sdk-documentation","api-reference/sdks.html#available-sdks","api-reference/sdks.html#official-sdks","api-reference/sdks.html#community-sdks","api-reference/sdks.html#python-sdk","api-reference/sdks.html#installation","api-reference/sdks.html#quick-start","api-reference/sdks.html#advanced-usage","api-reference/sdks.html#api-reference","api-reference/sdks.html#javascripttypescript-sdk","api-reference/sdks.html#installation-1","api-reference/sdks.html#quick-start-1","api-reference/sdks.html#react-integration","api-reference/sdks.html#nodejs-cli-tool","api-reference/sdks.html#api-reference-1","api-reference/sdks.html#go-sdk","api-reference/sdks.html#installation-2","api-reference/sdks.html#quick-start-2","api-reference/sdks.html#websocket-integration","api-reference/sdks.html#http-client-with-retry-logic","api-reference/sdks.html#rust-sdk","api-reference/sdks.html#installation-3","api-reference/sdks.html#quick-start-3","api-reference/sdks.html#websocket-integration-1","api-reference/sdks.html#batch-operations","api-reference/sdks.html#best-practices","api-reference/sdks.html#authentication-and-security","api-reference/sdks.html#error-handling","api-reference/sdks.html#performance-optimization","api-reference/sdks.html#websocket-connections","api-reference/sdks.html#testing","api-reference/integration-examples.html#integration-examples","api-reference/integration-examples.html#overview","api-reference/integration-examples.html#complete-integration-examples","api-reference/integration-examples.html#python-integration","api-reference/integration-examples.html#nodejsjavascript-integration","api-reference/integration-examples.html#error-handling-strategies","api-reference/integration-examples.html#comprehensive-error-handling","api-reference/integration-examples.html#circuit-breaker-pattern","api-reference/integration-examples.html#performance-optimization","api-reference/integration-examples.html#connection-pooling-and-caching","api-reference/integration-examples.html#websocket-connection-pooling","api-reference/integration-examples.html#sdk-documentation","api-reference/integration-examples.html#python-sdk","api-reference/integration-examples.html#javascripttypescript-sdk","api-reference/integration-examples.html#common-integration-patterns","api-reference/integration-examples.html#workflow-orchestration-pipeline","api-reference/integration-examples.html#event-driven-architecture","api-reference/provider-api.html#provider-api-reference","api-reference/provider-api.html#overview","api-reference/provider-api.html#supported-providers","api-reference/provider-api.html#provider-interface","api-reference/provider-api.html#required-functions","api-reference/nushell-api.html#nushell-api-reference","api-reference/nushell-api.html#overview","api-reference/nushell-api.html#core-modules","api-reference/nushell-api.html#configuration-mod
ule","api-reference/nushell-api.html#server-module","api-reference/nushell-api.html#task-service-module","api-reference/nushell-api.html#workspace-module","api-reference/nushell-api.html#provider-module","api-reference/nushell-api.html#diagnostics--utilities","api-reference/nushell-api.html#diagnostics-module","api-reference/nushell-api.html#hints-module","api-reference/nushell-api.html#usage-example","api-reference/nushell-api.html#api-conventions","api-reference/nushell-api.html#best-practices","api-reference/nushell-api.html#source-code","api-reference/path-resolution.html#path-resolution-api","api-reference/path-resolution.html#overview","api-reference/path-resolution.html#configuration-resolution-hierarchy","api-reference/path-resolution.html#error-recovery","api-reference/path-resolution.html#performance-considerations","api-reference/path-resolution.html#best-practices","api-reference/path-resolution.html#monitoring","development/extension-development.html#extension-development-guide","development/extension-development.html#what-youll-learn","development/extension-development.html#extension-architecture","development/extension-development.html#extension-types","development/extension-development.html#extension-structure","development/infrastructure-specific-extensions.html#infrastructure-specific-extension-development","development/infrastructure-specific-extensions.html#table-of-contents","development/infrastructure-specific-extensions.html#overview","development/infrastructure-specific-extensions.html#infrastructure-assessment","development/infrastructure-specific-extensions.html#identifying-extension-needs","development/infrastructure-specific-extensions.html#requirements-gathering","development/infrastructure-specific-extensions.html#custom-taskserv-development","development/infrastructure-specific-extensions.html#company-specific-application-taskserv","development/infrastructure-specific-extensions.html#compliance-focused-taskserv","development/infrastructure-specific-extensions.html#provider-specific-extensions","development/infrastructure-specific-extensions.html#custom-cloud-provider-integration","development/infrastructure-specific-extensions.html#multi-environment-management","development/infrastructure-specific-extensions.html#environment-specific-configuration-management","development/infrastructure-specific-extensions.html#integration-patterns","development/infrastructure-specific-extensions.html#legacy-system-integration","development/infrastructure-specific-extensions.html#real-world-examples","development/infrastructure-specific-extensions.html#example-1-financial-services-company","development/infrastructure-specific-extensions.html#example-2-healthcare-organization","development/infrastructure-specific-extensions.html#example-3-manufacturing-company","development/infrastructure-specific-extensions.html#usage-examples","development/quick-provider-guide.html#quick-developer-guide-adding-new-providers","development/quick-provider-guide.html#prerequisites","development/quick-provider-guide.html#5-minute-provider-addition","development/quick-provider-guide.html#step-1-create-provider-directory","development/quick-provider-guide.html#step-2-copy-template-and-customize","development/quick-provider-guide.html#step-3-update-provider-metadata","development/quick-provider-guide.html#step-4-implement-core-functions","development/quick-provider-guide.html#step-5-create-provider-specific-functions","development/quick-provider-guide.html#step-6-test-your-provider","development/quic
k-provider-guide.html#step-7-add-provider-to-infrastructure","development/quick-provider-guide.html#provider-templates","development/quick-provider-guide.html#cloud-provider-template","development/quick-provider-guide.html#container-platform-template","development/quick-provider-guide.html#bare-metal-provider-template","development/quick-provider-guide.html#best-practices","development/quick-provider-guide.html#1-error-handling","development/quick-provider-guide.html#2-authentication","development/quick-provider-guide.html#3-rate-limiting","development/quick-provider-guide.html#4-provider-capabilities","development/quick-provider-guide.html#testing-checklist","development/quick-provider-guide.html#common-issues","development/quick-provider-guide.html#provider-not-found","development/quick-provider-guide.html#interface-validation-failed","development/quick-provider-guide.html#authentication-errors","development/quick-provider-guide.html#next-steps","development/quick-provider-guide.html#getting-help","development/command-handler-guide.html#command-handler-developer-guide","development/command-handler-guide.html#overview","development/command-handler-guide.html#key-architecture-principles","development/command-handler-guide.html#architecture-components","development/configuration.html#configuration","development/workflow.html#development-workflow-guide","development/workflow.html#table-of-contents","development/workflow.html#overview","development/workflow.html#development-setup","development/workflow.html#initial-environment-setup","development/integration.html#integration-guide","development/integration.html#table-of-contents","development/integration.html#overview","development/build-system.html#build-system-documentation","development/build-system.html#table-of-contents","development/build-system.html#overview","development/build-system.html#quick-start","development/build-system.html#makefile-reference","development/build-system.html#build-configuration","development/build-system.html#build-targets","development/build-system.html#build-tools","development/build-system.html#core-build-scripts","development/build-system.html#distribution-tools","development/build-system.html#package-tools","development/build-system.html#release-tools","development/build-system.html#cross-platform-compilation","development/build-system.html#supported-platforms","development/build-system.html#cross-compilation-setup","development/build-system.html#cross-compilation-usage","development/build-system.html#dependency-management","development/build-system.html#build-dependencies","development/build-system.html#dependency-validation","development/build-system.html#dependency-caching","development/build-system.html#troubleshooting","development/build-system.html#common-build-issues","development/build-system.html#build-performance-issues","development/build-system.html#distribution-issues","development/build-system.html#debug-mode","development/build-system.html#cicd-integration","development/build-system.html#github-actions","development/build-system.html#release-automation","development/build-system.html#local-ci-testing","development/extensions.html#extension-development-guide","development/extensions.html#table-of-contents","development/extensions.html#overview","development/extensions.html#extension-types","development/extensions.html#extension-architecture","development/distribution-process.html#distribution-process-documentation","development/distribution-process.html#table-of-contents","development/distribu
tion-process.html#overview","development/distribution-process.html#distribution-architecture","development/distribution-process.html#distribution-components","development/implementation-guide.html#repository-restructuring---implementation-guide","development/implementation-guide.html#overview","development/implementation-guide.html#prerequisites","development/implementation-guide.html#required-tools","development/implementation-guide.html#recommended-tools","development/implementation-guide.html#before-starting","development/implementation-guide.html#phase-1-repository-restructuring-days-1-4","development/implementation-guide.html#day-1-backup-and-analysis","development/implementation-guide.html#day-2-directory-restructuring","development/implementation-guide.html#day-3-update-path-references","development/implementation-guide.html#day-4-validation-and-testing","development/implementation-guide.html#phase-2-build-system-implementation-days-5-8","development/implementation-guide.html#day-5-build-system-core","development/implementation-guide.html#day-6-8-continue-with-platform-extensions-and-validation","development/implementation-guide.html#phase-3-installation-system-days-9-11","development/implementation-guide.html#day-9-nushell-installer","development/implementation-guide.html#rollback-procedures","development/implementation-guide.html#if-phase-1-fails","development/implementation-guide.html#if-build-system-fails","development/implementation-guide.html#if-installation-fails","development/implementation-guide.html#checklist","development/implementation-guide.html#phase-1-repository-restructuring","development/implementation-guide.html#phase-2-build-system","development/implementation-guide.html#phase-3-installation","development/implementation-guide.html#phase-4-registry-optional","development/implementation-guide.html#phase-5-documentation","development/implementation-guide.html#notes","development/implementation-guide.html#support","development/taskserv-developer-guide.html#taskserv-developer-guide","development/taskserv-quick-guide.html#taskserv-quick-guide","development/taskserv-quick-guide.html#-quick-start","development/taskserv-quick-guide.html#create-a-new-taskserv-interactive","development/project-structure.html#project-structure-guide","development/project-structure.html#table-of-contents","development/project-structure.html#overview","development/project-structure.html#new-structure-vs-legacy","development/project-structure.html#new-development-structure-src","development/provider-agnostic-architecture.html#provider-agnostic-architecture-documentation","development/provider-agnostic-architecture.html#overview","development/provider-agnostic-architecture.html#architecture-components","development/provider-agnostic-architecture.html#1-provider-interface-interfacenu","development/provider-agnostic-architecture.html#adding-new-providers","development/provider-agnostic-architecture.html#1-create-provider-adapter","development/ctrl-c-implementation-notes.html#ctrl-c-handling-implementation-notes","development/ctrl-c-implementation-notes.html#overview","development/ctrl-c-implementation-notes.html#problem-statement","development/ctrl-c-implementation-notes.html#solution-architecture","development/ctrl-c-implementation-notes.html#key-principle-return-values-not-exit-codes","development/ctrl-c-implementation-notes.html#three-layer-approach","development/ctrl-c-implementation-notes.html#implementation-details","development/ctrl-c-implementation-notes.html#1-helper-functions-sshnu11-32","
development/auth-metadata-guide.html#metadata-driven-authentication-system---implementation-guide","development/auth-metadata-guide.html#table-of-contents","development/auth-metadata-guide.html#overview","development/auth-metadata-guide.html#architecture","development/auth-metadata-guide.html#system-components","development/migration-guide.html#migration-guide-target-based-configuration-system","development/migration-guide.html#overview","development/migration-guide.html#migration-path","development/kms-simplification.html#kms-simplification-migration-guide","development/kms-simplification.html#overview","development/kms-simplification.html#what-changed","development/kms-simplification.html#removed","development/kms-simplification.html#added","development/kms-simplification.html#modified","development/kms-simplification.html#why-this-change","development/kms-simplification.html#problems-with-previous-approach","development/kms-simplification.html#benefits-of-simplified-approach","development/kms-simplification.html#migration-steps","development/kms-simplification.html#for-development-environments","development/kms-simplification.html#for-production-environments","development/kms-simplification.html#configuration-comparison","development/kms-simplification.html#before-4-backends","development/kms-simplification.html#after-2-backends","development/kms-simplification.html#breaking-changes","development/kms-simplification.html#api-changes","development/kms-simplification.html#code-migration","development/kms-simplification.html#rust-code","development/kms-simplification.html#nushell-code","development/kms-simplification.html#rollback-plan","development/kms-simplification.html#testing-the-migration","development/kms-simplification.html#development-testing","development/kms-simplification.html#production-testing","development/kms-simplification.html#troubleshooting","development/kms-simplification.html#age-keys-not-found","development/kms-simplification.html#cosmian-connection-failed","development/kms-simplification.html#compilation-errors","development/kms-simplification.html#support","development/kms-simplification.html#timeline","development/kms-simplification.html#faqs","development/kms-simplification.html#checklist","development/kms-simplification.html#development-migration","development/kms-simplification.html#production-migration","development/kms-simplification.html#conclusion","development/migration-example.html#migration-example","development/glossary.html#provisioning-platform-glossary","development/glossary.html#a","development/glossary.html#adr-architecture-decision-record","development/glossary.html#agent","development/glossary.html#anchor-link","development/glossary.html#api-gateway","development/glossary.html#auth-authentication","development/glossary.html#authorization","development/glossary.html#b","development/glossary.html#batch-operation","development/glossary.html#break-glass","development/glossary.html#c","development/glossary.html#cedar","development/glossary.html#checkpoint","development/glossary.html#cli-command-line-interface","development/glossary.html#cluster","development/glossary.html#compliance","development/glossary.html#config-configuration","development/glossary.html#control-center","development/glossary.html#coredns","development/glossary.html#cross-reference","development/glossary.html#d","development/glossary.html#dependency","development/glossary.html#diagnostics","development/glossary.html#dynamic-secrets","development/glossary.html#e","development/glossary
.html#environment","development/glossary.html#extension","development/glossary.html#f","development/glossary.html#feature","development/glossary.html#g","development/glossary.html#gdpr-general-data-protection-regulation","development/glossary.html#glossary","development/glossary.html#guide","development/glossary.html#h","development/glossary.html#health-check","development/glossary.html#hybrid-architecture","development/glossary.html#i","development/glossary.html#infrastructure","development/glossary.html#integration","development/glossary.html#internal-link","development/glossary.html#j","development/glossary.html#jwt-json-web-token","development/glossary.html#k","development/glossary.html#kcl-kcl-configuration-language","development/glossary.html#kms-key-management-service","development/glossary.html#kubernetes","development/glossary.html#l","development/glossary.html#layer","development/glossary.html#m","development/glossary.html#mcp-model-context-protocol","development/glossary.html#mfa-multi-factor-authentication","development/glossary.html#migration","development/glossary.html#module","development/glossary.html#n","development/glossary.html#nushell","development/glossary.html#o","development/glossary.html#oci-open-container-initiative","development/glossary.html#operation","development/glossary.html#orchestrator","development/glossary.html#p","development/glossary.html#pap-project-architecture-principles","development/glossary.html#platform-service","development/glossary.html#plugin","development/glossary.html#provider","development/glossary.html#q","development/glossary.html#quick-reference","development/glossary.html#r","development/glossary.html#rbac-role-based-access-control","development/glossary.html#registry","development/glossary.html#rest-api","development/glossary.html#rollback","development/glossary.html#rustyvault","development/glossary.html#s","development/glossary.html#schema","development/glossary.html#secrets-management","development/glossary.html#security-system","development/glossary.html#server","development/glossary.html#service","development/glossary.html#shortcut","development/glossary.html#sops-secrets-operations","development/glossary.html#ssh-secure-shell","development/glossary.html#state-management","development/glossary.html#t","development/glossary.html#task","development/glossary.html#taskserv","development/glossary.html#template","development/glossary.html#test-environment","development/glossary.html#topology","development/glossary.html#totp-time-based-one-time-password","development/glossary.html#troubleshooting","development/glossary.html#u","development/glossary.html#ui-user-interface","development/glossary.html#update","development/glossary.html#v","development/glossary.html#validation","development/glossary.html#version","development/glossary.html#w","development/glossary.html#webauthn","development/glossary.html#workflow","development/glossary.html#workspace","development/glossary.html#x-z","development/glossary.html#yaml","development/glossary.html#symbol-and-acronym-index","development/glossary.html#cross-reference-map","development/glossary.html#by-topic-area","development/glossary.html#by-user-journey","development/glossary.html#terminology-guidelines","development/glossary.html#writing-style","development/glossary.html#avoiding-confusion","development/glossary.html#contributing-to-the-glossary","development/glossary.html#adding-new-terms","development/glossary.html#updating-existing-terms","development/glossary.html#version-history","developmen
t/provider-distribution-guide.html#provider-distribution-guide","development/provider-distribution-guide.html#table-of-contents","development/provider-distribution-guide.html#overview","development/provider-distribution-guide.html#module-loader-approach","development/provider-distribution-guide.html#purpose","development/provider-distribution-guide.html#how-it-works","development/provider-distribution-guide.html#for-releases","development/provider-distribution-guide.html#for-production","development/provider-distribution-guide.html#for-cicd","development/provider-distribution-guide.html#migration-path","development/provider-distribution-guide.html#from-module-loader-to-packs","development/taskserv-categorization.html#taskserv-categorization-plan","development/taskserv-categorization.html#categories-and-taskservs-38-total","development/taskserv-categorization.html#kubernetes--1","development/taskserv-categorization.html#networking--6","development/taskserv-categorization.html#container-runtime--6","development/taskserv-categorization.html#storage--4","development/taskserv-categorization.html#databases--2","development/taskserv-categorization.html#development--6","development/taskserv-categorization.html#infrastructure--6","development/taskserv-categorization.html#misc--1","development/taskserv-categorization.html#keep-in-root--6","development/extension-registry.html#extension-registry-service","development/extension-registry.html#features","development/extension-registry.html#architecture","development/extension-registry.html#dual-trait-system","development/mcp-server.html#mcp-server---model-context-protocol","development/mcp-server.html#overview","development/mcp-server.html#performance-results","development/typedialog-platform-config-guide.html#typedialog-platform-configuration-guide","development/typedialog-platform-config-guide.html#overview","development/typedialog-platform-config-guide.html#quick-start","development/typedialog-platform-config-guide.html#1-configure-a-platform-service-5-minutes","development/typedialog-platform-config-guide.html#2-review-generated-configuration","development/typedialog-platform-config-guide.html#3-validate-configuration","development/typedialog-platform-config-guide.html#4-services-use-generated-config","development/typedialog-platform-config-guide.html#interactive-configuration-workflow","development/typedialog-platform-config-guide.html#recommended-approach-use-typedialog-forms","development/typedialog-platform-config-guide.html#advanced-approach-manual-nickel-editing","development/typedialog-platform-config-guide.html#configuration-structure","development/typedialog-platform-config-guide.html#single-file-three-sections","development/typedialog-platform-config-guide.html#available-configuration-sections","development/typedialog-platform-config-guide.html#service-specific-configuration","development/typedialog-platform-config-guide.html#orchestrator-service","development/typedialog-platform-config-guide.html#kms-service","development/typedialog-platform-config-guide.html#control-center-service","development/typedialog-platform-config-guide.html#deployment-modes","development/typedialog-platform-config-guide.html#new-platform-services-phase-13-19","development/typedialog-platform-config-guide.html#vault-service","development/typedialog-platform-config-guide.html#extension-registry-service","development/typedialog-platform-config-guide.html#rag-retrieval-augmented-generation-service","development/typedialog-platform-config-guide.html#ai-service","develop
ment/typedialog-platform-config-guide.html#provisioning-daemon","development/typedialog-platform-config-guide.html#using-typedialog-forms","development/typedialog-platform-config-guide.html#form-navigation","development/typedialog-platform-config-guide.html#field-types","development/typedialog-platform-config-guide.html#special-values","development/typedialog-platform-config-guide.html#validation--export","development/typedialog-platform-config-guide.html#validating-configuration","development/typedialog-platform-config-guide.html#exporting-to-service-formats","development/typedialog-platform-config-guide.html#updating-configuration","development/typedialog-platform-config-guide.html#change-a-setting","development/typedialog-platform-config-guide.html#using-typedialog-to-update","development/typedialog-platform-config-guide.html#troubleshooting","development/typedialog-platform-config-guide.html#form-wont-load","development/typedialog-platform-config-guide.html#validation-fails","development/typedialog-platform-config-guide.html#export-creates-empty-files","development/typedialog-platform-config-guide.html#services-dont-use-new-config","development/typedialog-platform-config-guide.html#configuration-examples","development/typedialog-platform-config-guide.html#development-setup","development/typedialog-platform-config-guide.html#production-setup","development/typedialog-platform-config-guide.html#multi-provider-setup","development/typedialog-platform-config-guide.html#best-practices","development/typedialog-platform-config-guide.html#1-use-typedialog-for-initial-setup","development/typedialog-platform-config-guide.html#2-never-edit-generated-files","development/typedialog-platform-config-guide.html#3-validate-before-deploy","development/typedialog-platform-config-guide.html#4-use-environment-variables-for-secrets","development/typedialog-platform-config-guide.html#5-document-changes","development/typedialog-platform-config-guide.html#related-documentation","development/typedialog-platform-config-guide.html#core-resources","development/typedialog-platform-config-guide.html#platform-services","development/typedialog-platform-config-guide.html#public-definition-locations","development/typedialog-platform-config-guide.html#getting-help","development/typedialog-platform-config-guide.html#validation-errors","development/typedialog-platform-config-guide.html#configuration-questions","development/typedialog-platform-config-guide.html#test-configuration","operations/deployment-guide.html#platform-deployment-guide","operations/deployment-guide.html#table-of-contents","operations/deployment-guide.html#prerequisites","operations/deployment-guide.html#required-software","operations/deployment-guide.html#required-tools-mode-dependent","operations/deployment-guide.html#system-requirements","operations/deployment-guide.html#directory-structure","operations/deployment-guide.html#deployment-modes","operations/deployment-guide.html#mode-selection-matrix","operations/deployment-guide.html#mode-characteristics","operations/deployment-guide.html#quick-start","operations/deployment-guide.html#1-clone-repository","operations/deployment-guide.html#2-select-deployment-mode","operations/deployment-guide.html#3-set-environment-variables","operations/deployment-guide.html#4-build-all-services","operations/deployment-guide.html#5-start-services-order-matters","operations/deployment-guide.html#6-verify-services","operations/deployment-guide.html#solo-mode-deployment","operations/deployment-guide.html#step-1-verify-solo-co
nfiguration-files","operations/deployment-guide.html#step-2-set-solo-environment-variables","operations/deployment-guide.html#step-3-build-services","operations/deployment-guide.html#step-4-create-local-data-directories","operations/deployment-guide.html#step-5-start-services","operations/deployment-guide.html#step-6-test-services","operations/deployment-guide.html#step-7-verify-persistence-optional","operations/deployment-guide.html#cleanup","operations/deployment-guide.html#multiuser-mode-deployment","operations/deployment-guide.html#prerequisites-1","operations/deployment-guide.html#step-1-deploy-surrealdb","operations/deployment-guide.html#step-2-verify-surrealdb-connectivity","operations/deployment-guide.html#step-3-set-multiuser-environment-variables","operations/deployment-guide.html#step-4-build-services","operations/deployment-guide.html#step-5-create-shared-data-directories","operations/deployment-guide.html#step-6-start-services-on-multiple-machines","operations/deployment-guide.html#step-7-test-multi-machine-setup","operations/deployment-guide.html#step-8-enable-user-access","operations/deployment-guide.html#monitoring-multiuser-deployment","operations/deployment-guide.html#cicd-mode-deployment","operations/deployment-guide.html#step-1-understand-ephemeral-nature","operations/deployment-guide.html#step-2-set-cicd-environment-variables","operations/deployment-guide.html#step-3-containerize-services-optional","operations/deployment-guide.html#step-4-github-actions-example","operations/deployment-guide.html#step-5-run-cicd-tests","operations/deployment-guide.html#enterprise-mode-deployment","operations/deployment-guide.html#prerequisites-2","operations/deployment-guide.html#step-1-deploy-infrastructure","operations/deployment-guide.html#step-2-set-enterprise-environment-variables","operations/deployment-guide.html#step-3-deploy-services-across-cluster","operations/deployment-guide.html#step-4-monitor-cluster-health","operations/deployment-guide.html#step-5-enable-monitoring--alerting","operations/deployment-guide.html#step-6-backup--recovery","operations/deployment-guide.html#service-management","operations/deployment-guide.html#starting-services","operations/deployment-guide.html#stopping-services","operations/deployment-guide.html#restarting-services","operations/deployment-guide.html#checking-service-status","operations/deployment-guide.html#health-checks--monitoring","operations/deployment-guide.html#manual-health-verification","operations/deployment-guide.html#service-integration-tests","operations/deployment-guide.html#monitoring-dashboards","operations/deployment-guide.html#alerting","operations/deployment-guide.html#troubleshooting","operations/deployment-guide.html#service-wont-start","operations/deployment-guide.html#configuration-loading-fails","operations/deployment-guide.html#database-connection-issues","operations/deployment-guide.html#service-crashes-on-startup","operations/deployment-guide.html#high-memory-usage","operations/deployment-guide.html#networkdns-issues","operations/deployment-guide.html#data-persistence-issues","operations/deployment-guide.html#debugging-checklist","operations/deployment-guide.html#configuration-updates","operations/deployment-guide.html#updating-service-configuration","operations/deployment-guide.html#mode-migration","operations/deployment-guide.html#production-checklist","operations/deployment-guide.html#getting-help","operations/deployment-guide.html#community-resources","operations/deployment-guide.html#internal-support","operations/
deployment-guide.html#useful-commands-reference","operations/service-management-guide.html#service-management-guide","operations/service-management-guide.html#table-of-contents","operations/service-management-guide.html#overview","operations/service-management-guide.html#key-features","operations/service-management-guide.html#supported-services","operations/service-management-guide.html#service-architecture","operations/service-management-guide.html#system-architecture","operations/monitoring-alerting-setup.html#service-monitoring--alerting-setup","operations/monitoring-alerting-setup.html#overview","operations/monitoring-alerting-setup.html#architecture","operations/monitoring-alerting-setup.html#prerequisites","operations/monitoring-alerting-setup.html#software-requirements","operations/monitoring-alerting-setup.html#system-requirements","operations/monitoring-alerting-setup.html#ports","operations/monitoring-alerting-setup.html#service-metrics-endpoints","operations/monitoring-alerting-setup.html#prometheus-configuration","operations/monitoring-alerting-setup.html#1-create-prometheus-config","operations/monitoring-alerting-setup.html#2-start-prometheus","operations/monitoring-alerting-setup.html#3-verify-prometheus","operations/monitoring-alerting-setup.html#alert-rules-configuration","operations/monitoring-alerting-setup.html#1-create-alert-rules","operations/monitoring-alerting-setup.html#2-validate-alert-rules","operations/monitoring-alerting-setup.html#alertmanager-configuration","operations/monitoring-alerting-setup.html#1-create-alertmanager-config","operations/monitoring-alerting-setup.html#2-start-alertmanager","operations/monitoring-alerting-setup.html#3-verify-alertmanager","operations/monitoring-alerting-setup.html#grafana-dashboards","operations/monitoring-alerting-setup.html#1-install-grafana","operations/monitoring-alerting-setup.html#2-add-prometheus-data-source","operations/monitoring-alerting-setup.html#3-create-platform-overview-dashboard","operations/monitoring-alerting-setup.html#4-import-dashboard-via-api","operations/monitoring-alerting-setup.html#health-check-monitoring","operations/monitoring-alerting-setup.html#1-service-health-check-script","operations/monitoring-alerting-setup.html#2-liveness-probe-configuration","operations/monitoring-alerting-setup.html#log-aggregation-elk-stack","operations/monitoring-alerting-setup.html#1-elasticsearch-setup","operations/monitoring-alerting-setup.html#2-filebeat-configuration","operations/monitoring-alerting-setup.html#3-kibana-dashboard","operations/monitoring-alerting-setup.html#monitoring-dashboard-queries","operations/monitoring-alerting-setup.html#common-prometheus-queries","operations/monitoring-alerting-setup.html#alert-testing","operations/monitoring-alerting-setup.html#1-test-alert-firing","operations/monitoring-alerting-setup.html#2-stop-service-to-trigger-alert","operations/monitoring-alerting-setup.html#3-generate-load-to-test-error-alerts","operations/monitoring-alerting-setup.html#backup--retention-policies","operations/monitoring-alerting-setup.html#1-prometheus-data-backup","operations/monitoring-alerting-setup.html#2-prometheus-retention-configuration","operations/monitoring-alerting-setup.html#maintenance--troubleshooting","operations/monitoring-alerting-setup.html#common-issues","operations/monitoring-alerting-setup.html#production-deployment-checklist","operations/monitoring-alerting-setup.html#quick-commands-reference","operations/monitoring-alerting-setup.html#documentation--runbooks","operations/monit
oring-alerting-setup.html#sample-runbook-service-down","operations/monitoring-alerting-setup.html#resources","operations/service-management-quickref.html#service-management-quick-reference","operations/coredns-guide.html#coredns-integration-guide","operations/coredns-guide.html#table-of-contents","operations/coredns-guide.html#overview","operations/coredns-guide.html#key-features","operations/coredns-guide.html#installation","operations/coredns-guide.html#prerequisites","operations/coredns-guide.html#install-coredns-binary","operations/coredns-guide.html#dns-queries-not-working","operations/coredns-guide.html#zone-file-validation-errors","operations/coredns-guide.html#docker-container-issues","operations/coredns-guide.html#dynamic-updates-not-working","operations/coredns-guide.html#advanced-topics","operations/coredns-guide.html#custom-corefile-plugins","operations/backup-recovery.html#backup-and-recovery","operations/deployment.html#deployment-guide","operations/monitoring.html#monitoring-guide","operations/production-readiness-checklist.html#production-readiness-checklist","operations/production-readiness-checklist.html#executive-summary","operations/production-readiness-checklist.html#quality-metrics","operations/production-readiness-checklist.html#pre-deployment-verification","operations/production-readiness-checklist.html#1-system-requirements-","operations/production-readiness-checklist.html#2-code-quality-","operations/production-readiness-checklist.html#3-testing-","operations/production-readiness-checklist.html#4-security-","operations/production-readiness-checklist.html#5-documentation-","operations/production-readiness-checklist.html#6-deployment-readiness-","operations/production-readiness-checklist.html#pre-production-checklist","operations/production-readiness-checklist.html#team-preparation","operations/production-readiness-checklist.html#infrastructure-preparation","operations/production-readiness-checklist.html#configuration-preparation","operations/production-readiness-checklist.html#testing-in-production-like-environment","operations/production-readiness-checklist.html#deployment-steps","operations/production-readiness-checklist.html#phase-1-installation-30-minutes","operations/production-readiness-checklist.html#phase-2-initial-configuration-15-minutes","operations/production-readiness-checklist.html#phase-3-workspace-setup-10-minutes","operations/production-readiness-checklist.html#phase-4-verification-10-minutes","operations/production-readiness-checklist.html#post-deployment-verification","operations/production-readiness-checklist.html#immediate-within-1-hour","operations/production-readiness-checklist.html#daily-first-week","operations/production-readiness-checklist.html#weekly-first-month","operations/production-readiness-checklist.html#ongoing-production","operations/production-readiness-checklist.html#troubleshooting-reference","operations/production-readiness-checklist.html#issue-setup-wizard-wont-start","operations/production-readiness-checklist.html#issue-configuration-validation-fails","operations/production-readiness-checklist.html#issue-health-check-shows-warnings","operations/production-readiness-checklist.html#issue-deployment-fails","operations/production-readiness-checklist.html#performance-baselines","operations/production-readiness-checklist.html#support-and-escalation","operations/production-readiness-checklist.html#level-1-support-team","operations/production-readiness-checklist.html#level-2-support-engineering","operations/production-readiness-check
list.html#level-3-support-development","operations/production-readiness-checklist.html#rollback-procedure","operations/production-readiness-checklist.html#success-criteria","operations/production-readiness-checklist.html#sign-off","operations/break-glass-training-guide.html#break-glass-emergency-access---training-guide","operations/break-glass-training-guide.html#-what-is-break-glass","operations/break-glass-training-guide.html#key-principles","operations/break-glass-training-guide.html#-table-of-contents","operations/break-glass-training-guide.html#when-to-use-break-glass","operations/break-glass-training-guide.html#-valid-emergency-scenarios","operations/break-glass-training-guide.html#criteria-checklist","operations/break-glass-training-guide.html#when-not-to-use","operations/break-glass-training-guide.html#-invalid-scenarios-do-not-use-break-glass","operations/break-glass-training-guide.html#consequences-of-misuse","operations/break-glass-training-guide.html#roles--responsibilities","operations/break-glass-training-guide.html#requester","operations/break-glass-training-guide.html#approvers","operations/break-glass-training-guide.html#security-team","operations/break-glass-training-guide.html#break-glass-workflow","operations/break-glass-training-guide.html#phase-1-request-5-minutes","operations/cedar-policies-production-guide.html#cedar-policies-production-guide","operations/cedar-policies-production-guide.html#table-of-contents","operations/cedar-policies-production-guide.html#introduction","operations/cedar-policies-production-guide.html#why-cedar","operations/cedar-policies-production-guide.html#cedar-policy-basics","operations/cedar-policies-production-guide.html#core-concepts","operations/cedar-policies-production-guide.html#unexpected-denials","operations/mfa-admin-setup-guide.html#mfa-admin-setup-guide---production-operations-manual","operations/mfa-admin-setup-guide.html#-table-of-contents","operations/mfa-admin-setup-guide.html#overview","operations/mfa-admin-setup-guide.html#what-is-mfa","operations/mfa-admin-setup-guide.html#why-mfa-for-admins","operations/mfa-admin-setup-guide.html#mfa-methods-supported","operations/mfa-admin-setup-guide.html#mfa-requirements","operations/mfa-admin-setup-guide.html#mandatory-mfa-enforcement","operations/mfa-admin-setup-guide.html#grace-period","operations/mfa-admin-setup-guide.html#timeline-for-rollout","operations/orchestrator.html#provisioning-orchestrator","operations/orchestrator.html#architecture","operations/orchestrator.html#key-features","operations/orchestrator.html#quick-start","operations/orchestrator.html#build-and-run","operations/orchestrator.html#submit-workflow","operations/orchestrator.html#api-endpoints","operations/orchestrator.html#core-endpoints","operations/orchestrator.html#workflow-endpoints","operations/orchestrator.html#test-environment-endpoints","operations/orchestrator.html#test-environment-service","operations/orchestrator.html#test-environment-types","operations/orchestrator.html#nushell-cli-integration","operations/orchestrator.html#topology-templates","operations/orchestrator.html#storage-backends","operations/orchestrator.html#related-documentation","operations/orchestrator-system.html#hybrid-orchestrator-architecture-v300","operations/orchestrator-system.html#-orchestrator-implementation-completed-2025-09-25","operations/orchestrator-system.html#architecture-overview","operations/orchestrator-system.html#orchestrator-management","operations/orchestrator-system.html#workflow-system","operations/orchestrator-
system.html#server-workflows","operations/orchestrator-system.html#taskserv-workflows","operations/orchestrator-system.html#cluster-workflows","operations/orchestrator-system.html#workflow-management","operations/orchestrator-system.html#rest-api-endpoints","operations/control-center.html#control-center---cedar-policy-engine","operations/control-center.html#key-features","operations/control-center.html#cedar-policy-engine","operations/control-center.html#security--authentication","operations/control-center.html#compliance-framework","operations/control-center.html#anomaly-detection","operations/control-center.html#storage--persistence","operations/control-center.html#quick-start","operations/control-center.html#installation","operations/control-center.html#configuration","operations/control-center.html#start-server","operations/control-center.html#test-policy-evaluation","operations/control-center.html#policy-examples","operations/control-center.html#multi-factor-authentication-policy","operations/control-center.html#production-approval-policy","operations/control-center.html#geographic-restrictions","operations/control-center.html#cli-commands","operations/control-center.html#policy-management","operations/control-center.html#compliance-checking","operations/control-center.html#api-endpoints","operations/control-center.html#policy-evaluation","operations/control-center.html#policy-versions","operations/control-center.html#compliance","operations/control-center.html#anomaly-detection-1","operations/control-center.html#architecture","operations/control-center.html#core-components","operations/control-center.html#configuration-driven-design","operations/control-center.html#deployment","operations/control-center.html#docker","operations/control-center.html#kubernetes","operations/control-center.html#related-documentation","operations/installer.html#provisioning-platform-installer","operations/installer.html#features","operations/installer.html#installation","operations/installer-system.html#provisioning-platform-installer-v350","operations/installer-system.html#-flexible-installation-and-configuration-system","operations/installer-system.html#installation-modes","operations/installer-system.html#1--interactive-tui-mode","operations/installer-system.html#2--headless-mode","operations/installer-system.html#3--unattended-mode","operations/installer-system.html#deployment-modes","operations/installer-system.html#configuration-system","operations/installer-system.html#toml-configuration","operations/installer-system.html#configuration-loading-priority","operations/installer-system.html#mcp-integration","operations/installer-system.html#deployment-automation","operations/installer-system.html#nushell-scripts","operations/installer-system.html#self-installation","operations/installer-system.html#command-reference","operations/installer-system.html#integration-examples","operations/installer-system.html#gitops-workflow","operations/installer-system.html#terraform-integration","operations/installer-system.html#ansible-integration","operations/installer-system.html#configuration-templates","operations/installer-system.html#documentation","operations/installer-system.html#help-and-support","operations/installer-system.html#nushell-fallback","operations/provisioning-server.html#provisioning-api-server","operations/provisioning-server.html#features","operations/provisioning-server.html#architecture","infrastructure/infrastructure-management.html#infrastructure-management-guide","infrastructure/infrastructu
re-management.html#what-youll-learn","infrastructure/infrastructure-management.html#infrastructure-concepts","infrastructure/infrastructure-management.html#infrastructure-components","infrastructure/infrastructure-management.html#infrastructure-lifecycle","infrastructure/infrastructure-management.html#advanced-testing","infrastructure/infrastructure-from-code-guide.html#infrastructure-from-code-iac-guide","infrastructure/infrastructure-from-code-guide.html#overview","infrastructure/infrastructure-from-code-guide.html#quick-start","infrastructure/infrastructure-from-code-guide.html#1-detect-technologies-in-your-project","infrastructure/batch-workflow-system.html#batch-workflow-system-v310---token-optimized-architecture","infrastructure/batch-workflow-system.html#-batch-workflow-system-completed-2025-09-25","infrastructure/batch-workflow-system.html#key-achievements","infrastructure/batch-workflow-system.html#batch-workflow-commands","infrastructure/batch-workflow-system.html#kcl-workflow-schema","infrastructure/batch-workflow-system.html#rest-api-endpoints-batch-operations","infrastructure/batch-workflow-system.html#system-benefits","infrastructure/cli-architecture.html#modular-cli-architecture-v320---major-refactoring","infrastructure/cli-architecture.html#-cli-refactoring-completed-2025-09-30","infrastructure/cli-architecture.html#architecture-improvements","infrastructure/cli-architecture.html#command-shortcuts-reference","infrastructure/cli-architecture.html#infrastructure","infrastructure/cli-architecture.html#orchestration","infrastructure/cli-architecture.html#development","infrastructure/cli-architecture.html#workspace","infrastructure/cli-architecture.html#configuration","infrastructure/cli-architecture.html#utilities","infrastructure/cli-architecture.html#generation","infrastructure/cli-architecture.html#special-commands","infrastructure/cli-architecture.html#bi-directional-help-system","infrastructure/configuration-system.html#configuration-system-v200","infrastructure/configuration-system.html#-migration-completed-2025-09-23","infrastructure/configuration-system.html#configuration-files","infrastructure/configuration-system.html#essential-commands","infrastructure/configuration-system.html#configuration-architecture","infrastructure/configuration-system.html#configuration-loading-hierarchy-priority","infrastructure/configuration-system.html#file-type-guidelines","infrastructure/workspace-setup.html#workspace-setup-guide","infrastructure/workspace-setup.html#quick-start","infrastructure/workspace-setup.html#1-create-a-new-infrastructure-workspace","infrastructure/workspace-switching-guide.html#workspace-switching-guide","infrastructure/workspace-switching-guide.html#overview","infrastructure/workspace-switching-guide.html#quick-start","infrastructure/workspace-switching-guide.html#list-available-workspaces","infrastructure/workspace-switching-system.html#workspace-switching-system-v205","infrastructure/workspace-switching-system.html#-workspace-switching-completed-2025-10-02","infrastructure/workspace-switching-system.html#key-features","infrastructure/workspace-switching-system.html#workspace-management-commands","infrastructure/cli-reference.html#cli-reference","infrastructure/cli-reference.html#what-youll-learn","infrastructure/cli-reference.html#command-structure","infrastructure/cli-reference.html#global-options","infrastructure/cli-reference.html#output-formats","infrastructure/cli-reference.html#core-commands","infrastructure/cli-reference.html#help---show-help-information"
,"infrastructure/cli-reference.html#version---show-version-information","infrastructure/cli-reference.html#env---environment-information","infrastructure/cli-reference.html#server-management-commands","infrastructure/cli-reference.html#server-create---create-servers","infrastructure/cli-reference.html#server-delete---delete-servers","infrastructure/cli-reference.html#server-list---list-servers","infrastructure/cli-reference.html#server-ssh---ssh-access","infrastructure/cli-reference.html#server-price---cost-information","infrastructure/cli-reference.html#task-service-commands","infrastructure/cli-reference.html#taskserv-create---install-services","infrastructure/cli-reference.html#taskserv-delete---remove-services","infrastructure/cli-reference.html#taskserv-list---list-services","infrastructure/cli-reference.html#taskserv-generate---generate-configurations","infrastructure/cli-reference.html#taskserv-check-updates---version-management","infrastructure/cli-reference.html#cluster-management-commands","infrastructure/cli-reference.html#cluster-create---deploy-clusters","infrastructure/cli-reference.html#cluster-delete---remove-clusters","infrastructure/cli-reference.html#cluster-list---list-clusters","infrastructure/cli-reference.html#cluster-scale---scale-clusters","infrastructure/cli-reference.html#infrastructure-commands","infrastructure/cli-reference.html#generate---generate-configurations","infrastructure/cli-reference.html#show---display-information","infrastructure/cli-reference.html#list---list-resources","infrastructure/cli-reference.html#validate---validate-configuration","infrastructure/cli-reference.html#configuration-commands","infrastructure/cli-reference.html#init---initialize-configuration","infrastructure/cli-reference.html#template---template-management","infrastructure/cli-reference.html#advanced-commands","infrastructure/cli-reference.html#nu---interactive-shell","infrastructure/cli-reference.html#sops---secret-management","infrastructure/cli-reference.html#context---context-management","infrastructure/cli-reference.html#workflow-commands","infrastructure/cli-reference.html#workflows---batch-operations","infrastructure/cli-reference.html#orchestrator---orchestrator-management","infrastructure/cli-reference.html#scripting-and-automation","infrastructure/cli-reference.html#exit-codes","infrastructure/cli-reference.html#environment-variables","infrastructure/cli-reference.html#batch-operations","infrastructure/cli-reference.html#json-output-processing","infrastructure/cli-reference.html#command-chaining-and-pipelines","infrastructure/cli-reference.html#sequential-operations","infrastructure/cli-reference.html#complex-workflows","infrastructure/cli-reference.html#integration-with-other-tools","infrastructure/cli-reference.html#cicd-integration","infrastructure/cli-reference.html#monitoring-integration","infrastructure/cli-reference.html#backup-automation","infrastructure/workspace-config-architecture.html#workspace-configuration-architecture","infrastructure/workspace-config-architecture.html#overview","infrastructure/workspace-config-architecture.html#critical-design-principle","infrastructure/workspace-config-architecture.html#configuration-hierarchy","infrastructure/workspace-config-architecture.html#workspace-structure","infrastructure/dynamic-secrets-guide.html#dynamic-secrets-guide","infrastructure/dynamic-secrets-guide.html#quick-reference","infrastructure/dynamic-secrets-guide.html#quick-commands","infrastructure/dynamic-secrets-guide.html#secret-types","infrastructure
/dynamic-secrets-guide.html#rest-api-endpoints","infrastructure/dynamic-secrets-guide.html#aws-sts-example","infrastructure/dynamic-secrets-guide.html#ssh-key-example","infrastructure/dynamic-secrets-guide.html#configuration","infrastructure/dynamic-secrets-guide.html#troubleshooting","infrastructure/dynamic-secrets-guide.html#provider-not-found","infrastructure/dynamic-secrets-guide.html#ttl-exceeds-maximum","infrastructure/dynamic-secrets-guide.html#secret-not-renewable","infrastructure/dynamic-secrets-guide.html#missing-required-parameter","infrastructure/dynamic-secrets-guide.html#security-features","infrastructure/dynamic-secrets-guide.html#support","infrastructure/mode-system-guide.html#mode-system-quick-reference","infrastructure/mode-system-guide.html#quick-start","infrastructure/workspace-guide.html#workspace-guide","infrastructure/workspace-guide.html#-workspace-switching-guide","infrastructure/workspace-guide.html#quick-start","infrastructure/workspace-guide.html#additional-workspace-resources","infrastructure/workspace-enforcement-guide.html#workspace-enforcement-and-version-tracking-guide","infrastructure/workspace-enforcement-guide.html#table-of-contents","infrastructure/workspace-enforcement-guide.html#overview","infrastructure/workspace-enforcement-guide.html#key-features","infrastructure/workspace-enforcement-guide.html#workspace-requirement","infrastructure/workspace-enforcement-guide.html#commands-that-require-workspace","infrastructure/workspace-enforcement-guide.html#commands-that-dont-require-workspace","infrastructure/workspace-enforcement-guide.html#what-happens-without-a-workspace","infrastructure/workspace-infra-reference.html#unified-workspaceinfrastructure-reference-system","infrastructure/workspace-infra-reference.html#overview","infrastructure/workspace-infra-reference.html#quick-start","infrastructure/workspace-infra-reference.html#temporal-override-single-command","infrastructure/workspace-infra-reference.html#usage-patterns","infrastructure/workspace-infra-reference.html#pattern-1-temporal-override-for-commands","infrastructure/workspace-config-commands.html#workspace-configuration-management-commands","infrastructure/workspace-config-commands.html#overview","infrastructure/workspace-config-commands.html#command-summary","infrastructure/workspace-config-commands.html#commands","infrastructure/workspace-config-commands.html#show-workspace-configuration","infrastructure/workspace-config-commands.html#configuration-file-locations","infrastructure/config-rendering-guide.html#configuration-rendering-guide","infrastructure/config-rendering-guide.html#overview","infrastructure/config-rendering-guide.html#quick-start","infrastructure/config-rendering-guide.html#starting-the-daemon","infrastructure/config-rendering-guide.html#see-also","infrastructure/config-rendering-guide.html#quick-reference","infrastructure/config-rendering-guide.html#api-endpoint","infrastructure/configuration.html#configuration-guide","infrastructure/configuration.html#what-youll-learn","infrastructure/configuration.html#configuration-architecture","infrastructure/configuration.html#configuration-hierarchy","security/authentication-layer-guide.html#authentication-layer-implementation-guide","security/authentication-layer-guide.html#overview","security/authentication-layer-guide.html#key-features","security/authentication-layer-guide.html#--jwt-authentication","security/authentication-layer-guide.html#--mfa-support","security/authentication-layer-guide.html#--security-policies","security/authent
ication-layer-guide.html#--audit-logging","security/authentication-layer-guide.html#--user-friendly-error-messages","security/authentication-layer-guide.html#quick-start","security/authentication-layer-guide.html#1-login-to-platform","security/config-encryption-guide.html#configuration-encryption-guide","security/config-encryption-guide.html#overview","security/config-encryption-guide.html#table-of-contents","security/config-encryption-guide.html#prerequisites","security/config-encryption-guide.html#required-tools","security/config-encryption-guide.html#verify-installation","security/config-encryption-guide.html#aws-kms-access-denied","security/security-system.html#complete-security-system-v400","security/security-system.html#-enterprise-grade-security-implementation","security/security-system.html#core-security-components","security/security-system.html#1--authentication--jwt","security/security-system.html#2--authorization--cedar","security/security-system.html#3--multi-factor-authentication--mfa","security/security-system.html#4--secrets-management","security/security-system.html#5--key-management-system--kms","security/security-system.html#6--audit-logging","security/security-system.html#7--break-glass-emergency-access","security/security-system.html#8--compliance-management","security/security-system.html#9--audit-query-system","security/security-system.html#10--token-management","security/security-system.html#11--access-control","security/security-system.html#12--encryption","security/security-system.html#performance-characteristics","security/security-system.html#quick-reference","security/security-system.html#architecture","security/security-system.html#configuration","security/security-system.html#documentation","security/security-system.html#help-commands","security/rustyvault-kms-guide.html#rustyvault-kms-backend-guide","security/rustyvault-kms-guide.html#overview","security/rustyvault-kms-guide.html#why-rustyvault","security/rustyvault-kms-guide.html#architecture-position","security/secretumvault-kms-guide.html#secretumvault-kms-backend-guide","security/secretumvault-kms-guide.html#overview","security/secretumvault-kms-guide.html#what-is-secretumvault","security/secretumvault-kms-guide.html#when-to-use-secretumvault","security/secretumvault-kms-guide.html#deployment-modes","security/secretumvault-kms-guide.html#development-mode-embedded","security/secretumvault-kms-guide.html#staging-mode-service--surrealdb","security/secretumvault-kms-guide.html#production-mode-service--etcd","security/secretumvault-kms-guide.html#configuration","security/secretumvault-kms-guide.html#environment-variables","security/secretumvault-kms-guide.html#configuration-files","security/secretumvault-kms-guide.html#operations","security/secretumvault-kms-guide.html#encrypt-data","security/secretumvault-kms-guide.html#decrypt-data","security/secretumvault-kms-guide.html#generate-data-keys","security/secretumvault-kms-guide.html#health-and-status","security/secretumvault-kms-guide.html#key-rotation","security/secretumvault-kms-guide.html#storage-backends","security/secretumvault-kms-guide.html#filesystem-development","security/secretumvault-kms-guide.html#surrealdb-staging","security/secretumvault-kms-guide.html#etcd-production","security/secretumvault-kms-guide.html#postgresql-enterprise","security/secretumvault-kms-guide.html#troubleshooting","security/secretumvault-kms-guide.html#connection-errors","security/secretumvault-kms-guide.html#authentication-failures","security/secretumvault-kms-guide.html#stora
ge-backend-errors","security/secretumvault-kms-guide.html#performance-issues","security/secretumvault-kms-guide.html#debugging","security/secretumvault-kms-guide.html#security-best-practices","security/secretumvault-kms-guide.html#token-management","security/secretumvault-kms-guide.html#tlsssl","security/secretumvault-kms-guide.html#access-control","security/secretumvault-kms-guide.html#key-rotation-1","security/secretumvault-kms-guide.html#backup-and-recovery","security/secretumvault-kms-guide.html#migration-guide","security/secretumvault-kms-guide.html#from-age-to-secretumvault","security/secretumvault-kms-guide.html#from-rustyvault-to-secretumvault","security/secretumvault-kms-guide.html#from-cosmian-to-secretumvault","security/secretumvault-kms-guide.html#performance-tuning","security/secretumvault-kms-guide.html#development-filesystem","security/secretumvault-kms-guide.html#staging-surrealdb","security/secretumvault-kms-guide.html#production-etcd","security/secretumvault-kms-guide.html#compliance-and-audit","security/secretumvault-kms-guide.html#audit-logging","security/secretumvault-kms-guide.html#compliance-reports","security/secretumvault-kms-guide.html#advanced-topics","security/secretumvault-kms-guide.html#cedar-authorization-policies","security/secretumvault-kms-guide.html#key-encryption-keys-kek","security/secretumvault-kms-guide.html#multi-region-setup","security/secretumvault-kms-guide.html#support-and-resources","security/secretumvault-kms-guide.html#see-also","security/ssh-temporal-keys-user-guide.html#ssh-temporal-keys---user-guide","security/ssh-temporal-keys-user-guide.html#quick-start","security/ssh-temporal-keys-user-guide.html#generate-and-connect-with-temporary-key","security/ssh-temporal-keys-user-guide.html#private-key-not-working","security/ssh-temporal-keys-user-guide.html#cleanup-not-running","security/ssh-temporal-keys-user-guide.html#best-practices","security/ssh-temporal-keys-user-guide.html#security","security/ssh-temporal-keys-user-guide.html#workflow-integration","security/ssh-temporal-keys-user-guide.html#advanced-usage","security/ssh-temporal-keys-user-guide.html#vault-integration","security/plugin-integration-guide.html#nushell-plugin-integration-guide","security/plugin-integration-guide.html#table-of-contents","security/plugin-integration-guide.html#overview","security/plugin-integration-guide.html#architecture-benefits","security/nushell-plugins-guide.html#nushell-plugins-for-provisioning-platform","security/nushell-plugins-guide.html#overview","security/nushell-plugins-guide.html#why-native-plugins","security/nushell-plugins-guide.html#installation","security/nushell-plugins-guide.html#prerequisites","security/nushell-plugins-guide.html#build-from-source","security/nushell-plugins-system.html#nushell-plugins-integration-v100---see-detailed-guide-for-complete-reference","security/nushell-plugins-system.html#overview","security/nushell-plugins-system.html#performance-improvements","security/nushell-plugins-system.html#three-native-plugins","security/nushell-plugins-system.html#quick-commands","security/nushell-plugins-system.html#installation","security/nushell-plugins-system.html#benefits","security/plugin-usage-guide.html#provisioning-plugins-usage-guide","security/plugin-usage-guide.html#overview","security/plugin-usage-guide.html#installation","security/plugin-usage-guide.html#prerequisites","security/plugin-usage-guide.html#quick-install","security/plugin-usage-guide.html#manual-installation","security/plugin-usage-guide.html#usage","security/plugi
n-usage-guide.html#authentication-plugin","security/plugin-usage-guide.html#kms-plugin","security/plugin-usage-guide.html#orchestrator-plugin","security/plugin-usage-guide.html#plugin-status","security/plugin-usage-guide.html#testing-plugins","security/plugin-usage-guide.html#list-registered-plugins","security/plugin-usage-guide.html#performance-comparison","security/plugin-usage-guide.html#graceful-fallback","security/plugin-usage-guide.html#troubleshooting","security/plugin-usage-guide.html#plugins-not-found-after-installation","security/plugin-usage-guide.html#command-not-found-errors","security/plugin-usage-guide.html#plugins-crash-or-are-unresponsive","security/plugin-usage-guide.html#integration-with-provisioning-cli","security/plugin-usage-guide.html#advanced-configuration","security/plugin-usage-guide.html#custom-data-directory","security/plugin-usage-guide.html#custom-auth-url","security/plugin-usage-guide.html#kms-backend-selection","security/plugin-usage-guide.html#building-plugins-from-source","security/plugin-usage-guide.html#architecture","security/plugin-usage-guide.html#security-notes","security/plugin-usage-guide.html#support","security/secrets-management-guide.html#secrets-management-system---configuration-guide","security/secrets-management-guide.html#overview","security/secrets-management-guide.html#secret-sources","security/secrets-management-guide.html#1-sops-secrets-operations","security/auth-quick-reference.html#auth-quick-reference","security/config-encryption-quickref.html#config-encryption-quick-reference","security/kms-service.html#kms-service---key-management-service","security/kms-service.html#supported-backends","security/kms-service.html#architecture","integration/gitea-integration-guide.html#gitea-integration-guide","integration/gitea-integration-guide.html#table-of-contents","integration/gitea-integration-guide.html#overview","integration/gitea-integration-guide.html#architecture","integration/service-mesh-ingress-guide.html#service-mesh--ingress-guide","integration/service-mesh-ingress-guide.html#comparison","integration/service-mesh-ingress-guide.html#understanding-the-difference","integration/service-mesh-ingress-guide.html#service-mesh-options","integration/oci-registry-guide.html#oci-registry-user-guide","integration/oci-registry-guide.html#table-of-contents","integration/oci-registry-guide.html#overview","integration/oci-registry-guide.html#what-are-oci-artifacts","integration/oci-registry-guide.html#quick-start","integration/oci-registry-guide.html#prerequisites","integration/oci-registry-guide.html#troubleshooting","integration/oci-registry-guide.html#no-oci-tool-found","integration/oci-registry-guide.html#dependency-resolution-failed","integration/integrations-quickstart.html#prov-ecosystem--provctl-integrations---quick-start-guide","integration/integrations-quickstart.html#overview","integration/integrations-quickstart.html#quick-start-commands","integration/integrations-quickstart.html#-30-second-test","integration/secrets-service-layer-complete.html#secrets-service-layer-sst---complete-user-guide","integration/secrets-service-layer-complete.html#-executive-summary","integration/secrets-service-layer-complete.html#-key-features","integration/secrets-service-layer-complete.html#-quick-start-5-minutes","integration/secrets-service-layer-complete.html#1-register-the-workspace-librecloud","integration/oci-registry-platform.html#oci-registry-service","integration/oci-registry-platform.html#supported-registries","integration/oci-registry-platform.html#
features","integration/oci-registry-platform.html#quick-start","integration/oci-registry-platform.html#start-zot-registry-default","integration/oci-registry-platform.html#start-harbor-registry","integration/oci-registry-platform.html#default-namespaces","integration/oci-registry-platform.html#management","integration/oci-registry-platform.html#nushell-commands","integration/oci-registry-platform.html#docker-compose","integration/oci-registry-platform.html#registry-comparison","integration/oci-registry-platform.html#security","integration/oci-registry-platform.html#authentication","integration/oci-registry-platform.html#monitoring","integration/oci-registry-platform.html#health-checks","integration/oci-registry-platform.html#metrics","integration/oci-registry-platform.html#related-documentation","testing/test-environment-guide.html#test-environment-guide","testing/test-environment-guide.html#overview","testing/test-environment-guide.html#architecture","testing/test-environment-guide.html#basic-workflow","testing/test-environment-usage.html#test-environment-usage","testing/test-environment-system.html#test-environment-service-v340","testing/test-environment-system.html#-test-environment-service-completed-2025-10-06","testing/test-environment-system.html#key-features","testing/test-environment-system.html#test-environment-types","testing/test-environment-system.html#1-single-taskserv-testing","testing/test-environment-system.html#architecture","testing/taskserv-validation-guide.html#taskserv-validation-and-testing-guide","testing/taskserv-validation-guide.html#overview","testing/taskserv-validation-guide.html#validation-levels","testing/taskserv-validation-guide.html#1-static-validation","troubleshooting/troubleshooting-guide.html#troubleshooting-guide","troubleshooting/troubleshooting-guide.html#what-youll-learn","troubleshooting/troubleshooting-guide.html#general-troubleshooting-approach","troubleshooting/troubleshooting-guide.html#1-identify-the-problem","guides/from-scratch.html#complete-deployment-guide-from-scratch-to-production","guides/from-scratch.html#table-of-contents","guides/from-scratch.html#prerequisites","guides/from-scratch.html#recommended-hardware","guides/from-scratch.html#step-1-install-nushell","guides/from-scratch.html#macos-via-homebrew","guides/from-scratch.html#learn-more","guides/from-scratch.html#get-help","guides/update-infrastructure.html#update-existing-infrastructure","guides/update-infrastructure.html#overview","guides/update-infrastructure.html#update-strategies","guides/update-infrastructure.html#strategy-1-in-place-updates-fastest","guides/customize-infrastructure.html#customize-infrastructure","guides/customize-infrastructure.html#overview","guides/customize-infrastructure.html#the-layer-system","guides/customize-infrastructure.html#understanding-layers","guides/extension-development-quickstart.html#extension-development-quick-start-guide","guides/extension-development-quickstart.html#prerequisites","guides/extension-development-quickstart.html#quick-start-creating-your-first-extension","guides/extension-development-quickstart.html#step-1-create-extension-from-template","guides/extension-development-quickstart.html#step-2-navigate-and-customize","guides/extension-development-quickstart.html#step-3-customize-configuration","guides/extension-development-quickstart.html#step-4-test-your-extension","guides/extension-development-quickstart.html#step-5-use-in-workspace","guides/extension-development-quickstart.html#common-extension-patterns","guides/extension-dev
elopment-quickstart.html#database-service-extension","guides/extension-development-quickstart.html#monitoring-service-extension","guides/extension-development-quickstart.html#legacy-system-integration","guides/extension-development-quickstart.html#advanced-customization","guides/extension-development-quickstart.html#custom-provider-development","guides/extension-development-quickstart.html#complete-infrastructure-stack","guides/extension-development-quickstart.html#testing-and-validation","guides/extension-development-quickstart.html#local-testing-workflow","guides/extension-development-quickstart.html#continuous-integration-testing","guides/extension-development-quickstart.html#best-practices-summary","guides/extension-development-quickstart.html#1-extension-design","guides/extension-development-quickstart.html#2-dependencies","guides/extension-development-quickstart.html#3-security","guides/extension-development-quickstart.html#4-documentation","guides/extension-development-quickstart.html#5-testing","guides/extension-development-quickstart.html#common-issues-and-solutions","guides/extension-development-quickstart.html#extension-not-discovered","guides/extension-development-quickstart.html#kcl-compilation-errors","guides/extension-development-quickstart.html#loading-failures","guides/extension-development-quickstart.html#next-steps","guides/extension-development-quickstart.html#support","guides/guide-system.html#interactive-guides-and-quick-reference-v330","guides/guide-system.html#-guide-system-added-2025-09-30","guides/guide-system.html#available-guides","guides/guide-system.html#guide-features","guides/guide-system.html#recommended-setup","guides/guide-system.html#quick-start-with-guides","guides/guide-system.html#guide-content","guides/guide-system.html#access-from-help-system","guides/guide-system.html#guide-shortcuts","guides/guide-system.html#documentation-location","guides/workspace-generation-quick-reference.html#workspace-generation---quick-reference","guides/workspace-generation-quick-reference.html#rutas-clave-de-archivos","quick-reference/MASTER.html#quick-reference-master-index","quick-reference/MASTER.html#available-quick-references","quick-reference/MASTER.html#topic-specific-guides-with-embedded-quick-references","quick-reference/MASTER.html#using-quick-references","quick-reference/platform-operations-cheatsheet.html#platform-operations-cheatsheet","quick-reference/platform-operations-cheatsheet.html#mode-selection-one-command","quick-reference/platform-operations-cheatsheet.html#service-ports--endpoints","quick-reference/platform-operations-cheatsheet.html#service-startup-order-matters","quick-reference/platform-operations-cheatsheet.html#quick-checks-all-services","quick-reference/platform-operations-cheatsheet.html#configuration-management","quick-reference/platform-operations-cheatsheet.html#view-config-files","quick-reference/platform-operations-cheatsheet.html#apply-config-changes","quick-reference/platform-operations-cheatsheet.html#service-control","quick-reference/platform-operations-cheatsheet.html#stop-services","quick-reference/platform-operations-cheatsheet.html#restart-services","quick-reference/platform-operations-cheatsheet.html#check-logs","quick-reference/platform-operations-cheatsheet.html#database-management","quick-reference/platform-operations-cheatsheet.html#surrealdb-multiuserenterprise","quick-reference/platform-operations-cheatsheet.html#etcd-enterprise-ha","quick-reference/platform-operations-cheatsheet.html#environment-variable-overrides","quic
k-reference/platform-operations-cheatsheet.html#override-individual-settings","quick-reference/platform-operations-cheatsheet.html#health--status-checks","quick-reference/platform-operations-cheatsheet.html#quick-status-30-seconds","quick-reference/platform-operations-cheatsheet.html#detailed-status","quick-reference/platform-operations-cheatsheet.html#performance--monitoring","quick-reference/platform-operations-cheatsheet.html#system-resources","quick-reference/platform-operations-cheatsheet.html#service-performance","quick-reference/platform-operations-cheatsheet.html#troubleshooting-quick-fixes","quick-reference/platform-operations-cheatsheet.html#service-wont-start","quick-reference/platform-operations-cheatsheet.html#high-memory-usage","quick-reference/platform-operations-cheatsheet.html#database-connection-error","quick-reference/platform-operations-cheatsheet.html#services-not-communicating","quick-reference/platform-operations-cheatsheet.html#emergency-procedures","quick-reference/platform-operations-cheatsheet.html#full-service-recovery","quick-reference/platform-operations-cheatsheet.html#rollback-to-previous-configuration","quick-reference/platform-operations-cheatsheet.html#data-recovery","quick-reference/platform-operations-cheatsheet.html#file-locations","quick-reference/platform-operations-cheatsheet.html#mode-quick-reference-matrix","quick-reference/platform-operations-cheatsheet.html#common-command-patterns","quick-reference/platform-operations-cheatsheet.html#deploy-mode-change","quick-reference/platform-operations-cheatsheet.html#restart-single-service-without-downtime","quick-reference/platform-operations-cheatsheet.html#scale-workers-for-load","quick-reference/platform-operations-cheatsheet.html#diagnostic-bundle","quick-reference/platform-operations-cheatsheet.html#essential-references","quick-reference/general.html#rag-system---quick-reference-guide","quick-reference/general.html#-what-you-have","quick-reference/general.html#complete-rag-system","quick-reference/general.html#key-files","quick-reference/justfile-recipes.html#justfile-recipes---quick-reference","quick-reference/justfile-recipes.html#authentication-authjust","quick-reference/justfile-recipes.html#kms-kmsjust","quick-reference/justfile-recipes.html#orchestrator-orchestratorjust","quick-reference/justfile-recipes.html#plugin-testing","quick-reference/justfile-recipes.html#common-workflows","quick-reference/justfile-recipes.html#complete-authentication-setup","quick-reference/justfile-recipes.html#production-deployment-workflow","quick-reference/justfile-recipes.html#kms-setup-and-testing","quick-reference/justfile-recipes.html#monitoring-operations","quick-reference/justfile-recipes.html#cleanup-operations","quick-reference/justfile-recipes.html#tips","quick-reference/justfile-recipes.html#recipe-count","quick-reference/justfile-recipes.html#documentation","quick-reference/oci.html#oci-registry-quick-reference","quick-reference/oci.html#prerequisites","quick-reference/sudo-password-handling.html#sudo-password-handling---quick-reference","quick-reference/sudo-password-handling.html#when-sudo-is-required","quick-reference/sudo-password-handling.html#quick-solutions","quick-reference/sudo-password-handling.html#-best-cache-credentials-first","configuration/config-validation.html#configuration-validation-guide","configuration/config-validation.html#overview","configuration/config-validation.html#schema-validation-features","configuration/config-validation.html#1-required-fields-validation","configuration/work
space-config-architecture.html#workspace-config-architecture"],"index":{"documentStore":{"docInfo":{"0":{"body":49,"breadcrumbs":4,"title":3},"1":{"body":0,"breadcrumbs":3,"title":2},"10":{"body":7,"breadcrumbs":2,"title":1},"100":{"body":42,"breadcrumbs":6,"title":3},"1000":{"body":0,"breadcrumbs":6,"title":2},"1001":{"body":62,"breadcrumbs":8,"title":4},"1002":{"body":110,"breadcrumbs":7,"title":3},"1003":{"body":0,"breadcrumbs":7,"title":3},"1004":{"body":79,"breadcrumbs":6,"title":2},"1005":{"body":33,"breadcrumbs":6,"title":2},"1006":{"body":33,"breadcrumbs":7,"title":3},"1007":{"body":85,"breadcrumbs":6,"title":2},"1008":{"body":0,"breadcrumbs":10,"title":6},"1009":{"body":124,"breadcrumbs":6,"title":2},"101":{"body":30,"breadcrumbs":5,"title":2},"1010":{"body":129,"breadcrumbs":7,"title":3},"1011":{"body":153,"breadcrumbs":9,"title":5},"1012":{"body":132,"breadcrumbs":6,"title":2},"1013":{"body":122,"breadcrumbs":6,"title":2},"1014":{"body":0,"breadcrumbs":7,"title":3},"1015":{"body":31,"breadcrumbs":6,"title":2},"1016":{"body":25,"breadcrumbs":6,"title":2},"1017":{"body":16,"breadcrumbs":6,"title":2},"1018":{"body":0,"breadcrumbs":6,"title":2},"1019":{"body":22,"breadcrumbs":6,"title":2},"102":{"body":20,"breadcrumbs":5,"title":2},"1020":{"body":169,"breadcrumbs":7,"title":3},"1021":{"body":0,"breadcrumbs":6,"title":2},"1022":{"body":23,"breadcrumbs":6,"title":2},"1023":{"body":21,"breadcrumbs":7,"title":3},"1024":{"body":0,"breadcrumbs":5,"title":1},"1025":{"body":20,"breadcrumbs":7,"title":3},"1026":{"body":28,"breadcrumbs":6,"title":2},"1027":{"body":21,"breadcrumbs":8,"title":4},"1028":{"body":24,"breadcrumbs":9,"title":5},"1029":{"body":0,"breadcrumbs":6,"title":2},"103":{"body":25,"breadcrumbs":5,"title":2},"1030":{"body":42,"breadcrumbs":6,"title":2},"1031":{"body":59,"breadcrumbs":6,"title":2},"1032":{"body":49,"breadcrumbs":7,"title":3},"1033":{"body":0,"breadcrumbs":6,"title":2},"1034":{"body":9,"breadcrumbs":9,"title":5},"1035":{"body":13,"breadcrumbs":9,"title":5},"1036":{"body":11,"breadcrumbs":8,"title":4},"1037":{"body":17,"breadcrumbs":9,"title":5},"1038":{"body":7,"breadcrumbs":7,"title":3},"1039":{"body":0,"breadcrumbs":6,"title":2},"104":{"body":10,"breadcrumbs":7,"title":4},"1040":{"body":22,"breadcrumbs":6,"title":2},"1041":{"body":59,"breadcrumbs":6,"title":2},"1042":{"body":26,"breadcrumbs":7,"title":3},"1043":{"body":0,"breadcrumbs":6,"title":2},"1044":{"body":14,"breadcrumbs":6,"title":2},"1045":{"body":26,"breadcrumbs":6,"title":2},"1046":{"body":19,"breadcrumbs":6,"title":2},"1047":{"body":28,"breadcrumbs":6,"title":3},"1048":{"body":23,"breadcrumbs":5,"title":2},"1049":{"body":0,"breadcrumbs":4,"title":1},"105":{"body":0,"breadcrumbs":5,"title":2},"1050":{"body":18,"breadcrumbs":5,"title":2},"1051":{"body":21,"breadcrumbs":7,"title":4},"1052":{"body":35,"breadcrumbs":5,"title":2},"1053":{"body":19,"breadcrumbs":5,"title":2},"1054":{"body":0,"breadcrumbs":5,"title":2},"1055":{"body":19,"breadcrumbs":6,"title":3},"1056":{"body":207,"breadcrumbs":5,"title":2},"1057":{"body":0,"breadcrumbs":5,"title":2},"1058":{"body":8,"breadcrumbs":6,"title":3},"1059":{"body":18,"breadcrumbs":7,"title":4},"106":{"body":14,"breadcrumbs":4,"title":1},"1060":{"body":31,"breadcrumbs":7,"title":4},"1061":{"body":31,"breadcrumbs":6,"title":3},"1062":{"body":77,"breadcrumbs":8,"title":5},"1063":{"body":32,"breadcrumbs":6,"title":3},"1064":{"body":4,"breadcrumbs":6,"title":3},"1065":{"body":21,"breadcrumbs":9,"title":6},"1066":{"body":20,"breadcrumbs":9,"title":6},"1067":{"body":8,
"breadcrumbs":7,"title":4},"1068":{"body":13,"breadcrumbs":9,"title":6},"1069":{"body":71,"breadcrumbs":7,"title":4},"107":{"body":392,"breadcrumbs":6,"title":3},"1070":{"body":28,"breadcrumbs":7,"title":4},"1071":{"body":16,"breadcrumbs":8,"title":5},"1072":{"body":15,"breadcrumbs":4,"title":1},"1073":{"body":5,"breadcrumbs":6,"title":3},"1074":{"body":15,"breadcrumbs":4,"title":1},"1075":{"body":25,"breadcrumbs":7,"title":4},"1076":{"body":8,"breadcrumbs":8,"title":5},"1077":{"body":33,"breadcrumbs":9,"title":6},"1078":{"body":3,"breadcrumbs":7,"title":4},"1079":{"body":22,"breadcrumbs":9,"title":6},"108":{"body":7,"breadcrumbs":5,"title":2},"1080":{"body":72,"breadcrumbs":9,"title":6},"1081":{"body":27,"breadcrumbs":9,"title":6},"1082":{"body":27,"breadcrumbs":8,"title":5},"1083":{"body":22,"breadcrumbs":6,"title":3},"1084":{"body":8,"breadcrumbs":6,"title":3},"1085":{"body":20,"breadcrumbs":8,"title":5},"1086":{"body":20,"breadcrumbs":9,"title":6},"1087":{"body":69,"breadcrumbs":8,"title":5},"1088":{"body":123,"breadcrumbs":8,"title":5},"1089":{"body":53,"breadcrumbs":8,"title":5},"109":{"body":23,"breadcrumbs":7,"title":4},"1090":{"body":5,"breadcrumbs":6,"title":3},"1091":{"body":31,"breadcrumbs":4,"title":1},"1092":{"body":111,"breadcrumbs":7,"title":4},"1093":{"body":43,"breadcrumbs":9,"title":6},"1094":{"body":68,"breadcrumbs":8,"title":5},"1095":{"body":28,"breadcrumbs":8,"title":5},"1096":{"body":36,"breadcrumbs":8,"title":5},"1097":{"body":49,"breadcrumbs":7,"title":4},"1098":{"body":0,"breadcrumbs":5,"title":2},"1099":{"body":114,"breadcrumbs":5,"title":2},"11":{"body":11,"breadcrumbs":3,"title":2},"110":{"body":22,"breadcrumbs":5,"title":2},"1100":{"body":39,"breadcrumbs":5,"title":2},"1101":{"body":37,"breadcrumbs":5,"title":2},"1102":{"body":40,"breadcrumbs":6,"title":3},"1103":{"body":0,"breadcrumbs":6,"title":3},"1104":{"body":39,"breadcrumbs":6,"title":3},"1105":{"body":56,"breadcrumbs":6,"title":3},"1106":{"body":52,"breadcrumbs":5,"title":2},"1107":{"body":39,"breadcrumbs":4,"title":1},"1108":{"body":0,"breadcrumbs":4,"title":1},"1109":{"body":35,"breadcrumbs":6,"title":3},"111":{"body":9,"breadcrumbs":2,"title":1},"1110":{"body":51,"breadcrumbs":6,"title":3},"1111":{"body":46,"breadcrumbs":6,"title":3},"1112":{"body":41,"breadcrumbs":6,"title":3},"1113":{"body":50,"breadcrumbs":6,"title":3},"1114":{"body":34,"breadcrumbs":5,"title":2},"1115":{"body":37,"breadcrumbs":6,"title":3},"1116":{"body":79,"breadcrumbs":5,"title":2},"1117":{"body":0,"breadcrumbs":5,"title":2},"1118":{"body":57,"breadcrumbs":6,"title":3},"1119":{"body":59,"breadcrumbs":5,"title":2},"112":{"body":0,"breadcrumbs":3,"title":2},"1120":{"body":54,"breadcrumbs":5,"title":2},"1121":{"body":0,"breadcrumbs":5,"title":2},"1122":{"body":14,"breadcrumbs":5,"title":2},"1123":{"body":13,"breadcrumbs":5,"title":2},"1124":{"body":39,"breadcrumbs":6,"title":3},"1125":{"body":7,"breadcrumbs":6,"title":3},"1126":{"body":19,"breadcrumbs":5,"title":2},"1127":{"body":20,"breadcrumbs":4,"title":1},"1128":{"body":41,"breadcrumbs":5,"title":2},"1129":{"body":56,"breadcrumbs":5,"title":2},"113":{"body":14,"breadcrumbs":5,"title":4},"1130":{"body":0,"breadcrumbs":5,"title":2},"1131":{"body":2410,"breadcrumbs":5,"title":2},"1132":{"body":25,"breadcrumbs":7,"title":4},"1133":{"body":26,"breadcrumbs":4,"title":1},"1134":{"body":20,"breadcrumbs":4,"title":1},"1135":{"body":0,"breadcrumbs":4,"title":1},"1136":{"body":41,"breadcrumbs":5,"title":2},"1137":{"body":21,"breadcrumbs":5,"title":2},"1138":{"body":22,"breadcrumbs":4,"ti
tle":1},"1139":{"body":75,"breadcrumbs":6,"title":3},"114":{"body":13,"breadcrumbs":6,"title":5},"1140":{"body":0,"breadcrumbs":5,"title":2},"1141":{"body":161,"breadcrumbs":7,"title":4},"1142":{"body":60,"breadcrumbs":6,"title":3},"1143":{"body":21,"breadcrumbs":6,"title":3},"1144":{"body":0,"breadcrumbs":6,"title":3},"1145":{"body":391,"breadcrumbs":7,"title":4},"1146":{"body":19,"breadcrumbs":7,"title":4},"1147":{"body":0,"breadcrumbs":5,"title":2},"1148":{"body":174,"breadcrumbs":7,"title":4},"1149":{"body":44,"breadcrumbs":6,"title":3},"115":{"body":17,"breadcrumbs":5,"title":4},"1150":{"body":20,"breadcrumbs":6,"title":3},"1151":{"body":0,"breadcrumbs":5,"title":2},"1152":{"body":24,"breadcrumbs":6,"title":3},"1153":{"body":23,"breadcrumbs":8,"title":5},"1154":{"body":89,"breadcrumbs":8,"title":5},"1155":{"body":25,"breadcrumbs":8,"title":5},"1156":{"body":0,"breadcrumbs":6,"title":3},"1157":{"body":58,"breadcrumbs":8,"title":5},"1158":{"body":42,"breadcrumbs":7,"title":4},"1159":{"body":0,"breadcrumbs":7,"title":4},"116":{"body":0,"breadcrumbs":3,"title":2},"1160":{"body":17,"breadcrumbs":6,"title":3},"1161":{"body":28,"breadcrumbs":6,"title":3},"1162":{"body":18,"breadcrumbs":6,"title":3},"1163":{"body":0,"breadcrumbs":6,"title":3},"1164":{"body":74,"breadcrumbs":6,"title":3},"1165":{"body":0,"breadcrumbs":5,"title":2},"1166":{"body":30,"breadcrumbs":7,"title":4},"1167":{"body":31,"breadcrumbs":8,"title":5},"1168":{"body":16,"breadcrumbs":9,"title":6},"1169":{"body":0,"breadcrumbs":6,"title":3},"117":{"body":19,"breadcrumbs":3,"title":2},"1170":{"body":40,"breadcrumbs":7,"title":4},"1171":{"body":7,"breadcrumbs":7,"title":4},"1172":{"body":0,"breadcrumbs":5,"title":2},"1173":{"body":97,"breadcrumbs":5,"title":2},"1174":{"body":55,"breadcrumbs":6,"title":3},"1175":{"body":58,"breadcrumbs":6,"title":3},"1176":{"body":0,"breadcrumbs":5,"title":2},"1177":{"body":88,"breadcrumbs":7,"title":4},"1178":{"body":22,"breadcrumbs":4,"title":1},"1179":{"body":0,"breadcrumbs":8,"title":4},"118":{"body":19,"breadcrumbs":4,"title":3},"1180":{"body":10,"breadcrumbs":5,"title":3},"1181":{"body":15,"breadcrumbs":4,"title":2},"1182":{"body":48,"breadcrumbs":3,"title":1},"1183":{"body":42,"breadcrumbs":4,"title":2},"1184":{"body":0,"breadcrumbs":3,"title":1},"1185":{"body":12,"breadcrumbs":3,"title":1},"1186":{"body":964,"breadcrumbs":5,"title":3},"1187":{"body":47,"breadcrumbs":5,"title":3},"1188":{"body":37,"breadcrumbs":6,"title":4},"1189":{"body":50,"breadcrumbs":5,"title":3},"119":{"body":0,"breadcrumbs":3,"title":2},"1190":{"body":39,"breadcrumbs":5,"title":3},"1191":{"body":0,"breadcrumbs":4,"title":2},"1192":{"body":1155,"breadcrumbs":5,"title":3},"1193":{"body":0,"breadcrumbs":4,"title":2},"1194":{"body":0,"breadcrumbs":3,"title":2},"1195":{"body":0,"breadcrumbs":3,"title":2},"1196":{"body":10,"breadcrumbs":6,"title":3},"1197":{"body":14,"breadcrumbs":5,"title":2},"1198":{"body":34,"breadcrumbs":5,"title":2},"1199":{"body":0,"breadcrumbs":6,"title":3},"12":{"body":670,"breadcrumbs":3,"title":2},"120":{"body":26,"breadcrumbs":3,"title":2},"1200":{"body":25,"breadcrumbs":6,"title":3},"1201":{"body":21,"breadcrumbs":6,"title":3},"1202":{"body":16,"breadcrumbs":5,"title":2},"1203":{"body":19,"breadcrumbs":5,"title":2},"1204":{"body":13,"breadcrumbs":5,"title":2},"1205":{"body":16,"breadcrumbs":6,"title":3},"1206":{"body":0,"breadcrumbs":6,"title":3},"1207":{"body":22,"breadcrumbs":5,"title":2},"1208":{"body":16,"breadcrumbs":5,"title":2},"1209":{"body":16,"breadcrumbs":5,"title":2},"121":{"body":30
,"breadcrumbs":3,"title":2},"1210":{"body":16,"breadcrumbs":6,"title":3},"1211":{"body":0,"breadcrumbs":5,"title":2},"1212":{"body":18,"breadcrumbs":8,"title":5},"1213":{"body":20,"breadcrumbs":9,"title":6},"1214":{"body":23,"breadcrumbs":9,"title":6},"1215":{"body":25,"breadcrumbs":8,"title":5},"1216":{"body":0,"breadcrumbs":6,"title":3},"1217":{"body":16,"breadcrumbs":7,"title":4},"1218":{"body":16,"breadcrumbs":6,"title":3},"1219":{"body":13,"breadcrumbs":6,"title":3},"122":{"body":7,"breadcrumbs":3,"title":2},"1220":{"body":13,"breadcrumbs":5,"title":2},"1221":{"body":0,"breadcrumbs":5,"title":2},"1222":{"body":13,"breadcrumbs":8,"title":5},"1223":{"body":23,"breadcrumbs":7,"title":4},"1224":{"body":20,"breadcrumbs":8,"title":5},"1225":{"body":23,"breadcrumbs":6,"title":3},"1226":{"body":42,"breadcrumbs":5,"title":2},"1227":{"body":0,"breadcrumbs":5,"title":2},"1228":{"body":11,"breadcrumbs":7,"title":4},"1229":{"body":10,"breadcrumbs":7,"title":4},"123":{"body":9,"breadcrumbs":2,"title":1},"1230":{"body":9,"breadcrumbs":7,"title":4},"1231":{"body":48,"breadcrumbs":5,"title":2},"1232":{"body":26,"breadcrumbs":5,"title":2},"1233":{"body":33,"breadcrumbs":4,"title":1},"1234":{"body":20,"breadcrumbs":10,"title":6},"1235":{"body":22,"breadcrumbs":6,"title":2},"1236":{"body":35,"breadcrumbs":6,"title":2},"1237":{"body":20,"breadcrumbs":6,"title":2},"1238":{"body":0,"breadcrumbs":7,"title":3},"1239":{"body":46,"breadcrumbs":7,"title":3},"124":{"body":9,"breadcrumbs":2,"title":1},"1240":{"body":26,"breadcrumbs":6,"title":2},"1241":{"body":0,"breadcrumbs":5,"title":1},"1242":{"body":37,"breadcrumbs":9,"title":5},"1243":{"body":15,"breadcrumbs":6,"title":2},"1244":{"body":0,"breadcrumbs":6,"title":2},"1245":{"body":27,"breadcrumbs":5,"title":1},"1246":{"body":27,"breadcrumbs":5,"title":1},"1247":{"body":26,"breadcrumbs":6,"title":2},"1248":{"body":0,"breadcrumbs":7,"title":3},"1249":{"body":1622,"breadcrumbs":9,"title":5},"125":{"body":16,"breadcrumbs":2,"title":1},"1250":{"body":19,"breadcrumbs":8,"title":4},"1251":{"body":20,"breadcrumbs":6,"title":2},"1252":{"body":18,"breadcrumbs":5,"title":1},"1253":{"body":33,"breadcrumbs":5,"title":1},"1254":{"body":0,"breadcrumbs":7,"title":3},"1255":{"body":1047,"breadcrumbs":6,"title":2},"1256":{"body":452,"breadcrumbs":6,"title":2},"1257":{"body":21,"breadcrumbs":11,"title":7},"1258":{"body":29,"breadcrumbs":6,"title":2},"1259":{"body":0,"breadcrumbs":5,"title":1},"126":{"body":9,"breadcrumbs":2,"title":1},"1260":{"body":25,"breadcrumbs":5,"title":1},"1261":{"body":33,"breadcrumbs":6,"title":2},"1262":{"body":32,"breadcrumbs":7,"title":3},"1263":{"body":0,"breadcrumbs":6,"title":2},"1264":{"body":23,"breadcrumbs":7,"title":3},"1265":{"body":14,"breadcrumbs":6,"title":2},"1266":{"body":3277,"breadcrumbs":6,"title":2},"1267":{"body":16,"breadcrumbs":3,"title":2},"1268":{"body":33,"breadcrumbs":2,"title":1},"1269":{"body":96,"breadcrumbs":3,"title":2},"127":{"body":9,"breadcrumbs":2,"title":1},"1270":{"body":0,"breadcrumbs":3,"title":2},"1271":{"body":56,"breadcrumbs":3,"title":2},"1272":{"body":22,"breadcrumbs":3,"title":2},"1273":{"body":0,"breadcrumbs":3,"title":2},"1274":{"body":11,"breadcrumbs":3,"title":2},"1275":{"body":18,"breadcrumbs":3,"title":2},"1276":{"body":25,"breadcrumbs":4,"title":3},"1277":{"body":9,"breadcrumbs":4,"title":3},"1278":{"body":28,"breadcrumbs":4,"title":3},"1279":{"body":42,"breadcrumbs":4,"title":3},"128":{"body":0,"breadcrumbs":4,"title":3},"1280":{"body":28,"breadcrumbs":3,"title":2},"1281":{"body":32,"breadcrumbs":3,"t
itle":2},"1282":{"body":12,"breadcrumbs":3,"title":2},"1283":{"body":0,"breadcrumbs":6,"title":4},"1284":{"body":15,"breadcrumbs":8,"title":6},"1285":{"body":41,"breadcrumbs":4,"title":2},"1286":{"body":27,"breadcrumbs":4,"title":2},"1287":{"body":5,"breadcrumbs":4,"title":2},"1288":{"body":20,"breadcrumbs":4,"title":2},"1289":{"body":38,"breadcrumbs":4,"title":2},"129":{"body":44,"breadcrumbs":4,"title":3},"1290":{"body":24,"breadcrumbs":4,"title":2},"1291":{"body":46,"breadcrumbs":4,"title":2},"1292":{"body":26,"breadcrumbs":5,"title":3},"1293":{"body":15,"breadcrumbs":7,"title":5},"1294":{"body":0,"breadcrumbs":4,"title":2},"1295":{"body":28,"breadcrumbs":5,"title":3},"1296":{"body":27,"breadcrumbs":4,"title":2},"1297":{"body":24,"breadcrumbs":4,"title":2},"1298":{"body":31,"breadcrumbs":4,"title":2},"1299":{"body":25,"breadcrumbs":4,"title":2},"13":{"body":8,"breadcrumbs":4,"title":2},"130":{"body":96,"breadcrumbs":2,"title":1},"1300":{"body":0,"breadcrumbs":4,"title":2},"1301":{"body":6,"breadcrumbs":3,"title":1},"1302":{"body":30,"breadcrumbs":3,"title":1},"1303":{"body":5,"breadcrumbs":4,"title":2},"1304":{"body":27,"breadcrumbs":5,"title":3},"1305":{"body":0,"breadcrumbs":4,"title":2},"1306":{"body":14,"breadcrumbs":6,"title":4},"1307":{"body":16,"breadcrumbs":5,"title":3},"1308":{"body":12,"breadcrumbs":4,"title":2},"1309":{"body":0,"breadcrumbs":4,"title":2},"131":{"body":58,"breadcrumbs":2,"title":1},"1310":{"body":25,"breadcrumbs":4,"title":2},"1311":{"body":23,"breadcrumbs":4,"title":2},"1312":{"body":0,"breadcrumbs":4,"title":2},"1313":{"body":21,"breadcrumbs":4,"title":2},"1314":{"body":11,"breadcrumbs":4,"title":2},"1315":{"body":12,"breadcrumbs":3,"title":1},"1316":{"body":12,"breadcrumbs":4,"title":2},"1317":{"body":0,"breadcrumbs":3,"title":1},"1318":{"body":47,"breadcrumbs":4,"title":2},"1319":{"body":29,"breadcrumbs":5,"title":3},"132":{"body":0,"breadcrumbs":3,"title":2},"1320":{"body":0,"breadcrumbs":3,"title":1},"1321":{"body":30,"breadcrumbs":3,"title":1},"1322":{"body":28,"breadcrumbs":3,"title":1},"1323":{"body":7,"breadcrumbs":4,"title":2},"1324":{"body":19,"breadcrumbs":4,"title":3},"1325":{"body":43,"breadcrumbs":2,"title":1},"1326":{"body":388,"breadcrumbs":2,"title":1},"1327":{"body":0,"breadcrumbs":6,"title":4},"1328":{"body":16,"breadcrumbs":6,"title":4},"1329":{"body":0,"breadcrumbs":4,"title":2},"133":{"body":45,"breadcrumbs":3,"title":2},"1330":{"body":53,"breadcrumbs":6,"title":4},"1331":{"body":69,"breadcrumbs":5,"title":3},"1332":{"body":35,"breadcrumbs":5,"title":3},"1333":{"body":35,"breadcrumbs":4,"title":2},"1334":{"body":0,"breadcrumbs":4,"title":2},"1335":{"body":41,"breadcrumbs":4,"title":2},"1336":{"body":33,"breadcrumbs":5,"title":3},"1337":{"body":42,"breadcrumbs":4,"title":2},"1338":{"body":0,"breadcrumbs":4,"title":2},"1339":{"body":27,"breadcrumbs":4,"title":2},"134":{"body":20,"breadcrumbs":3,"title":2},"1340":{"body":22,"breadcrumbs":4,"title":2},"1341":{"body":72,"breadcrumbs":4,"title":2},"1342":{"body":0,"breadcrumbs":4,"title":2},"1343":{"body":24,"breadcrumbs":4,"title":2},"1344":{"body":17,"breadcrumbs":4,"title":2},"1345":{"body":13,"breadcrumbs":4,"title":2},"1346":{"body":33,"breadcrumbs":4,"title":2},"1347":{"body":15,"breadcrumbs":3,"title":1},"1348":{"body":26,"breadcrumbs":4,"title":2},"1349":{"body":21,"breadcrumbs":4,"title":2},"135":{"body":6,"breadcrumbs":5,"title":4},"1350":{"body":16,"breadcrumbs":5,"title":3},"1351":{"body":62,"breadcrumbs":3,"title":1},"1352":{"body":367,"breadcrumbs":3,"title":1},"1353":{"body":10,
"breadcrumbs":5,"title":3},"1354":{"body":22,"breadcrumbs":4,"title":2},"1355":{"body":0,"breadcrumbs":4,"title":2},"1356":{"body":43,"breadcrumbs":4,"title":2},"1357":{"body":2188,"breadcrumbs":4,"title":2},"1358":{"body":293,"breadcrumbs":4,"title":2},"1359":{"body":0,"breadcrumbs":7,"title":4},"136":{"body":13,"breadcrumbs":2,"title":1},"1360":{"body":34,"breadcrumbs":4,"title":1},"1361":{"body":0,"breadcrumbs":5,"title":2},"1362":{"body":1449,"breadcrumbs":7,"title":4},"1363":{"body":0,"breadcrumbs":10,"title":7},"1364":{"body":30,"breadcrumbs":10,"title":7},"1365":{"body":50,"breadcrumbs":5,"title":2},"1366":{"body":66,"breadcrumbs":6,"title":3},"1367":{"body":63,"breadcrumbs":6,"title":3},"1368":{"body":26,"breadcrumbs":8,"title":5},"1369":{"body":38,"breadcrumbs":5,"title":2},"137":{"body":10,"breadcrumbs":2,"title":1},"1370":{"body":0,"breadcrumbs":8,"title":6},"1371":{"body":14,"breadcrumbs":8,"title":6},"1372":{"body":54,"breadcrumbs":4,"title":2},"1373":{"body":0,"breadcrumbs":5,"title":3},"1374":{"body":30,"breadcrumbs":3,"title":1},"1375":{"body":29,"breadcrumbs":3,"title":1},"1376":{"body":30,"breadcrumbs":3,"title":1},"1377":{"body":21,"breadcrumbs":3,"title":1},"1378":{"body":30,"breadcrumbs":3,"title":1},"1379":{"body":36,"breadcrumbs":3,"title":1},"138":{"body":5,"breadcrumbs":3,"title":2},"1380":{"body":13,"breadcrumbs":3,"title":1},"1381":{"body":26,"breadcrumbs":4,"title":2},"1382":{"body":148,"breadcrumbs":6,"title":4},"1383":{"body":0,"breadcrumbs":5,"title":3},"1384":{"body":34,"breadcrumbs":7,"title":5},"1385":{"body":26,"breadcrumbs":4,"title":2},"1386":{"body":20,"breadcrumbs":4,"title":2},"1387":{"body":10,"breadcrumbs":4,"title":2},"1388":{"body":29,"breadcrumbs":6,"title":4},"1389":{"body":39,"breadcrumbs":5,"title":3},"139":{"body":7,"breadcrumbs":3,"title":1},"1390":{"body":12,"breadcrumbs":5,"title":3},"1391":{"body":0,"breadcrumbs":4,"title":2},"1392":{"body":621,"breadcrumbs":7,"title":5},"1393":{"body":9,"breadcrumbs":6,"title":3},"1394":{"body":19,"breadcrumbs":4,"title":1},"1395":{"body":0,"breadcrumbs":5,"title":2},"1396":{"body":1181,"breadcrumbs":6,"title":3},"1397":{"body":0,"breadcrumbs":7,"title":4},"1398":{"body":22,"breadcrumbs":9,"title":6},"1399":{"body":51,"breadcrumbs":5,"title":2},"14":{"body":15,"breadcrumbs":4,"title":2},"140":{"body":19,"breadcrumbs":3,"title":1},"1400":{"body":333,"breadcrumbs":6,"title":3},"1401":{"body":12,"breadcrumbs":4,"title":2},"1402":{"body":17,"breadcrumbs":4,"title":2},"1403":{"body":12,"breadcrumbs":4,"title":2},"1404":{"body":50,"breadcrumbs":4,"title":2},"1405":{"body":30,"breadcrumbs":4,"title":2},"1406":{"body":0,"breadcrumbs":4,"title":2},"1407":{"body":43,"breadcrumbs":6,"title":4},"1408":{"body":36,"breadcrumbs":6,"title":4},"1409":{"body":38,"breadcrumbs":5,"title":3},"141":{"body":17,"breadcrumbs":6,"title":4},"1410":{"body":0,"breadcrumbs":5,"title":3},"1411":{"body":90,"breadcrumbs":6,"title":4},"1412":{"body":64,"breadcrumbs":6,"title":4},"1413":{"body":57,"breadcrumbs":6,"title":4},"1414":{"body":71,"breadcrumbs":6,"title":4},"1415":{"body":54,"breadcrumbs":6,"title":4},"1416":{"body":0,"breadcrumbs":5,"title":3},"1417":{"body":80,"breadcrumbs":6,"title":4},"1418":{"body":59,"breadcrumbs":6,"title":4},"1419":{"body":57,"breadcrumbs":6,"title":4},"142":{"body":7,"breadcrumbs":7,"title":5},"1420":{"body":62,"breadcrumbs":6,"title":4},"1421":{"body":63,"breadcrumbs":7,"title":5},"1422":{"body":0,"breadcrumbs":5,"title":3},"1423":{"body":60,"breadcrumbs":6,"title":4},"1424":{"body":48,"breadcrumbs":6
,"title":4},"1425":{"body":44,"breadcrumbs":6,"title":4},"1426":{"body":64,"breadcrumbs":6,"title":4},"1427":{"body":0,"breadcrumbs":4,"title":2},"1428":{"body":77,"breadcrumbs":5,"title":3},"1429":{"body":72,"breadcrumbs":5,"title":3},"143":{"body":15,"breadcrumbs":6,"title":4},"1430":{"body":49,"breadcrumbs":5,"title":3},"1431":{"body":70,"breadcrumbs":5,"title":3},"1432":{"body":0,"breadcrumbs":4,"title":2},"1433":{"body":45,"breadcrumbs":5,"title":3},"1434":{"body":45,"breadcrumbs":5,"title":3},"1435":{"body":0,"breadcrumbs":4,"title":2},"1436":{"body":41,"breadcrumbs":5,"title":3},"1437":{"body":53,"breadcrumbs":5,"title":3},"1438":{"body":51,"breadcrumbs":5,"title":3},"1439":{"body":0,"breadcrumbs":4,"title":2},"144":{"body":19,"breadcrumbs":7,"title":5},"1440":{"body":60,"breadcrumbs":5,"title":3},"1441":{"body":31,"breadcrumbs":5,"title":3},"1442":{"body":0,"breadcrumbs":4,"title":2},"1443":{"body":23,"breadcrumbs":4,"title":2},"1444":{"body":24,"breadcrumbs":4,"title":2},"1445":{"body":63,"breadcrumbs":4,"title":2},"1446":{"body":41,"breadcrumbs":5,"title":3},"1447":{"body":0,"breadcrumbs":5,"title":3},"1448":{"body":36,"breadcrumbs":4,"title":2},"1449":{"body":65,"breadcrumbs":4,"title":2},"145":{"body":14,"breadcrumbs":5,"title":3},"1450":{"body":0,"breadcrumbs":4,"title":2},"1451":{"body":31,"breadcrumbs":4,"title":2},"1452":{"body":40,"breadcrumbs":4,"title":2},"1453":{"body":59,"breadcrumbs":4,"title":2},"1454":{"body":8,"breadcrumbs":6,"title":3},"1455":{"body":19,"breadcrumbs":4,"title":1},"1456":{"body":17,"breadcrumbs":6,"title":3},"1457":{"body":26,"breadcrumbs":5,"title":2},"1458":{"body":834,"breadcrumbs":5,"title":2},"1459":{"body":19,"breadcrumbs":6,"title":3},"146":{"body":33,"breadcrumbs":7,"title":5},"1460":{"body":9,"breadcrumbs":5,"title":2},"1461":{"body":59,"breadcrumbs":5,"title":2},"1462":{"body":34,"breadcrumbs":5,"title":2},"1463":{"body":25,"breadcrumbs":6,"title":3},"1464":{"body":36,"breadcrumbs":6,"title":3},"1465":{"body":33,"breadcrumbs":6,"title":3},"1466":{"body":20,"breadcrumbs":4,"title":1},"1467":{"body":0,"breadcrumbs":4,"title":1},"1468":{"body":3,"breadcrumbs":5,"title":2},"1469":{"body":5,"breadcrumbs":6,"title":3},"147":{"body":35,"breadcrumbs":8,"title":6},"1470":{"body":4,"breadcrumbs":5,"title":2},"1471":{"body":7,"breadcrumbs":6,"title":3},"1472":{"body":18,"breadcrumbs":5,"title":2},"1473":{"body":9,"breadcrumbs":4,"title":1},"1474":{"body":6,"breadcrumbs":7,"title":4},"1475":{"body":1068,"breadcrumbs":5,"title":2},"1476":{"body":6,"breadcrumbs":4,"title":2},"1477":{"body":29,"breadcrumbs":5,"title":3},"1478":{"body":24,"breadcrumbs":4,"title":2},"1479":{"body":21,"breadcrumbs":5,"title":3},"148":{"body":34,"breadcrumbs":6,"title":4},"1480":{"body":10,"breadcrumbs":8,"title":5},"1481":{"body":12,"breadcrumbs":5,"title":2},"1482":{"body":38,"breadcrumbs":4,"title":1},"1483":{"body":33,"breadcrumbs":5,"title":2},"1484":{"body":0,"breadcrumbs":5,"title":2},"1485":{"body":28,"breadcrumbs":6,"title":3},"1486":{"body":30,"breadcrumbs":7,"title":4},"1487":{"body":1141,"breadcrumbs":6,"title":3},"1488":{"body":7,"breadcrumbs":7,"title":4},"1489":{"body":19,"breadcrumbs":4,"title":1},"149":{"body":28,"breadcrumbs":6,"title":4},"1490":{"body":0,"breadcrumbs":5,"title":2},"1491":{"body":137,"breadcrumbs":7,"title":4},"1492":{"body":0,"breadcrumbs":5,"title":2},"1493":{"body":779,"breadcrumbs":8,"title":5},"1494":{"body":0,"breadcrumbs":7,"title":4},"1495":{"body":14,"breadcrumbs":4,"title":1},"1496":{"body":41,"breadcrumbs":5,"title":2},"1497":{"b
ody":0,"breadcrumbs":4,"title":1},"1498":{"body":409,"breadcrumbs":6,"title":3},"1499":{"body":354,"breadcrumbs":6,"title":3},"15":{"body":0,"breadcrumbs":4,"title":2},"150":{"body":31,"breadcrumbs":6,"title":4},"1500":{"body":14,"breadcrumbs":6,"title":3},"1501":{"body":51,"breadcrumbs":4,"title":1},"1502":{"body":0,"breadcrumbs":5,"title":2},"1503":{"body":1101,"breadcrumbs":5,"title":2},"1504":{"body":13,"breadcrumbs":4,"title":1},"1505":{"body":0,"breadcrumbs":5,"title":2},"1506":{"body":627,"breadcrumbs":5,"title":2},"1507":{"body":14,"breadcrumbs":3,"title":2},"1508":{"body":23,"breadcrumbs":3,"title":2},"1509":{"body":0,"breadcrumbs":3,"title":2},"151":{"body":45,"breadcrumbs":6,"title":4},"1510":{"body":1390,"breadcrumbs":3,"title":2},"1511":{"body":9,"breadcrumbs":7,"title":4},"1512":{"body":23,"breadcrumbs":4,"title":1},"1513":{"body":0,"breadcrumbs":5,"title":2},"1514":{"body":20,"breadcrumbs":5,"title":2},"1515":{"body":12,"breadcrumbs":5,"title":2},"1516":{"body":26,"breadcrumbs":5,"title":2},"1517":{"body":14,"breadcrumbs":5,"title":2},"1518":{"body":14,"breadcrumbs":7,"title":4},"1519":{"body":0,"breadcrumbs":5,"title":2},"152":{"body":25,"breadcrumbs":6,"title":4},"1520":{"body":1865,"breadcrumbs":6,"title":3},"1521":{"body":10,"breadcrumbs":6,"title":3},"1522":{"body":45,"breadcrumbs":4,"title":1},"1523":{"body":15,"breadcrumbs":5,"title":2},"1524":{"body":0,"breadcrumbs":4,"title":1},"1525":{"body":40,"breadcrumbs":5,"title":2},"1526":{"body":1133,"breadcrumbs":5,"title":2},"1527":{"body":638,"breadcrumbs":7,"title":4},"1528":{"body":0,"breadcrumbs":6,"title":4},"1529":{"body":13,"breadcrumbs":6,"title":4},"153":{"body":0,"breadcrumbs":3,"title":1},"1530":{"body":0,"breadcrumbs":5,"title":3},"1531":{"body":25,"breadcrumbs":5,"title":3},"1532":{"body":23,"breadcrumbs":5,"title":3},"1533":{"body":22,"breadcrumbs":7,"title":5},"1534":{"body":33,"breadcrumbs":5,"title":3},"1535":{"body":23,"breadcrumbs":7,"title":5},"1536":{"body":26,"breadcrumbs":5,"title":3},"1537":{"body":24,"breadcrumbs":7,"title":5},"1538":{"body":24,"breadcrumbs":5,"title":3},"1539":{"body":22,"breadcrumbs":6,"title":4},"154":{"body":13,"breadcrumbs":5,"title":3},"1540":{"body":10,"breadcrumbs":5,"title":3},"1541":{"body":12,"breadcrumbs":5,"title":3},"1542":{"body":11,"breadcrumbs":4,"title":2},"1543":{"body":21,"breadcrumbs":4,"title":2},"1544":{"body":93,"breadcrumbs":4,"title":2},"1545":{"body":27,"breadcrumbs":3,"title":1},"1546":{"body":17,"breadcrumbs":3,"title":1},"1547":{"body":23,"breadcrumbs":3,"title":1},"1548":{"body":20,"breadcrumbs":4,"title":2},"1549":{"body":9,"breadcrumbs":7,"title":4},"155":{"body":17,"breadcrumbs":4,"title":2},"1550":{"body":26,"breadcrumbs":4,"title":1},"1551":{"body":44,"breadcrumbs":4,"title":1},"1552":{"body":938,"breadcrumbs":5,"title":2},"1553":{"body":23,"breadcrumbs":7,"title":4},"1554":{"body":0,"breadcrumbs":4,"title":1},"1555":{"body":48,"breadcrumbs":4,"title":1},"1556":{"body":28,"breadcrumbs":5,"title":2},"1557":{"body":0,"breadcrumbs":5,"title":2},"1558":{"body":22,"breadcrumbs":6,"title":3},"1559":{"body":39,"breadcrumbs":7,"title":4},"156":{"body":14,"breadcrumbs":5,"title":3},"1560":{"body":69,"breadcrumbs":7,"title":4},"1561":{"body":0,"breadcrumbs":4,"title":1},"1562":{"body":51,"breadcrumbs":5,"title":2},"1563":{"body":24,"breadcrumbs":5,"title":2},"1564":{"body":0,"breadcrumbs":4,"title":1},"1565":{"body":25,"breadcrumbs":5,"title":2},"1566":{"body":25,"breadcrumbs":5,"title":2},"1567":{"body":32,"breadcrumbs":6,"title":3},"1568":{"body":17,"brea
dcrumbs":5,"title":2},"1569":{"body":29,"breadcrumbs":5,"title":2},"157":{"body":6,"breadcrumbs":4,"title":2},"1570":{"body":0,"breadcrumbs":5,"title":2},"1571":{"body":29,"breadcrumbs":5,"title":2},"1572":{"body":35,"breadcrumbs":5,"title":2},"1573":{"body":38,"breadcrumbs":5,"title":2},"1574":{"body":28,"breadcrumbs":5,"title":2},"1575":{"body":0,"breadcrumbs":4,"title":1},"1576":{"body":26,"breadcrumbs":5,"title":2},"1577":{"body":29,"breadcrumbs":5,"title":2},"1578":{"body":60,"breadcrumbs":6,"title":3},"1579":{"body":59,"breadcrumbs":5,"title":2},"158":{"body":7,"breadcrumbs":4,"title":2},"1580":{"body":31,"breadcrumbs":4,"title":1},"1581":{"body":0,"breadcrumbs":6,"title":3},"1582":{"body":19,"breadcrumbs":5,"title":2},"1583":{"body":17,"breadcrumbs":4,"title":1},"1584":{"body":16,"breadcrumbs":5,"title":2},"1585":{"body":18,"breadcrumbs":5,"title":2},"1586":{"body":16,"breadcrumbs":5,"title":2},"1587":{"body":0,"breadcrumbs":5,"title":2},"1588":{"body":30,"breadcrumbs":5,"title":2},"1589":{"body":23,"breadcrumbs":5,"title":2},"159":{"body":9,"breadcrumbs":4,"title":2},"1590":{"body":37,"breadcrumbs":5,"title":2},"1591":{"body":0,"breadcrumbs":5,"title":2},"1592":{"body":9,"breadcrumbs":5,"title":2},"1593":{"body":9,"breadcrumbs":5,"title":2},"1594":{"body":9,"breadcrumbs":5,"title":2},"1595":{"body":0,"breadcrumbs":5,"title":2},"1596":{"body":29,"breadcrumbs":5,"title":2},"1597":{"body":26,"breadcrumbs":5,"title":2},"1598":{"body":0,"breadcrumbs":5,"title":2},"1599":{"body":30,"breadcrumbs":6,"title":3},"16":{"body":18,"breadcrumbs":5,"title":3},"160":{"body":19,"breadcrumbs":3,"title":1},"1600":{"body":30,"breadcrumbs":7,"title":4},"1601":{"body":16,"breadcrumbs":6,"title":3},"1602":{"body":19,"breadcrumbs":5,"title":2},"1603":{"body":22,"breadcrumbs":4,"title":1},"1604":{"body":0,"breadcrumbs":10,"title":5},"1605":{"body":0,"breadcrumbs":7,"title":2},"1606":{"body":794,"breadcrumbs":9,"title":4},"1607":{"body":33,"breadcrumbs":8,"title":3},"1608":{"body":24,"breadcrumbs":7,"title":2},"1609":{"body":0,"breadcrumbs":7,"title":2},"161":{"body":21,"breadcrumbs":6,"title":4},"1610":{"body":48,"breadcrumbs":6,"title":1},"1611":{"body":42,"breadcrumbs":7,"title":2},"1612":{"body":0,"breadcrumbs":7,"title":2},"1613":{"body":309,"breadcrumbs":7,"title":2},"1614":{"body":14,"breadcrumbs":7,"title":4},"1615":{"body":30,"breadcrumbs":5,"title":2},"1616":{"body":44,"breadcrumbs":4,"title":1},"1617":{"body":4971,"breadcrumbs":5,"title":2},"1618":{"body":6,"breadcrumbs":7,"title":4},"1619":{"body":29,"breadcrumbs":4,"title":1},"162":{"body":42,"breadcrumbs":6,"title":4},"1620":{"body":47,"breadcrumbs":5,"title":2},"1621":{"body":0,"breadcrumbs":4,"title":1},"1622":{"body":10,"breadcrumbs":4,"title":1},"1623":{"body":1835,"breadcrumbs":5,"title":2},"1624":{"body":22,"breadcrumbs":12,"title":9},"1625":{"body":14,"breadcrumbs":4,"title":1},"1626":{"body":31,"breadcrumbs":5,"title":2},"1627":{"body":53,"breadcrumbs":6,"title":3},"1628":{"body":28,"breadcrumbs":5,"title":2},"1629":{"body":17,"breadcrumbs":4,"title":1},"163":{"body":35,"breadcrumbs":8,"title":6},"1630":{"body":41,"breadcrumbs":4,"title":1},"1631":{"body":0,"breadcrumbs":7,"title":4},"1632":{"body":32,"breadcrumbs":4,"title":1},"1633":{"body":0,"breadcrumbs":4,"title":1},"1634":{"body":8,"breadcrumbs":4,"title":1},"1635":{"body":17,"breadcrumbs":5,"title":2},"1636":{"body":36,"breadcrumbs":5,"title":2},"1637":{"body":0,"breadcrumbs":4,"title":1},"1638":{"body":62,"breadcrumbs":5,"title":2},"1639":{"body":85,"breadcrumbs":5,"title":2},"1
64":{"body":30,"breadcrumbs":7,"title":5},"1640":{"body":132,"breadcrumbs":5,"title":2},"1641":{"body":33,"breadcrumbs":5,"title":2},"1642":{"body":12,"breadcrumbs":5,"title":2},"1643":{"body":8,"breadcrumbs":6,"title":3},"1644":{"body":40,"breadcrumbs":5,"title":2},"1645":{"body":40,"breadcrumbs":5,"title":2},"1646":{"body":0,"breadcrumbs":4,"title":1},"1647":{"body":16,"breadcrumbs":6,"title":3},"1648":{"body":28,"breadcrumbs":6,"title":3},"1649":{"body":14,"breadcrumbs":6,"title":3},"165":{"body":26,"breadcrumbs":6,"title":4},"1650":{"body":37,"breadcrumbs":6,"title":3},"1651":{"body":0,"breadcrumbs":5,"title":2},"1652":{"body":18,"breadcrumbs":6,"title":3},"1653":{"body":19,"breadcrumbs":6,"title":3},"1654":{"body":39,"breadcrumbs":6,"title":3},"1655":{"body":40,"breadcrumbs":6,"title":3},"1656":{"body":28,"breadcrumbs":4,"title":1},"1657":{"body":26,"breadcrumbs":5,"title":2},"1658":{"body":25,"breadcrumbs":4,"title":1},"1659":{"body":9,"breadcrumbs":8,"title":5},"166":{"body":33,"breadcrumbs":8,"title":6},"1660":{"body":34,"breadcrumbs":4,"title":1},"1661":{"body":0,"breadcrumbs":5,"title":2},"1662":{"body":1080,"breadcrumbs":7,"title":4},"1663":{"body":0,"breadcrumbs":6,"title":3},"1664":{"body":0,"breadcrumbs":8,"title":4},"1665":{"body":12,"breadcrumbs":7,"title":5},"1666":{"body":28,"breadcrumbs":4,"title":2},"1667":{"body":422,"breadcrumbs":3,"title":1},"1668":{"body":17,"breadcrumbs":6,"title":3},"1669":{"body":14,"breadcrumbs":5,"title":2},"167":{"body":34,"breadcrumbs":7,"title":5},"1670":{"body":34,"breadcrumbs":4,"title":1},"1671":{"body":1337,"breadcrumbs":4,"title":1},"1672":{"body":0,"breadcrumbs":8,"title":4},"1673":{"body":12,"breadcrumbs":5,"title":1},"1674":{"body":48,"breadcrumbs":6,"title":2},"1675":{"body":2717,"breadcrumbs":7,"title":3},"1676":{"body":9,"breadcrumbs":7,"title":4},"1677":{"body":13,"breadcrumbs":5,"title":2},"1678":{"body":40,"breadcrumbs":4,"title":1},"1679":{"body":30,"breadcrumbs":5,"title":2},"168":{"body":34,"breadcrumbs":6,"title":4},"1680":{"body":0,"breadcrumbs":5,"title":2},"1681":{"body":1103,"breadcrumbs":4,"title":1},"1682":{"body":0,"breadcrumbs":4,"title":1},"1683":{"body":147,"breadcrumbs":6,"title":3},"1684":{"body":169,"breadcrumbs":6,"title":3},"1685":{"body":18,"breadcrumbs":10,"title":7},"1686":{"body":58,"breadcrumbs":4,"title":1},"1687":{"body":0,"breadcrumbs":6,"title":3},"1688":{"body":1374,"breadcrumbs":6,"title":3},"1689":{"body":14,"breadcrumbs":11,"title":7},"169":{"body":0,"breadcrumbs":5,"title":3},"1690":{"body":30,"breadcrumbs":6,"title":2},"1691":{"body":63,"breadcrumbs":6,"title":2},"1692":{"body":0,"breadcrumbs":8,"title":4},"1693":{"body":1751,"breadcrumbs":8,"title":4},"1694":{"body":13,"breadcrumbs":6,"title":3},"1695":{"body":22,"breadcrumbs":5,"title":2},"1696":{"body":42,"breadcrumbs":4,"title":1},"1697":{"body":0,"breadcrumbs":5,"title":2},"1698":{"body":20,"breadcrumbs":7,"title":4},"1699":{"body":28,"breadcrumbs":6,"title":3},"17":{"body":25,"breadcrumbs":4,"title":2},"170":{"body":36,"breadcrumbs":6,"title":4},"1700":{"body":37,"breadcrumbs":5,"title":2},"1701":{"body":0,"breadcrumbs":4,"title":1},"1702":{"body":54,"breadcrumbs":5,"title":2},"1703":{"body":22,"breadcrumbs":5,"title":2},"1704":{"body":28,"breadcrumbs":5,"title":2},"1705":{"body":0,"breadcrumbs":4,"title":1},"1706":{"body":18,"breadcrumbs":4,"title":1},"1707":{"body":0,"breadcrumbs":4,"title":1},"1708":{"body":8,"breadcrumbs":5,"title":2},"1709":{"body":6,"breadcrumbs":4,"title":1},"171":{"body":14,"breadcrumbs":8,"title":6},"1710":{"body"
:8,"breadcrumbs":5,"title":2},"1711":{"body":9,"breadcrumbs":6,"title":3},"1712":{"body":22,"breadcrumbs":4,"title":1},"1713":{"body":153,"breadcrumbs":4,"title":1},"1714":{"body":763,"breadcrumbs":5,"title":2},"1715":{"body":0,"breadcrumbs":6,"title":3},"1716":{"body":0,"breadcrumbs":7,"title":4},"1717":{"body":20,"breadcrumbs":10,"title":7},"1718":{"body":64,"breadcrumbs":5,"title":2},"1719":{"body":0,"breadcrumbs":6,"title":3},"172":{"body":10,"breadcrumbs":6,"title":4},"1720":{"body":287,"breadcrumbs":7,"title":4},"1721":{"body":176,"breadcrumbs":4,"title":1},"1722":{"body":9,"breadcrumbs":7,"title":4},"1723":{"body":16,"breadcrumbs":4,"title":1},"1724":{"body":0,"breadcrumbs":5,"title":2},"1725":{"body":1109,"breadcrumbs":6,"title":3},"1726":{"body":10,"breadcrumbs":4,"title":2},"1727":{"body":15,"breadcrumbs":4,"title":2},"1728":{"body":0,"breadcrumbs":5,"title":3},"1729":{"body":2122,"breadcrumbs":5,"title":3},"173":{"body":88,"breadcrumbs":4,"title":2},"1730":{"body":15,"breadcrumbs":6,"title":5},"1731":{"body":62,"breadcrumbs":3,"title":2},"1732":{"body":46,"breadcrumbs":2,"title":1},"1733":{"body":13,"breadcrumbs":3,"title":2},"1734":{"body":8,"breadcrumbs":5,"title":4},"1735":{"body":2312,"breadcrumbs":4,"title":3},"1736":{"body":24,"breadcrumbs":3,"title":2},"1737":{"body":88,"breadcrumbs":2,"title":1},"1738":{"body":13,"breadcrumbs":5,"title":3},"1739":{"body":15,"breadcrumbs":3,"title":1},"174":{"body":0,"breadcrumbs":3,"title":1},"1740":{"body":0,"breadcrumbs":4,"title":2},"1741":{"body":1826,"breadcrumbs":7,"title":5},"1742":{"body":15,"breadcrumbs":4,"title":2},"1743":{"body":15,"breadcrumbs":3,"title":1},"1744":{"body":0,"breadcrumbs":4,"title":2},"1745":{"body":1680,"breadcrumbs":4,"title":2},"1746":{"body":13,"breadcrumbs":8,"title":5},"1747":{"body":25,"breadcrumbs":4,"title":1},"1748":{"body":0,"breadcrumbs":8,"title":5},"1749":{"body":19,"breadcrumbs":8,"title":5},"175":{"body":17,"breadcrumbs":5,"title":3},"1750":{"body":27,"breadcrumbs":7,"title":4},"1751":{"body":49,"breadcrumbs":7,"title":4},"1752":{"body":21,"breadcrumbs":7,"title":4},"1753":{"body":82,"breadcrumbs":7,"title":4},"1754":{"body":0,"breadcrumbs":6,"title":3},"1755":{"body":78,"breadcrumbs":6,"title":3},"1756":{"body":63,"breadcrumbs":6,"title":3},"1757":{"body":64,"breadcrumbs":6,"title":3},"1758":{"body":0,"breadcrumbs":5,"title":2},"1759":{"body":16,"breadcrumbs":6,"title":3},"176":{"body":17,"breadcrumbs":6,"title":4},"1760":{"body":16,"breadcrumbs":6,"title":3},"1761":{"body":0,"breadcrumbs":5,"title":2},"1762":{"body":55,"breadcrumbs":6,"title":3},"1763":{"body":111,"breadcrumbs":6,"title":3},"1764":{"body":0,"breadcrumbs":6,"title":3},"1765":{"body":17,"breadcrumbs":6,"title":3},"1766":{"body":10,"breadcrumbs":5,"title":2},"1767":{"body":12,"breadcrumbs":5,"title":2},"1768":{"body":11,"breadcrumbs":5,"title":2},"1769":{"body":13,"breadcrumbs":5,"title":2},"177":{"body":24,"breadcrumbs":5,"title":3},"1770":{"body":0,"breadcrumbs":6,"title":3},"1771":{"body":26,"breadcrumbs":5,"title":2},"1772":{"body":25,"breadcrumbs":6,"title":3},"1773":{"body":25,"breadcrumbs":5,"title":2},"1774":{"body":30,"breadcrumbs":5,"title":2},"1775":{"body":24,"breadcrumbs":4,"title":1},"1776":{"body":0,"breadcrumbs":7,"title":5},"1777":{"body":12,"breadcrumbs":8,"title":6},"1778":{"body":53,"breadcrumbs":4,"title":2},"1779":{"body":38,"breadcrumbs":4,"title":2},"178":{"body":10,"breadcrumbs":4,"title":2},"1780":{"body":43,"breadcrumbs":4,"title":2},"1781":{"body":34,"breadcrumbs":5,"title":3},"1782":{"body":105,"brea
dcrumbs":4,"title":2},"1783":{"body":20,"breadcrumbs":5,"title":3},"1784":{"body":29,"breadcrumbs":4,"title":2},"1785":{"body":19,"breadcrumbs":4,"title":2},"1786":{"body":0,"breadcrumbs":8,"title":4},"1787":{"body":1086,"breadcrumbs":8,"title":4},"1788":{"body":8,"breadcrumbs":6,"title":4},"1789":{"body":16,"breadcrumbs":5,"title":3},"179":{"body":7,"breadcrumbs":4,"title":2},"1790":{"body":31,"breadcrumbs":8,"title":6},"1791":{"body":30,"breadcrumbs":5,"title":3},"1792":{"body":6,"breadcrumbs":6,"title":3},"1793":{"body":31,"breadcrumbs":7,"title":4},"1794":{"body":48,"breadcrumbs":6,"title":3},"1795":{"body":85,"breadcrumbs":7,"title":4},"1796":{"body":53,"breadcrumbs":6,"title":3},"1797":{"body":0,"breadcrumbs":5,"title":2},"1798":{"body":20,"breadcrumbs":6,"title":3},"1799":{"body":55,"breadcrumbs":6,"title":3},"18":{"body":13,"breadcrumbs":4,"title":2},"180":{"body":8,"breadcrumbs":2,"title":1},"1800":{"body":0,"breadcrumbs":5,"title":2},"1801":{"body":30,"breadcrumbs":5,"title":2},"1802":{"body":31,"breadcrumbs":5,"title":2},"1803":{"body":24,"breadcrumbs":5,"title":2},"1804":{"body":0,"breadcrumbs":5,"title":2},"1805":{"body":52,"breadcrumbs":5,"title":2},"1806":{"body":42,"breadcrumbs":6,"title":3},"1807":{"body":0,"breadcrumbs":6,"title":3},"1808":{"body":49,"breadcrumbs":6,"title":3},"1809":{"body":0,"breadcrumbs":6,"title":3},"181":{"body":14,"breadcrumbs":2,"title":1},"1810":{"body":50,"breadcrumbs":7,"title":4},"1811":{"body":34,"breadcrumbs":5,"title":2},"1812":{"body":0,"breadcrumbs":5,"title":2},"1813":{"body":40,"breadcrumbs":5,"title":2},"1814":{"body":37,"breadcrumbs":5,"title":2},"1815":{"body":0,"breadcrumbs":6,"title":3},"1816":{"body":44,"breadcrumbs":6,"title":3},"1817":{"body":39,"breadcrumbs":6,"title":3},"1818":{"body":38,"breadcrumbs":6,"title":3},"1819":{"body":31,"breadcrumbs":5,"title":2},"182":{"body":27,"breadcrumbs":5,"title":4},"1820":{"body":0,"breadcrumbs":5,"title":2},"1821":{"body":56,"breadcrumbs":6,"title":3},"1822":{"body":58,"breadcrumbs":6,"title":3},"1823":{"body":38,"breadcrumbs":5,"title":2},"1824":{"body":64,"breadcrumbs":5,"title":2},"1825":{"body":53,"breadcrumbs":7,"title":4},"1826":{"body":0,"breadcrumbs":6,"title":3},"1827":{"body":32,"breadcrumbs":6,"title":3},"1828":{"body":38,"breadcrumbs":8,"title":5},"1829":{"body":51,"breadcrumbs":6,"title":3},"183":{"body":47,"breadcrumbs":5,"title":4},"1830":{"body":72,"breadcrumbs":5,"title":2},"1831":{"body":41,"breadcrumbs":5,"title":2},"1832":{"body":13,"breadcrumbs":7,"title":5},"1833":{"body":0,"breadcrumbs":2,"title":0},"1834":{"body":34,"breadcrumbs":5,"title":3},"1835":{"body":568,"breadcrumbs":4,"title":2},"1836":{"body":0,"breadcrumbs":6,"title":4},"1837":{"body":74,"breadcrumbs":4,"title":2},"1838":{"body":97,"breadcrumbs":4,"title":2},"1839":{"body":151,"breadcrumbs":4,"title":2},"184":{"body":52,"breadcrumbs":6,"title":5},"1840":{"body":27,"breadcrumbs":4,"title":2},"1841":{"body":0,"breadcrumbs":4,"title":2},"1842":{"body":8,"breadcrumbs":5,"title":3},"1843":{"body":27,"breadcrumbs":5,"title":3},"1844":{"body":14,"breadcrumbs":5,"title":3},"1845":{"body":22,"breadcrumbs":4,"title":2},"1846":{"body":19,"breadcrumbs":4,"title":2},"1847":{"body":51,"breadcrumbs":3,"title":1},"1848":{"body":12,"breadcrumbs":4,"title":2},"1849":{"body":33,"breadcrumbs":3,"title":1},"185":{"body":45,"breadcrumbs":6,"title":5},"1850":{"body":6,"breadcrumbs":6,"title":4},"1851":{"body":711,"breadcrumbs":3,"title":1},"1852":{"body":0,"breadcrumbs":8,"title":5},"1853":{"body":19,"breadcrumbs":5,"title":2},"1
854":{"body":0,"breadcrumbs":5,"title":2},"1855":{"body":380,"breadcrumbs":7,"title":4},"1856":{"body":0,"breadcrumbs":5,"title":3},"1857":{"body":13,"breadcrumbs":3,"title":1},"1858":{"body":0,"breadcrumbs":5,"title":3},"1859":{"body":1039,"breadcrumbs":6,"title":4},"186":{"body":3,"breadcrumbs":7,"title":6},"1860":{"body":0,"breadcrumbs":6,"title":3},"187":{"body":11,"breadcrumbs":2,"title":1},"188":{"body":19,"breadcrumbs":3,"title":2},"189":{"body":12,"breadcrumbs":3,"title":2},"19":{"body":21,"breadcrumbs":3,"title":1},"190":{"body":30,"breadcrumbs":6,"title":5},"191":{"body":21,"breadcrumbs":5,"title":4},"192":{"body":0,"breadcrumbs":4,"title":3},"193":{"body":18,"breadcrumbs":5,"title":4},"194":{"body":28,"breadcrumbs":3,"title":2},"195":{"body":35,"breadcrumbs":3,"title":2},"196":{"body":0,"breadcrumbs":4,"title":3},"197":{"body":15,"breadcrumbs":4,"title":3},"198":{"body":19,"breadcrumbs":3,"title":2},"199":{"body":15,"breadcrumbs":4,"title":3},"2":{"body":31,"breadcrumbs":3,"title":2},"20":{"body":989,"breadcrumbs":5,"title":3},"200":{"body":22,"breadcrumbs":4,"title":3},"201":{"body":0,"breadcrumbs":3,"title":2},"202":{"body":30,"breadcrumbs":4,"title":3},"203":{"body":14,"breadcrumbs":3,"title":2},"204":{"body":0,"breadcrumbs":3,"title":2},"205":{"body":16,"breadcrumbs":2,"title":1},"206":{"body":17,"breadcrumbs":3,"title":2},"207":{"body":46,"breadcrumbs":3,"title":2},"208":{"body":21,"breadcrumbs":3,"title":2},"209":{"body":18,"breadcrumbs":3,"title":2},"21":{"body":15,"breadcrumbs":5,"title":3},"210":{"body":15,"breadcrumbs":6,"title":3},"211":{"body":27,"breadcrumbs":5,"title":2},"212":{"body":22,"breadcrumbs":4,"title":1},"213":{"body":57,"breadcrumbs":6,"title":3},"214":{"body":44,"breadcrumbs":5,"title":2},"215":{"body":32,"breadcrumbs":8,"title":5},"216":{"body":0,"breadcrumbs":8,"title":5},"217":{"body":97,"breadcrumbs":8,"title":5},"218":{"body":57,"breadcrumbs":9,"title":6},"219":{"body":64,"breadcrumbs":8,"title":5},"22":{"body":17,"breadcrumbs":4,"title":2},"220":{"body":58,"breadcrumbs":8,"title":5},"221":{"body":40,"breadcrumbs":8,"title":5},"222":{"body":4,"breadcrumbs":8,"title":5},"223":{"body":14,"breadcrumbs":6,"title":3},"224":{"body":42,"breadcrumbs":6,"title":3},"225":{"body":22,"breadcrumbs":6,"title":3},"226":{"body":24,"breadcrumbs":8,"title":5},"227":{"body":0,"breadcrumbs":5,"title":2},"228":{"body":42,"breadcrumbs":7,"title":4},"229":{"body":59,"breadcrumbs":7,"title":4},"23":{"body":16,"breadcrumbs":3,"title":1},"230":{"body":45,"breadcrumbs":7,"title":4},"231":{"body":62,"breadcrumbs":6,"title":3},"232":{"body":0,"breadcrumbs":6,"title":3},"233":{"body":27,"breadcrumbs":7,"title":4},"234":{"body":17,"breadcrumbs":7,"title":4},"235":{"body":9,"breadcrumbs":5,"title":2},"236":{"body":0,"breadcrumbs":5,"title":2},"237":{"body":19,"breadcrumbs":8,"title":5},"238":{"body":22,"breadcrumbs":8,"title":5},"239":{"body":36,"breadcrumbs":8,"title":5},"24":{"body":0,"breadcrumbs":4,"title":2},"240":{"body":24,"breadcrumbs":9,"title":6},"241":{"body":0,"breadcrumbs":5,"title":2},"242":{"body":18,"breadcrumbs":6,"title":3},"243":{"body":17,"breadcrumbs":5,"title":2},"244":{"body":34,"breadcrumbs":5,"title":2},"245":{"body":28,"breadcrumbs":5,"title":2},"246":{"body":41,"breadcrumbs":5,"title":2},"247":{"body":43,"breadcrumbs":5,"title":2},"248":{"body":0,"breadcrumbs":4,"title":2},"249":{"body":35,"breadcrumbs":4,"title":2},"25":{"body":1173,"breadcrumbs":5,"title":3},"250":{"body":0,"breadcrumbs":5,"title":3},"251":{"body":1138,"breadcrumbs":4,"title":2},"252":
{"body":11,"breadcrumbs":6,"title":4},"253":{"body":22,"breadcrumbs":4,"title":2},"254":{"body":0,"breadcrumbs":4,"title":2},"255":{"body":21,"breadcrumbs":4,"title":2},"256":{"body":42,"breadcrumbs":4,"title":2},"257":{"body":2499,"breadcrumbs":4,"title":2},"258":{"body":0,"breadcrumbs":4,"title":2},"259":{"body":26,"breadcrumbs":3,"title":1},"26":{"body":7,"breadcrumbs":7,"title":4},"260":{"body":0,"breadcrumbs":5,"title":3},"261":{"body":1248,"breadcrumbs":8,"title":6},"262":{"body":0,"breadcrumbs":4,"title":2},"263":{"body":25,"breadcrumbs":3,"title":1},"264":{"body":0,"breadcrumbs":5,"title":3},"265":{"body":109,"breadcrumbs":6,"title":4},"266":{"body":112,"breadcrumbs":6,"title":4},"267":{"body":89,"breadcrumbs":6,"title":4},"268":{"body":92,"breadcrumbs":6,"title":4},"269":{"body":139,"breadcrumbs":6,"title":4},"27":{"body":55,"breadcrumbs":5,"title":2},"270":{"body":93,"breadcrumbs":6,"title":4},"271":{"body":96,"breadcrumbs":6,"title":4},"272":{"body":59,"breadcrumbs":6,"title":4},"273":{"body":0,"breadcrumbs":5,"title":3},"274":{"body":40,"breadcrumbs":5,"title":3},"275":{"body":44,"breadcrumbs":5,"title":3},"276":{"body":0,"breadcrumbs":5,"title":3},"277":{"body":67,"breadcrumbs":5,"title":3},"278":{"body":26,"breadcrumbs":6,"title":4},"279":{"body":0,"breadcrumbs":5,"title":3},"28":{"body":11,"breadcrumbs":5,"title":2},"280":{"body":35,"breadcrumbs":5,"title":3},"281":{"body":14,"breadcrumbs":8,"title":5},"282":{"body":29,"breadcrumbs":5,"title":2},"283":{"body":0,"breadcrumbs":8,"title":5},"284":{"body":1059,"breadcrumbs":6,"title":3},"285":{"body":51,"breadcrumbs":5,"title":2},"286":{"body":0,"breadcrumbs":7,"title":4},"287":{"body":390,"breadcrumbs":4,"title":1},"288":{"body":9,"breadcrumbs":9,"title":6},"289":{"body":19,"breadcrumbs":4,"title":1},"29":{"body":87,"breadcrumbs":6,"title":3},"290":{"body":42,"breadcrumbs":5,"title":2},"291":{"body":0,"breadcrumbs":5,"title":2},"292":{"body":1453,"breadcrumbs":7,"title":4},"293":{"body":11,"breadcrumbs":7,"title":4},"294":{"body":27,"breadcrumbs":5,"title":2},"295":{"body":0,"breadcrumbs":6,"title":3},"296":{"body":31,"breadcrumbs":8,"title":5},"297":{"body":38,"breadcrumbs":9,"title":6},"298":{"body":55,"breadcrumbs":10,"title":7},"299":{"body":0,"breadcrumbs":7,"title":4},"3":{"body":52,"breadcrumbs":3,"title":2},"30":{"body":133,"breadcrumbs":6,"title":3},"300":{"body":2100,"breadcrumbs":7,"title":4},"301":{"body":7,"breadcrumbs":6,"title":3},"302":{"body":0,"breadcrumbs":7,"title":4},"303":{"body":14,"breadcrumbs":8,"title":5},"304":{"body":153,"breadcrumbs":5,"title":2},"305":{"body":0,"breadcrumbs":5,"title":2},"306":{"body":537,"breadcrumbs":9,"title":6},"307":{"body":9,"breadcrumbs":6,"title":4},"308":{"body":43,"breadcrumbs":3,"title":1},"309":{"body":0,"breadcrumbs":3,"title":1},"31":{"body":66,"breadcrumbs":6,"title":3},"310":{"body":1119,"breadcrumbs":5,"title":3},"311":{"body":18,"breadcrumbs":8,"title":5},"312":{"body":20,"breadcrumbs":5,"title":2},"313":{"body":29,"breadcrumbs":4,"title":1},"314":{"body":0,"breadcrumbs":4,"title":1},"315":{"body":34,"breadcrumbs":8,"title":5},"316":{"body":812,"breadcrumbs":7,"title":4},"317":{"body":14,"breadcrumbs":9,"title":5},"318":{"body":1392,"breadcrumbs":7,"title":3},"319":{"body":33,"breadcrumbs":8,"title":4},"32":{"body":35,"breadcrumbs":6,"title":3},"320":{"body":0,"breadcrumbs":7,"title":3},"321":{"body":10,"breadcrumbs":5,"title":1},"322":{"body":673,"breadcrumbs":11,"title":7},"323":{"body":19,"breadcrumbs":8,"title":5},"324":{"body":0,"breadcrumbs":7,"title":4},"325
":{"body":14,"breadcrumbs":4,"title":1},"326":{"body":7,"breadcrumbs":6,"title":3},"327":{"body":0,"breadcrumbs":9,"title":6},"328":{"body":13,"breadcrumbs":8,"title":5},"329":{"body":34,"breadcrumbs":8,"title":5},"33":{"body":0,"breadcrumbs":5,"title":2},"330":{"body":63,"breadcrumbs":10,"title":7},"331":{"body":40,"breadcrumbs":7,"title":4},"332":{"body":36,"breadcrumbs":6,"title":3},"333":{"body":0,"breadcrumbs":10,"title":7},"334":{"body":5,"breadcrumbs":6,"title":3},"335":{"body":27,"breadcrumbs":5,"title":2},"336":{"body":22,"breadcrumbs":5,"title":2},"337":{"body":90,"breadcrumbs":6,"title":3},"338":{"body":37,"breadcrumbs":6,"title":3},"339":{"body":64,"breadcrumbs":6,"title":3},"34":{"body":95,"breadcrumbs":5,"title":2},"340":{"body":0,"breadcrumbs":10,"title":7},"341":{"body":18,"breadcrumbs":6,"title":3},"342":{"body":64,"breadcrumbs":5,"title":2},"343":{"body":80,"breadcrumbs":5,"title":2},"344":{"body":52,"breadcrumbs":6,"title":3},"345":{"body":0,"breadcrumbs":8,"title":5},"346":{"body":87,"breadcrumbs":5,"title":2},"347":{"body":60,"breadcrumbs":7,"title":4},"348":{"body":0,"breadcrumbs":8,"title":5},"349":{"body":41,"breadcrumbs":5,"title":2},"35":{"body":86,"breadcrumbs":5,"title":2},"350":{"body":66,"breadcrumbs":5,"title":2},"351":{"body":0,"breadcrumbs":9,"title":6},"352":{"body":24,"breadcrumbs":5,"title":2},"353":{"body":34,"breadcrumbs":5,"title":2},"354":{"body":12,"breadcrumbs":5,"title":2},"355":{"body":0,"breadcrumbs":7,"title":4},"356":{"body":109,"breadcrumbs":5,"title":2},"357":{"body":0,"breadcrumbs":6,"title":3},"358":{"body":68,"breadcrumbs":6,"title":3},"359":{"body":0,"breadcrumbs":5,"title":2},"36":{"body":96,"breadcrumbs":5,"title":2},"360":{"body":16,"breadcrumbs":7,"title":4},"361":{"body":25,"breadcrumbs":7,"title":4},"362":{"body":13,"breadcrumbs":9,"title":6},"363":{"body":48,"breadcrumbs":4,"title":1},"364":{"body":76,"breadcrumbs":4,"title":2},"365":{"body":57,"breadcrumbs":6,"title":4},"366":{"body":202,"breadcrumbs":3,"title":1},"367":{"body":8,"breadcrumbs":7,"title":4},"368":{"body":24,"breadcrumbs":4,"title":1},"369":{"body":0,"breadcrumbs":4,"title":1},"37":{"body":70,"breadcrumbs":5,"title":2},"370":{"body":1334,"breadcrumbs":6,"title":3},"371":{"body":12,"breadcrumbs":7,"title":4},"372":{"body":29,"breadcrumbs":5,"title":2},"373":{"body":0,"breadcrumbs":6,"title":3},"374":{"body":58,"breadcrumbs":4,"title":1},"375":{"body":81,"breadcrumbs":5,"title":2},"376":{"body":0,"breadcrumbs":5,"title":2},"377":{"body":3398,"breadcrumbs":6,"title":3},"378":{"body":18,"breadcrumbs":7,"title":4},"379":{"body":1746,"breadcrumbs":4,"title":1},"38":{"body":45,"breadcrumbs":5,"title":2},"380":{"body":0,"breadcrumbs":9,"title":5},"381":{"body":1,"breadcrumbs":5,"title":1},"382":{"body":105,"breadcrumbs":5,"title":1},"383":{"body":392,"breadcrumbs":5,"title":1},"384":{"body":0,"breadcrumbs":8,"title":4},"385":{"body":1,"breadcrumbs":5,"title":1},"386":{"body":103,"breadcrumbs":5,"title":1},"387":{"body":10,"breadcrumbs":5,"title":1},"388":{"body":59,"breadcrumbs":6,"title":2},"389":{"body":499,"breadcrumbs":6,"title":2},"39":{"body":84,"breadcrumbs":5,"title":2},"390":{"body":0,"breadcrumbs":8,"title":4},"391":{"body":1,"breadcrumbs":5,"title":1},"392":{"body":128,"breadcrumbs":5,"title":1},"393":{"body":8,"breadcrumbs":5,"title":1},"394":{"body":606,"breadcrumbs":6,"title":2},"395":{"body":0,"breadcrumbs":8,"title":4},"396":{"body":1,"breadcrumbs":5,"title":1},"397":{"body":126,"breadcrumbs":5,"title":1},"398":{"body":7,"breadcrumbs":5,"title":1},"399":{"bo
dy":97,"breadcrumbs":6,"title":2},"4":{"body":28,"breadcrumbs":2,"title":1},"40":{"body":33,"breadcrumbs":5,"title":2},"400":{"body":70,"breadcrumbs":6,"title":2},"401":{"body":49,"breadcrumbs":7,"title":3},"402":{"body":0,"breadcrumbs":5,"title":1},"403":{"body":68,"breadcrumbs":5,"title":1},"404":{"body":43,"breadcrumbs":5,"title":1},"405":{"body":29,"breadcrumbs":5,"title":1},"406":{"body":0,"breadcrumbs":6,"title":2},"407":{"body":20,"breadcrumbs":9,"title":5},"408":{"body":19,"breadcrumbs":9,"title":5},"409":{"body":21,"breadcrumbs":9,"title":5},"41":{"body":51,"breadcrumbs":5,"title":2},"410":{"body":19,"breadcrumbs":8,"title":4},"411":{"body":17,"breadcrumbs":9,"title":5},"412":{"body":0,"breadcrumbs":6,"title":2},"413":{"body":37,"breadcrumbs":6,"title":2},"414":{"body":28,"breadcrumbs":6,"title":2},"415":{"body":28,"breadcrumbs":6,"title":2},"416":{"body":28,"breadcrumbs":6,"title":2},"417":{"body":0,"breadcrumbs":6,"title":2},"418":{"body":13,"breadcrumbs":9,"title":5},"419":{"body":12,"breadcrumbs":9,"title":5},"42":{"body":0,"breadcrumbs":5,"title":2},"420":{"body":10,"breadcrumbs":9,"title":5},"421":{"body":24,"breadcrumbs":5,"title":1},"422":{"body":0,"breadcrumbs":8,"title":4},"423":{"body":1,"breadcrumbs":5,"title":1},"424":{"body":111,"breadcrumbs":5,"title":1},"425":{"body":8,"breadcrumbs":5,"title":1},"426":{"body":42,"breadcrumbs":6,"title":2},"427":{"body":717,"breadcrumbs":6,"title":2},"428":{"body":18,"breadcrumbs":12,"title":7},"429":{"body":21,"breadcrumbs":6,"title":1},"43":{"body":68,"breadcrumbs":5,"title":2},"430":{"body":88,"breadcrumbs":7,"title":2},"431":{"body":1012,"breadcrumbs":6,"title":1},"432":{"body":15,"breadcrumbs":12,"title":8},"433":{"body":22,"breadcrumbs":5,"title":1},"434":{"body":49,"breadcrumbs":8,"title":4},"435":{"body":42,"breadcrumbs":6,"title":2},"436":{"body":62,"breadcrumbs":5,"title":1},"437":{"body":0,"breadcrumbs":5,"title":1},"438":{"body":57,"breadcrumbs":5,"title":1},"439":{"body":26,"breadcrumbs":5,"title":1},"44":{"body":63,"breadcrumbs":5,"title":2},"440":{"body":20,"breadcrumbs":5,"title":1},"441":{"body":0,"breadcrumbs":5,"title":1},"442":{"body":29,"breadcrumbs":6,"title":2},"443":{"body":36,"breadcrumbs":6,"title":2},"444":{"body":17,"breadcrumbs":6,"title":2},"445":{"body":39,"breadcrumbs":6,"title":2},"446":{"body":0,"breadcrumbs":6,"title":2},"447":{"body":32,"breadcrumbs":5,"title":1},"448":{"body":31,"breadcrumbs":5,"title":1},"449":{"body":0,"breadcrumbs":6,"title":2},"45":{"body":38,"breadcrumbs":5,"title":2},"450":{"body":16,"breadcrumbs":9,"title":5},"451":{"body":22,"breadcrumbs":8,"title":4},"452":{"body":20,"breadcrumbs":9,"title":5},"453":{"body":24,"breadcrumbs":9,"title":5},"454":{"body":0,"breadcrumbs":5,"title":1},"455":{"body":20,"breadcrumbs":6,"title":2},"456":{"body":21,"breadcrumbs":6,"title":2},"457":{"body":13,"breadcrumbs":6,"title":2},"458":{"body":0,"breadcrumbs":5,"title":1},"459":{"body":28,"breadcrumbs":6,"title":2},"46":{"body":0,"breadcrumbs":5,"title":2},"460":{"body":29,"breadcrumbs":6,"title":2},"461":{"body":21,"breadcrumbs":5,"title":1},"462":{"body":33,"breadcrumbs":5,"title":1},"463":{"body":15,"breadcrumbs":11,"title":7},"464":{"body":63,"breadcrumbs":7,"title":3},"465":{"body":38,"breadcrumbs":6,"title":2},"466":{"body":0,"breadcrumbs":6,"title":2},"467":{"body":33,"breadcrumbs":11,"title":7},"468":{"body":27,"breadcrumbs":10,"title":6},"469":{"body":44,"breadcrumbs":10,"title":6},"47":{"body":98,"breadcrumbs":5,"title":2},"470":{"body":24,"breadcrumbs":7,"title":3},"471":{"body":7,"
breadcrumbs":6,"title":2},"472":{"body":44,"breadcrumbs":5,"title":1},"473":{"body":622,"breadcrumbs":6,"title":2},"474":{"body":10,"breadcrumbs":11,"title":6},"475":{"body":27,"breadcrumbs":6,"title":1},"476":{"body":12,"breadcrumbs":6,"title":1},"477":{"body":0,"breadcrumbs":7,"title":2},"478":{"body":19,"breadcrumbs":7,"title":2},"479":{"body":0,"breadcrumbs":7,"title":2},"48":{"body":99,"breadcrumbs":5,"title":2},"480":{"body":153,"breadcrumbs":10,"title":5},"481":{"body":110,"breadcrumbs":11,"title":6},"482":{"body":126,"breadcrumbs":11,"title":6},"483":{"body":101,"breadcrumbs":11,"title":6},"484":{"body":0,"breadcrumbs":8,"title":3},"485":{"body":904,"breadcrumbs":9,"title":4},"486":{"body":19,"breadcrumbs":11,"title":6},"487":{"body":66,"breadcrumbs":6,"title":1},"488":{"body":45,"breadcrumbs":6,"title":1},"489":{"body":0,"breadcrumbs":7,"title":2},"49":{"body":33,"breadcrumbs":5,"title":2},"490":{"body":21,"breadcrumbs":9,"title":4},"491":{"body":50,"breadcrumbs":11,"title":6},"492":{"body":166,"breadcrumbs":11,"title":6},"493":{"body":64,"breadcrumbs":8,"title":3},"494":{"body":60,"breadcrumbs":9,"title":4},"495":{"body":65,"breadcrumbs":8,"title":3},"496":{"body":0,"breadcrumbs":7,"title":2},"497":{"body":46,"breadcrumbs":7,"title":2},"498":{"body":26,"breadcrumbs":7,"title":2},"499":{"body":0,"breadcrumbs":9,"title":4},"5":{"body":34,"breadcrumbs":5,"title":4},"50":{"body":0,"breadcrumbs":5,"title":2},"500":{"body":57,"breadcrumbs":8,"title":3},"501":{"body":0,"breadcrumbs":6,"title":1},"502":{"body":51,"breadcrumbs":6,"title":1},"503":{"body":20,"breadcrumbs":7,"title":2},"504":{"body":25,"breadcrumbs":7,"title":2},"505":{"body":0,"breadcrumbs":8,"title":3},"506":{"body":200,"breadcrumbs":6,"title":1},"507":{"body":15,"breadcrumbs":9,"title":5},"508":{"body":28,"breadcrumbs":5,"title":1},"509":{"body":116,"breadcrumbs":6,"title":2},"51":{"body":22,"breadcrumbs":5,"title":2},"510":{"body":26,"breadcrumbs":6,"title":2},"511":{"body":12,"breadcrumbs":5,"title":1},"512":{"body":116,"breadcrumbs":6,"title":2},"513":{"body":0,"breadcrumbs":6,"title":2},"514":{"body":29,"breadcrumbs":6,"title":2},"515":{"body":19,"breadcrumbs":7,"title":3},"516":{"body":1009,"breadcrumbs":6,"title":2},"517":{"body":0,"breadcrumbs":15,"title":8},"518":{"body":4,"breadcrumbs":8,"title":1},"519":{"body":41,"breadcrumbs":8,"title":1},"52":{"body":26,"breadcrumbs":5,"title":2},"520":{"body":296,"breadcrumbs":9,"title":2},"521":{"body":29,"breadcrumbs":9,"title":2},"522":{"body":37,"breadcrumbs":12,"title":5},"523":{"body":0,"breadcrumbs":8,"title":1},"524":{"body":42,"breadcrumbs":8,"title":1},"525":{"body":27,"breadcrumbs":8,"title":1},"526":{"body":51,"breadcrumbs":9,"title":2},"527":{"body":0,"breadcrumbs":9,"title":2},"528":{"body":13,"breadcrumbs":14,"title":7},"529":{"body":14,"breadcrumbs":14,"title":7},"53":{"body":13,"breadcrumbs":5,"title":2},"530":{"body":11,"breadcrumbs":11,"title":4},"531":{"body":14,"breadcrumbs":12,"title":5},"532":{"body":0,"breadcrumbs":9,"title":2},"533":{"body":39,"breadcrumbs":9,"title":2},"534":{"body":231,"breadcrumbs":12,"title":5},"535":{"body":8,"breadcrumbs":5,"title":3},"536":{"body":24,"breadcrumbs":3,"title":1},"537":{"body":5,"breadcrumbs":4,"title":2},"538":{"body":0,"breadcrumbs":3,"title":1},"539":{"body":1510,"breadcrumbs":4,"title":2},"54":{"body":0,"breadcrumbs":5,"title":2},"540":{"body":15,"breadcrumbs":4,"title":3},"541":{"body":26,"breadcrumbs":2,"title":1},"542":{"body":0,"breadcrumbs":3,"title":2},"543":{"body":44,"breadcrumbs":4,"title":3},"544":{
"body":35,"breadcrumbs":4,"title":3},"545":{"body":0,"breadcrumbs":2,"title":1},"546":{"body":26,"breadcrumbs":4,"title":3},"547":{"body":28,"breadcrumbs":4,"title":3},"548":{"body":0,"breadcrumbs":4,"title":3},"549":{"body":252,"breadcrumbs":4,"title":3},"55":{"body":83,"breadcrumbs":5,"title":2},"550":{"body":18,"breadcrumbs":4,"title":3},"551":{"body":0,"breadcrumbs":5,"title":4},"552":{"body":178,"breadcrumbs":3,"title":2},"553":{"body":174,"breadcrumbs":5,"title":4},"554":{"body":0,"breadcrumbs":4,"title":3},"555":{"body":266,"breadcrumbs":4,"title":3},"556":{"body":0,"breadcrumbs":4,"title":3},"557":{"body":39,"breadcrumbs":4,"title":3},"558":{"body":16,"breadcrumbs":5,"title":4},"559":{"body":0,"breadcrumbs":4,"title":3},"56":{"body":43,"breadcrumbs":5,"title":2},"560":{"body":66,"breadcrumbs":3,"title":2},"561":{"body":54,"breadcrumbs":4,"title":3},"562":{"body":0,"breadcrumbs":3,"title":2},"563":{"body":22,"breadcrumbs":3,"title":2},"564":{"body":9,"breadcrumbs":2,"title":1},"565":{"body":20,"breadcrumbs":3,"title":2},"566":{"body":0,"breadcrumbs":3,"title":2},"567":{"body":16,"breadcrumbs":3,"title":2},"568":{"body":13,"breadcrumbs":3,"title":2},"569":{"body":30,"breadcrumbs":3,"title":2},"57":{"body":0,"breadcrumbs":5,"title":2},"570":{"body":13,"breadcrumbs":4,"title":3},"571":{"body":35,"breadcrumbs":2,"title":1},"572":{"body":0,"breadcrumbs":3,"title":2},"573":{"body":2308,"breadcrumbs":4,"title":3},"574":{"body":10,"breadcrumbs":3,"title":2},"575":{"body":7,"breadcrumbs":3,"title":2},"576":{"body":29,"breadcrumbs":3,"title":2},"577":{"body":16,"breadcrumbs":3,"title":2},"578":{"body":0,"breadcrumbs":3,"title":2},"579":{"body":14,"breadcrumbs":2,"title":1},"58":{"body":19,"breadcrumbs":6,"title":3},"580":{"body":64,"breadcrumbs":3,"title":2},"581":{"body":225,"breadcrumbs":3,"title":2},"582":{"body":113,"breadcrumbs":3,"title":2},"583":{"body":0,"breadcrumbs":3,"title":2},"584":{"body":12,"breadcrumbs":2,"title":1},"585":{"body":55,"breadcrumbs":3,"title":2},"586":{"body":186,"breadcrumbs":3,"title":2},"587":{"body":256,"breadcrumbs":4,"title":3},"588":{"body":78,"breadcrumbs":3,"title":2},"589":{"body":0,"breadcrumbs":3,"title":2},"59":{"body":39,"breadcrumbs":7,"title":4},"590":{"body":4,"breadcrumbs":2,"title":1},"591":{"body":103,"breadcrumbs":3,"title":2},"592":{"body":97,"breadcrumbs":3,"title":2},"593":{"body":135,"breadcrumbs":5,"title":4},"594":{"body":0,"breadcrumbs":3,"title":2},"595":{"body":11,"breadcrumbs":2,"title":1},"596":{"body":80,"breadcrumbs":3,"title":2},"597":{"body":76,"breadcrumbs":3,"title":2},"598":{"body":103,"breadcrumbs":3,"title":2},"599":{"body":0,"breadcrumbs":3,"title":2},"6":{"body":26,"breadcrumbs":3,"title":2},"60":{"body":42,"breadcrumbs":5,"title":2},"600":{"body":26,"breadcrumbs":3,"title":2},"601":{"body":26,"breadcrumbs":3,"title":2},"602":{"body":23,"breadcrumbs":3,"title":2},"603":{"body":23,"breadcrumbs":3,"title":2},"604":{"body":46,"breadcrumbs":2,"title":1},"605":{"body":17,"breadcrumbs":4,"title":2},"606":{"body":23,"breadcrumbs":3,"title":1},"607":{"body":0,"breadcrumbs":5,"title":3},"608":{"body":774,"breadcrumbs":4,"title":2},"609":{"body":877,"breadcrumbs":4,"title":2},"61":{"body":33,"breadcrumbs":6,"title":3},"610":{"body":0,"breadcrumbs":5,"title":3},"611":{"body":239,"breadcrumbs":5,"title":3},"612":{"body":108,"breadcrumbs":5,"title":3},"613":{"body":0,"breadcrumbs":4,"title":2},"614":{"body":231,"breadcrumbs":5,"title":3},"615":{"body":75,"breadcrumbs":5,"title":3},"616":{"body":0,"breadcrumbs":4,"title":2},"617":{"bod
y":63,"breadcrumbs":4,"title":2},"618":{"body":36,"breadcrumbs":4,"title":2},"619":{"body":0,"breadcrumbs":5,"title":3},"62":{"body":29,"breadcrumbs":7,"title":4},"620":{"body":165,"breadcrumbs":5,"title":3},"621":{"body":221,"breadcrumbs":5,"title":3},"622":{"body":6,"breadcrumbs":5,"title":3},"623":{"body":16,"breadcrumbs":3,"title":1},"624":{"body":12,"breadcrumbs":4,"title":2},"625":{"body":4,"breadcrumbs":4,"title":2},"626":{"body":296,"breadcrumbs":4,"title":2},"627":{"body":7,"breadcrumbs":5,"title":3},"628":{"body":10,"breadcrumbs":3,"title":1},"629":{"body":0,"breadcrumbs":4,"title":2},"63":{"body":40,"breadcrumbs":7,"title":4},"630":{"body":18,"breadcrumbs":4,"title":2},"631":{"body":18,"breadcrumbs":4,"title":2},"632":{"body":20,"breadcrumbs":5,"title":3},"633":{"body":18,"breadcrumbs":4,"title":2},"634":{"body":18,"breadcrumbs":4,"title":2},"635":{"body":0,"breadcrumbs":4,"title":2},"636":{"body":24,"breadcrumbs":4,"title":2},"637":{"body":23,"breadcrumbs":4,"title":2},"638":{"body":30,"breadcrumbs":4,"title":2},"639":{"body":27,"breadcrumbs":4,"title":2},"64":{"body":77,"breadcrumbs":5,"title":2},"640":{"body":6,"breadcrumbs":4,"title":2},"641":{"body":15,"breadcrumbs":4,"title":2},"642":{"body":15,"breadcrumbs":5,"title":3},"643":{"body":28,"breadcrumbs":3,"title":1},"644":{"body":1115,"breadcrumbs":5,"title":3},"645":{"body":28,"breadcrumbs":4,"title":2},"646":{"body":0,"breadcrumbs":4,"title":2},"647":{"body":26,"breadcrumbs":4,"title":2},"648":{"body":91,"breadcrumbs":3,"title":1},"649":{"body":13,"breadcrumbs":5,"title":3},"65":{"body":0,"breadcrumbs":5,"title":2},"650":{"body":21,"breadcrumbs":4,"title":2},"651":{"body":0,"breadcrumbs":4,"title":2},"652":{"body":29,"breadcrumbs":4,"title":2},"653":{"body":2788,"breadcrumbs":4,"title":2},"654":{"body":12,"breadcrumbs":7,"title":4},"655":{"body":17,"breadcrumbs":5,"title":2},"656":{"body":29,"breadcrumbs":4,"title":1},"657":{"body":0,"breadcrumbs":5,"title":2},"658":{"body":204,"breadcrumbs":6,"title":3},"659":{"body":121,"breadcrumbs":5,"title":2},"66":{"body":97,"breadcrumbs":6,"title":3},"660":{"body":0,"breadcrumbs":6,"title":3},"661":{"body":484,"breadcrumbs":7,"title":4},"662":{"body":308,"breadcrumbs":6,"title":3},"663":{"body":0,"breadcrumbs":6,"title":3},"664":{"body":323,"breadcrumbs":7,"title":4},"665":{"body":0,"breadcrumbs":6,"title":3},"666":{"body":328,"breadcrumbs":7,"title":4},"667":{"body":0,"breadcrumbs":5,"title":2},"668":{"body":430,"breadcrumbs":6,"title":3},"669":{"body":0,"breadcrumbs":6,"title":3},"67":{"body":58,"breadcrumbs":6,"title":3},"670":{"body":11,"breadcrumbs":8,"title":5},"671":{"body":9,"breadcrumbs":7,"title":4},"672":{"body":9,"breadcrumbs":7,"title":4},"673":{"body":110,"breadcrumbs":5,"title":2},"674":{"body":10,"breadcrumbs":9,"title":6},"675":{"body":12,"breadcrumbs":4,"title":1},"676":{"body":0,"breadcrumbs":7,"title":4},"677":{"body":6,"breadcrumbs":8,"title":5},"678":{"body":7,"breadcrumbs":8,"title":5},"679":{"body":44,"breadcrumbs":8,"title":5},"68":{"body":35,"breadcrumbs":5,"title":2},"680":{"body":127,"breadcrumbs":8,"title":5},"681":{"body":88,"breadcrumbs":9,"title":6},"682":{"body":30,"breadcrumbs":7,"title":4},"683":{"body":16,"breadcrumbs":8,"title":5},"684":{"body":0,"breadcrumbs":5,"title":2},"685":{"body":36,"breadcrumbs":6,"title":3},"686":{"body":31,"breadcrumbs":6,"title":3},"687":{"body":31,"breadcrumbs":7,"title":4},"688":{"body":0,"breadcrumbs":5,"title":2},"689":{"body":21,"breadcrumbs":6,"title":3},"69":{"body":38,"breadcrumbs":6,"title":3},"690":{"body":22,
"breadcrumbs":5,"title":2},"691":{"body":34,"breadcrumbs":6,"title":3},"692":{"body":57,"breadcrumbs":6,"title":3},"693":{"body":31,"breadcrumbs":5,"title":2},"694":{"body":0,"breadcrumbs":5,"title":2},"695":{"body":17,"breadcrumbs":5,"title":2},"696":{"body":11,"breadcrumbs":6,"title":3},"697":{"body":16,"breadcrumbs":5,"title":2},"698":{"body":28,"breadcrumbs":5,"title":2},"699":{"body":23,"breadcrumbs":5,"title":2},"7":{"body":29,"breadcrumbs":2,"title":1},"70":{"body":0,"breadcrumbs":6,"title":3},"700":{"body":16,"breadcrumbs":7,"title":4},"701":{"body":16,"breadcrumbs":4,"title":1},"702":{"body":42,"breadcrumbs":6,"title":3},"703":{"body":1486,"breadcrumbs":5,"title":2},"704":{"body":0,"breadcrumbs":2,"title":1},"705":{"body":13,"breadcrumbs":4,"title":3},"706":{"body":20,"breadcrumbs":3,"title":2},"707":{"body":63,"breadcrumbs":2,"title":1},"708":{"body":0,"breadcrumbs":3,"title":2},"709":{"body":2084,"breadcrumbs":4,"title":3},"71":{"body":35,"breadcrumbs":5,"title":2},"710":{"body":18,"breadcrumbs":3,"title":2},"711":{"body":22,"breadcrumbs":3,"title":2},"712":{"body":2603,"breadcrumbs":2,"title":1},"713":{"body":19,"breadcrumbs":5,"title":3},"714":{"body":15,"breadcrumbs":4,"title":2},"715":{"body":47,"breadcrumbs":3,"title":1},"716":{"body":39,"breadcrumbs":4,"title":2},"717":{"body":0,"breadcrumbs":4,"title":2},"718":{"body":40,"breadcrumbs":4,"title":2},"719":{"body":633,"breadcrumbs":4,"title":2},"72":{"body":43,"breadcrumbs":7,"title":4},"720":{"body":0,"breadcrumbs":4,"title":2},"721":{"body":352,"breadcrumbs":5,"title":3},"722":{"body":212,"breadcrumbs":4,"title":2},"723":{"body":65,"breadcrumbs":4,"title":2},"724":{"body":75,"breadcrumbs":4,"title":2},"725":{"body":0,"breadcrumbs":5,"title":3},"726":{"body":37,"breadcrumbs":4,"title":2},"727":{"body":67,"breadcrumbs":5,"title":3},"728":{"body":55,"breadcrumbs":5,"title":3},"729":{"body":0,"breadcrumbs":4,"title":2},"73":{"body":25,"breadcrumbs":6,"title":3},"730":{"body":40,"breadcrumbs":4,"title":2},"731":{"body":46,"breadcrumbs":4,"title":2},"732":{"body":32,"breadcrumbs":4,"title":2},"733":{"body":0,"breadcrumbs":3,"title":1},"734":{"body":118,"breadcrumbs":5,"title":3},"735":{"body":59,"breadcrumbs":5,"title":3},"736":{"body":36,"breadcrumbs":4,"title":2},"737":{"body":36,"breadcrumbs":4,"title":2},"738":{"body":0,"breadcrumbs":4,"title":2},"739":{"body":50,"breadcrumbs":4,"title":2},"74":{"body":22,"breadcrumbs":5,"title":2},"740":{"body":36,"breadcrumbs":4,"title":2},"741":{"body":38,"breadcrumbs":5,"title":3},"742":{"body":17,"breadcrumbs":4,"title":3},"743":{"body":17,"breadcrumbs":3,"title":2},"744":{"body":74,"breadcrumbs":2,"title":1},"745":{"body":0,"breadcrumbs":3,"title":2},"746":{"body":3363,"breadcrumbs":3,"title":2},"747":{"body":18,"breadcrumbs":5,"title":3},"748":{"body":19,"breadcrumbs":4,"title":2},"749":{"body":62,"breadcrumbs":3,"title":1},"75":{"body":27,"breadcrumbs":5,"title":2},"750":{"body":0,"breadcrumbs":4,"title":2},"751":{"body":1955,"breadcrumbs":4,"title":2},"752":{"body":13,"breadcrumbs":6,"title":4},"753":{"body":20,"breadcrumbs":3,"title":1},"754":{"body":0,"breadcrumbs":3,"title":1},"755":{"body":10,"breadcrumbs":4,"title":2},"756":{"body":8,"breadcrumbs":4,"title":2},"757":{"body":13,"breadcrumbs":4,"title":2},"758":{"body":0,"breadcrumbs":9,"title":7},"759":{"body":225,"breadcrumbs":6,"title":4},"76":{"body":0,"breadcrumbs":5,"title":2},"760":{"body":407,"breadcrumbs":6,"title":4},"761":{"body":318,"breadcrumbs":7,"title":5},"762":{"body":423,"breadcrumbs":6,"title":4},"763":{"body":0
,"breadcrumbs":10,"title":8},"764":{"body":106,"breadcrumbs":7,"title":5},"765":{"body":7,"breadcrumbs":9,"title":7},"766":{"body":0,"breadcrumbs":9,"title":7},"767":{"body":52,"breadcrumbs":6,"title":4},"768":{"body":0,"breadcrumbs":4,"title":2},"769":{"body":25,"breadcrumbs":5,"title":3},"77":{"body":23,"breadcrumbs":5,"title":2},"770":{"body":12,"breadcrumbs":5,"title":3},"771":{"body":16,"breadcrumbs":4,"title":2},"772":{"body":0,"breadcrumbs":3,"title":1},"773":{"body":19,"breadcrumbs":6,"title":4},"774":{"body":19,"breadcrumbs":6,"title":4},"775":{"body":15,"breadcrumbs":5,"title":3},"776":{"body":12,"breadcrumbs":6,"title":4},"777":{"body":8,"breadcrumbs":5,"title":3},"778":{"body":27,"breadcrumbs":3,"title":1},"779":{"body":14,"breadcrumbs":3,"title":1},"78":{"body":23,"breadcrumbs":5,"title":2},"780":{"body":0,"breadcrumbs":6,"title":3},"781":{"body":0,"breadcrumbs":6,"title":3},"782":{"body":0,"breadcrumbs":5,"title":2},"783":{"body":518,"breadcrumbs":7,"title":4},"784":{"body":18,"breadcrumbs":5,"title":3},"785":{"body":16,"breadcrumbs":4,"title":2},"786":{"body":41,"breadcrumbs":3,"title":1},"787":{"body":0,"breadcrumbs":6,"title":4},"788":{"body":967,"breadcrumbs":6,"title":4},"789":{"body":0,"breadcrumbs":7,"title":4},"79":{"body":18,"breadcrumbs":6,"title":3},"790":{"body":46,"breadcrumbs":4,"title":1},"791":{"body":0,"breadcrumbs":5,"title":2},"792":{"body":362,"breadcrumbs":7,"title":4},"793":{"body":0,"breadcrumbs":6,"title":3},"794":{"body":334,"breadcrumbs":7,"title":4},"795":{"body":0,"breadcrumbs":9,"title":5},"796":{"body":12,"breadcrumbs":5,"title":1},"797":{"body":68,"breadcrumbs":6,"title":2},"798":{"body":0,"breadcrumbs":6,"title":2},"799":{"body":18,"breadcrumbs":10,"title":6},"8":{"body":7,"breadcrumbs":2,"title":1},"80":{"body":8,"breadcrumbs":5,"title":2},"800":{"body":45,"breadcrumbs":7,"title":3},"801":{"body":0,"breadcrumbs":6,"title":2},"802":{"body":714,"breadcrumbs":9,"title":5},"803":{"body":11,"breadcrumbs":9,"title":6},"804":{"body":11,"breadcrumbs":5,"title":2},"805":{"body":50,"breadcrumbs":4,"title":1},"806":{"body":0,"breadcrumbs":4,"title":1},"807":{"body":1153,"breadcrumbs":5,"title":2},"808":{"body":0,"breadcrumbs":8,"title":6},"809":{"body":13,"breadcrumbs":3,"title":1},"81":{"body":0,"breadcrumbs":5,"title":2},"810":{"body":692,"breadcrumbs":4,"title":2},"811":{"body":8,"breadcrumbs":6,"title":4},"812":{"body":38,"breadcrumbs":3,"title":1},"813":{"body":0,"breadcrumbs":3,"title":1},"814":{"body":28,"breadcrumbs":3,"title":1},"815":{"body":18,"breadcrumbs":3,"title":1},"816":{"body":18,"breadcrumbs":3,"title":1},"817":{"body":0,"breadcrumbs":3,"title":1},"818":{"body":35,"breadcrumbs":5,"title":3},"819":{"body":31,"breadcrumbs":5,"title":3},"82":{"body":23,"breadcrumbs":7,"title":4},"820":{"body":0,"breadcrumbs":4,"title":2},"821":{"body":102,"breadcrumbs":4,"title":2},"822":{"body":282,"breadcrumbs":4,"title":2},"823":{"body":0,"breadcrumbs":4,"title":2},"824":{"body":25,"breadcrumbs":5,"title":3},"825":{"body":32,"breadcrumbs":4,"title":2},"826":{"body":0,"breadcrumbs":4,"title":2},"827":{"body":53,"breadcrumbs":4,"title":2},"828":{"body":0,"breadcrumbs":4,"title":2},"829":{"body":35,"breadcrumbs":4,"title":2},"83":{"body":11,"breadcrumbs":5,"title":2},"830":{"body":30,"breadcrumbs":4,"title":2},"831":{"body":27,"breadcrumbs":4,"title":2},"832":{"body":0,"breadcrumbs":4,"title":2},"833":{"body":49,"breadcrumbs":4,"title":2},"834":{"body":56,"breadcrumbs":4,"title":2},"835":{"body":0,"breadcrumbs":3,"title":1},"836":{"body":17,"breadcrumbs":5
,"title":3},"837":{"body":23,"breadcrumbs":5,"title":3},"838":{"body":12,"breadcrumbs":4,"title":2},"839":{"body":11,"breadcrumbs":3,"title":1},"84":{"body":12,"breadcrumbs":6,"title":3},"840":{"body":27,"breadcrumbs":3,"title":1},"841":{"body":76,"breadcrumbs":3,"title":1},"842":{"body":4,"breadcrumbs":3,"title":1},"843":{"body":37,"breadcrumbs":4,"title":2},"844":{"body":56,"breadcrumbs":4,"title":2},"845":{"body":33,"breadcrumbs":3,"title":1},"846":{"body":0,"breadcrumbs":4,"title":2},"847":{"body":26,"breadcrumbs":4,"title":3},"848":{"body":0,"breadcrumbs":1,"title":0},"849":{"body":44,"breadcrumbs":5,"title":4},"85":{"body":0,"breadcrumbs":5,"title":2},"850":{"body":29,"breadcrumbs":2,"title":1},"851":{"body":41,"breadcrumbs":3,"title":2},"852":{"body":35,"breadcrumbs":3,"title":2},"853":{"body":35,"breadcrumbs":3,"title":2},"854":{"body":29,"breadcrumbs":2,"title":1},"855":{"body":0,"breadcrumbs":2,"title":1},"856":{"body":39,"breadcrumbs":3,"title":2},"857":{"body":40,"breadcrumbs":3,"title":2},"858":{"body":0,"breadcrumbs":2,"title":1},"859":{"body":26,"breadcrumbs":2,"title":1},"86":{"body":38,"breadcrumbs":6,"title":3},"860":{"body":26,"breadcrumbs":2,"title":1},"861":{"body":40,"breadcrumbs":5,"title":4},"862":{"body":40,"breadcrumbs":2,"title":1},"863":{"body":28,"breadcrumbs":2,"title":1},"864":{"body":34,"breadcrumbs":3,"title":2},"865":{"body":30,"breadcrumbs":3,"title":2},"866":{"body":28,"breadcrumbs":2,"title":1},"867":{"body":27,"breadcrumbs":3,"title":2},"868":{"body":0,"breadcrumbs":2,"title":1},"869":{"body":28,"breadcrumbs":2,"title":1},"87":{"body":14,"breadcrumbs":6,"title":3},"870":{"body":26,"breadcrumbs":2,"title":1},"871":{"body":30,"breadcrumbs":3,"title":2},"872":{"body":0,"breadcrumbs":2,"title":1},"873":{"body":29,"breadcrumbs":2,"title":1},"874":{"body":30,"breadcrumbs":2,"title":1},"875":{"body":0,"breadcrumbs":2,"title":1},"876":{"body":33,"breadcrumbs":2,"title":1},"877":{"body":0,"breadcrumbs":2,"title":1},"878":{"body":35,"breadcrumbs":6,"title":5},"879":{"body":20,"breadcrumbs":2,"title":1},"88":{"body":12,"breadcrumbs":5,"title":2},"880":{"body":32,"breadcrumbs":2,"title":1},"881":{"body":0,"breadcrumbs":2,"title":1},"882":{"body":26,"breadcrumbs":3,"title":2},"883":{"body":30,"breadcrumbs":3,"title":2},"884":{"body":0,"breadcrumbs":1,"title":0},"885":{"body":34,"breadcrumbs":2,"title":1},"886":{"body":25,"breadcrumbs":2,"title":1},"887":{"body":32,"breadcrumbs":3,"title":2},"888":{"body":0,"breadcrumbs":2,"title":1},"889":{"body":24,"breadcrumbs":5,"title":4},"89":{"body":55,"breadcrumbs":7,"title":4},"890":{"body":0,"breadcrumbs":2,"title":1},"891":{"body":27,"breadcrumbs":5,"title":4},"892":{"body":28,"breadcrumbs":5,"title":4},"893":{"body":27,"breadcrumbs":2,"title":1},"894":{"body":0,"breadcrumbs":2,"title":1},"895":{"body":22,"breadcrumbs":2,"title":1},"896":{"body":0,"breadcrumbs":2,"title":1},"897":{"body":27,"breadcrumbs":5,"title":4},"898":{"body":37,"breadcrumbs":5,"title":4},"899":{"body":24,"breadcrumbs":2,"title":1},"9":{"body":16,"breadcrumbs":2,"title":1},"90":{"body":42,"breadcrumbs":7,"title":4},"900":{"body":38,"breadcrumbs":2,"title":1},"901":{"body":0,"breadcrumbs":2,"title":1},"902":{"body":26,"breadcrumbs":2,"title":1},"903":{"body":0,"breadcrumbs":2,"title":1},"904":{"body":22,"breadcrumbs":5,"title":4},"905":{"body":21,"breadcrumbs":2,"title":1},"906":{"body":33,"breadcrumbs":2,"title":1},"907":{"body":0,"breadcrumbs":2,"title":1},"908":{"body":22,"breadcrumbs":5,"title":4},"909":{"body":27,"breadcrumbs":3,"title":2},"91":{"
body":40,"breadcrumbs":5,"title":2},"910":{"body":34,"breadcrumbs":2,"title":1},"911":{"body":40,"breadcrumbs":2,"title":1},"912":{"body":0,"breadcrumbs":2,"title":1},"913":{"body":29,"breadcrumbs":3,"title":2},"914":{"body":0,"breadcrumbs":2,"title":1},"915":{"body":28,"breadcrumbs":6,"title":5},"916":{"body":23,"breadcrumbs":2,"title":1},"917":{"body":27,"breadcrumbs":3,"title":2},"918":{"body":26,"breadcrumbs":2,"title":1},"919":{"body":23,"breadcrumbs":2,"title":1},"92":{"body":7,"breadcrumbs":9,"title":6},"920":{"body":0,"breadcrumbs":2,"title":1},"921":{"body":36,"breadcrumbs":2,"title":1},"922":{"body":22,"breadcrumbs":3,"title":2},"923":{"body":32,"breadcrumbs":3,"title":2},"924":{"body":33,"breadcrumbs":2,"title":1},"925":{"body":24,"breadcrumbs":2,"title":1},"926":{"body":39,"breadcrumbs":2,"title":1},"927":{"body":26,"breadcrumbs":4,"title":3},"928":{"body":37,"breadcrumbs":4,"title":3},"929":{"body":18,"breadcrumbs":3,"title":2},"93":{"body":16,"breadcrumbs":9,"title":6},"930":{"body":0,"breadcrumbs":2,"title":1},"931":{"body":18,"breadcrumbs":2,"title":1},"932":{"body":38,"breadcrumbs":2,"title":1},"933":{"body":21,"breadcrumbs":2,"title":1},"934":{"body":39,"breadcrumbs":3,"title":2},"935":{"body":29,"breadcrumbs":2,"title":1},"936":{"body":30,"breadcrumbs":7,"title":6},"937":{"body":21,"breadcrumbs":2,"title":1},"938":{"body":0,"breadcrumbs":2,"title":1},"939":{"body":23,"breadcrumbs":4,"title":3},"94":{"body":18,"breadcrumbs":9,"title":6},"940":{"body":30,"breadcrumbs":2,"title":1},"941":{"body":0,"breadcrumbs":2,"title":1},"942":{"body":29,"breadcrumbs":2,"title":1},"943":{"body":28,"breadcrumbs":2,"title":1},"944":{"body":0,"breadcrumbs":2,"title":1},"945":{"body":28,"breadcrumbs":2,"title":1},"946":{"body":38,"breadcrumbs":2,"title":1},"947":{"body":37,"breadcrumbs":2,"title":1},"948":{"body":0,"breadcrumbs":3,"title":2},"949":{"body":21,"breadcrumbs":2,"title":1},"95":{"body":20,"breadcrumbs":9,"title":6},"950":{"body":99,"breadcrumbs":4,"title":3},"951":{"body":0,"breadcrumbs":4,"title":3},"952":{"body":78,"breadcrumbs":3,"title":2},"953":{"body":26,"breadcrumbs":3,"title":2},"954":{"body":0,"breadcrumbs":3,"title":2},"955":{"body":42,"breadcrumbs":3,"title":2},"956":{"body":28,"breadcrumbs":3,"title":2},"957":{"body":0,"breadcrumbs":3,"title":2},"958":{"body":31,"breadcrumbs":4,"title":3},"959":{"body":19,"breadcrumbs":4,"title":3},"96":{"body":11,"breadcrumbs":10,"title":7},"960":{"body":27,"breadcrumbs":3,"title":2},"961":{"body":16,"breadcrumbs":6,"title":3},"962":{"body":19,"breadcrumbs":5,"title":2},"963":{"body":32,"breadcrumbs":4,"title":1},"964":{"body":0,"breadcrumbs":6,"title":3},"965":{"body":8,"breadcrumbs":4,"title":1},"966":{"body":961,"breadcrumbs":4,"title":1},"967":{"body":33,"breadcrumbs":4,"title":1},"968":{"body":29,"breadcrumbs":4,"title":1},"969":{"body":32,"breadcrumbs":4,"title":1},"97":{"body":26,"breadcrumbs":10,"title":7},"970":{"body":0,"breadcrumbs":5,"title":2},"971":{"body":289,"breadcrumbs":6,"title":3},"972":{"body":0,"breadcrumbs":5,"title":3},"973":{"body":0,"breadcrumbs":6,"title":4},"974":{"body":1,"breadcrumbs":4,"title":2},"975":{"body":7,"breadcrumbs":4,"title":2},"976":{"body":6,"breadcrumbs":5,"title":3},"977":{"body":7,"breadcrumbs":4,"title":2},"978":{"body":2,"breadcrumbs":4,"title":2},"979":{"body":6,"breadcrumbs":4,"title":2},"98":{"body":18,"breadcrumbs":6,"title":3},"980":{"body":6,"breadcrumbs":4,"title":2},"981":{"body":1,"breadcrumbs":4,"title":2},"982":{"body":15,"breadcrumbs":5,"title":3},"983":{"body":21,"breadcrum
bs":5,"title":3},"984":{"body":87,"breadcrumbs":3,"title":1},"985":{"body":0,"breadcrumbs":3,"title":1},"986":{"body":833,"breadcrumbs":5,"title":3},"987":{"body":20,"breadcrumbs":7,"title":5},"988":{"body":12,"breadcrumbs":3,"title":1},"989":{"body":297,"breadcrumbs":4,"title":2},"99":{"body":74,"breadcrumbs":7,"title":4},"990":{"body":46,"breadcrumbs":8,"title":4},"991":{"body":45,"breadcrumbs":5,"title":1},"992":{"body":0,"breadcrumbs":6,"title":2},"993":{"body":44,"breadcrumbs":10,"title":6},"994":{"body":9,"breadcrumbs":8,"title":4},"995":{"body":13,"breadcrumbs":7,"title":3},"996":{"body":19,"breadcrumbs":9,"title":5},"997":{"body":0,"breadcrumbs":7,"title":3},"998":{"body":34,"breadcrumbs":9,"title":5},"999":{"body":29,"breadcrumbs":9,"title":5}},"docs":{"0":{"body":"Last Updated : 2025-01-02 (Phase 3.A Cleanup Complete) Status : ✅ Primary documentation source (145 files consolidated) Welcome to the comprehensive documentation for the Provisioning Platform - a modern, cloud-native infrastructure automation system built with Nushell, KCL, and Rust. Note : Architecture Decision Records (ADRs) and high-level design documentation are in docs/ directory. This location contains all user-facing, operational, and product documentation.","breadcrumbs":"Home » Provisioning Platform Documentation","id":"0","title":"Provisioning Platform Documentation"},"1":{"body":"","breadcrumbs":"Home » Quick Navigation","id":"1","title":"Quick Navigation"},"10":{"body":"Document Description Workspace Config Architecture Configuration architecture","breadcrumbs":"Home » 🔐 Configuration","id":"10","title":"🔐 Configuration"},"100":{"body":"Setup wizard won\'t start # Check Nushell\\nnu --version # Check permissions\\nchmod +x $(which provisioning) Configuration error # Validate configuration\\nprovisioning setup validate --verbose # Check paths\\nprovisioning info paths Deployment fails # Dry-run to see what would happen\\nprovisioning server create --check # Check platform status\\nprovisioning platform status","breadcrumbs":"Setup Quick Start » Troubleshooting Quick Fixes","id":"100","title":"Troubleshooting Quick Fixes"},"1000":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Structure","id":"1000","title":"Configuration Structure"},"1001":{"body":"All configuration lives in one Nickel file with three sections: # workspace_librecloud/config/config.ncl\\n{ # SECTION 1: Workspace metadata workspace = { name = \\"librecloud\\", path = \\"/Users/Akasha/project-provisioning/workspace_librecloud\\", description = \\"Production workspace\\" }, # SECTION 2: Cloud providers providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", api_password = \\"{{kms.decrypt(\'upcloud_pass\')}}\\" }, aws = { enabled = false }, local = { enabled = true } }, # SECTION 3: Platform services platform = { orchestrator = { enabled = true, server = { host = \\"127.0.0.1\\", port = 9090 }, storage = { type = \\"filesystem\\" } }, kms = { enabled = true, backend = \\"rustyvault\\", url = \\"http://localhost:8200\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Single File, Three Sections","id":"1001","title":"Single File, Three Sections"},"1002":{"body":"Section Purpose Used By workspace Workspace metadata and paths Config loader, providers providers.upcloud UpCloud provider settings UpCloud provisioning providers.aws AWS provider settings AWS provisioning providers.local Local VM provider settings Local VM provisioning Core Platform Services platform.orchestrator Orchestrator 
service config Orchestrator REST API platform.control_center Control center service config Control center REST API platform.mcp_server MCP server service config Model Context Protocol integration platform.installer Installer service config Infrastructure provisioning Security & Secrets platform.vault_service Vault service config Secrets management and encryption Extensions & Registry platform.extension_registry Extension registry config Extension distribution via Gitea/OCI AI & Intelligence platform.rag RAG system config Retrieval-Augmented Generation platform.ai_service AI service config AI model integration and DAG workflows Operations & Daemon platform.provisioning_daemon Provisioning daemon config Background provisioning operations","breadcrumbs":"TypeDialog Platform Config Guide » Available Configuration Sections","id":"1002","title":"Available Configuration Sections"},"1003":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Service-Specific Configuration","id":"1003","title":"Service-Specific Configuration"},"1004":{"body":"Purpose : Coordinate infrastructure operations, manage workflows, handle batch operations Key Settings : server : HTTP server configuration (host, port, workers) storage : Task queue storage (filesystem or SurrealDB) queue : Task processing (concurrency, retries, timeouts) batch : Batch operation settings (parallelism, timeouts) monitoring : Health checks and metrics collection rollback : Checkpoint and recovery strategy logging : Log level and format Example : platform = { orchestrator = { enabled = true, server = { host = \\"127.0.0.1\\", port = 9090, workers = 4, keep_alive = 75, max_connections = 1000 }, storage = { type = \\"filesystem\\", backend_path = \\"{{workspace.path}}/.orchestrator/data/queue.rkvs\\" }, queue = { max_concurrent_tasks = 5, retry_attempts = 3, retry_delay_seconds = 5, task_timeout_minutes = 60 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Orchestrator Service","id":"1004","title":"Orchestrator Service"},"1005":{"body":"Purpose : Cryptographic key management, secret encryption/decryption Key Settings : backend : KMS backend (rustyvault, age, aws, vault, cosmian) url : Backend URL or connection string credentials : Authentication if required Example : platform = { kms = { enabled = true, backend = \\"rustyvault\\", url = \\"http://localhost:8200\\" }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » KMS Service","id":"1005","title":"KMS Service"},"1006":{"body":"Purpose : Centralized monitoring and control interface Key Settings : server : HTTP server configuration database : Backend database connection jwt : JWT authentication settings security : CORS and security policies Example : platform = { control_center = { enabled = true, server = { host = \\"127.0.0.1\\", port = 8080 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Control Center Service","id":"1006","title":"Control Center Service"},"1007":{"body":"All platform services support four deployment modes, each with different resource allocation and feature sets: Mode Resources Use Case Storage TLS solo Minimal (2 workers) Development, testing Embedded/filesystem No multiuser Moderate (4 workers) Team environments Shared databases Optional cicd High throughput (8+ workers) CI/CD pipelines Ephemeral/memory No enterprise High availability (16+ workers) Production Clustered/distributed Yes Mode-based Configuration Loading : # Load a specific mode\'s configuration\\nexport VAULT_MODE=enterprise\\nexport REGISTRY_MODE=multiuser\\nexport RAG_MODE=cicd 
# Services automatically resolve to correct TOML files:\\n# Generated from: provisioning/schemas/platform/\\n# - vault-service.enterprise.toml (generated from vault-service.ncl)\\n# - extension-registry.multiuser.toml (generated from extension-registry.ncl)\\n# - rag.cicd.toml (generated from rag.ncl)","breadcrumbs":"TypeDialog Platform Config Guide » Deployment Modes","id":"1007","title":"Deployment Modes"},"1008":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » New Platform Services (Phase 13-19)","id":"1008","title":"New Platform Services (Phase 13-19)"},"1009":{"body":"Purpose : Secrets management, encryption, and cryptographic key storage Key Settings : server : HTTP server configuration (host, port, workers) storage : Backend storage (filesystem, memory, surrealdb, etcd, postgresql) vault : Vault mounting and key management ha : High availability clustering security : TLS, certificate validation logging : Log level and audit trails Mode Characteristics : solo : Filesystem storage, no TLS, embedded mode multiuser : SurrealDB backend, shared storage, TLS optional cicd : In-memory ephemeral storage, no persistence enterprise : Etcd HA, TLS required, audit logging enabled Environment Variable Overrides : VAULT_CONFIG=/path/to/vault.toml # Explicit config path\\nVAULT_MODE=enterprise # Mode-specific config\\nVAULT_SERVER_URL=http://localhost:8200 # Server URL\\nVAULT_STORAGE_BACKEND=etcd # Storage backend\\nVAULT_AUTH_TOKEN=s.xxxxxxxx # Authentication token\\nVAULT_TLS_VERIFY=true # TLS verification Example Configuration : platform = { vault_service = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8200, workers = 8 }, storage = { backend = \\"surrealdb\\", url = \\"http://surrealdb:8000\\", namespace = \\"vault\\", database = \\"secrets\\" }, vault = { mount_point = \\"transit\\", key_name = \\"provisioning-master\\" }, ha = { enabled = true } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Vault Service","id":"1009","title":"Vault Service"},"101":{"body":"After basic setup: Configure Provider : Add cloud provider credentials Create More Workspaces : Dev, staging, production Deploy Services : Web servers, databases, etc. 
Set Up Monitoring : Health checks, logging Automate Deployments : CI/CD integration","breadcrumbs":"Setup Quick Start » What\'s Next?","id":"101","title":"What\'s Next?"},"1010":{"body":"Purpose : Extension distribution and management via Gitea and OCI registries Key Settings : server : HTTP server configuration (host, port, workers) gitea : Gitea integration for extension source repository oci : OCI registry for artifact distribution cache : Metadata and list caching auth : Registry authentication Mode Characteristics : solo : Gitea only, minimal cache, CORS disabled multiuser : Gitea + OCI, both enabled, CORS enabled cicd : OCI only (high-throughput mode), ephemeral cache enterprise : Both Gitea + OCI, TLS verification, large cache Environment Variable Overrides : REGISTRY_CONFIG=/path/to/registry.toml # Explicit config path\\nREGISTRY_MODE=multiuser # Mode-specific config\\nREGISTRY_SERVER_HOST=0.0.0.0 # Server host\\nREGISTRY_SERVER_PORT=8081 # Server port\\nREGISTRY_SERVER_WORKERS=4 # Worker count\\nREGISTRY_GITEA_URL=http://gitea:3000 # Gitea URL\\nREGISTRY_GITEA_ORG=provisioning # Gitea organization\\nREGISTRY_OCI_REGISTRY=registry.local:5000 # OCI registry\\nREGISTRY_OCI_NAMESPACE=provisioning # OCI namespace Example Configuration : platform = { extension_registry = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8081, workers = 4 }, gitea = { enabled = true, url = \\"http://gitea:3000\\", org = \\"provisioning\\" }, oci = { enabled = true, registry = \\"registry.local:5000\\", namespace = \\"provisioning\\" }, cache = { capacity = 1000, ttl = 300 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Extension Registry Service","id":"1010","title":"Extension Registry Service"},"1011":{"body":"Purpose : Document retrieval, semantic search, and AI-augmented responses Key Settings : embeddings : Embedding model provider (openai, local, anthropic) vector_db : Vector database backend (memory, surrealdb, qdrant, milvus) llm : Language model provider (anthropic, openai, ollama) retrieval : Search strategy and parameters ingestion : Document processing and indexing Mode Characteristics : solo : Local embeddings, in-memory vector DB, Ollama LLM multiuser : OpenAI embeddings, SurrealDB vector DB, Anthropic LLM cicd : RAG completely disabled (not applicable for ephemeral pipelines) enterprise : Large embeddings (3072-dim), distributed vector DB, Claude Opus Environment Variable Overrides : RAG_CONFIG=/path/to/rag.toml # Explicit config path\\nRAG_MODE=multiuser # Mode-specific config\\nRAG_ENABLED=true # Enable/disable RAG\\nRAG_EMBEDDINGS_PROVIDER=openai # Embedding provider\\nRAG_EMBEDDINGS_API_KEY=sk-xxx # Embedding API key\\nRAG_VECTOR_DB_URL=http://surrealdb:8000 # Vector DB URL\\nRAG_LLM_PROVIDER=anthropic # LLM provider\\nRAG_LLM_API_KEY=sk-ant-xxx # LLM API key\\nRAG_VECTOR_DB_TYPE=surrealdb # Vector DB type Example Configuration : platform = { rag = { enabled = true, embeddings = { provider = \\"openai\\", model = \\"text-embedding-3-small\\", api_key = \\"{{env.OPENAI_API_KEY}}\\" }, vector_db = { db_type = \\"surrealdb\\", url = \\"http://surrealdb:8000\\", namespace = \\"rag_prod\\" }, llm = { provider = \\"anthropic\\", model = \\"claude-opus-4-5-20251101\\", api_key = \\"{{env.ANTHROPIC_API_KEY}}\\" }, retrieval = { top_k = 10, similarity_threshold = 0.75 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » RAG (Retrieval-Augmented Generation) Service","id":"1011","title":"RAG (Retrieval-Augmented Generation) Service"},"1012":{"body":"Purpose : AI model 
integration with RAG and MCP support for multi-step workflows Key Settings : server : HTTP server configuration rag : RAG system integration mcp : Model Context Protocol integration dag : Directed acyclic graph task orchestration Mode Characteristics : solo : RAG enabled, no MCP, minimal concurrency (3 tasks) multiuser : Both RAG and MCP enabled, moderate concurrency (10 tasks) cicd : RAG disabled, MCP enabled, high concurrency (20 tasks) enterprise : Both enabled, max concurrency (50 tasks), full monitoring Environment Variable Overrides : AI_SERVICE_CONFIG=/path/to/ai.toml # Explicit config path\\nAI_SERVICE_MODE=enterprise # Mode-specific config\\nAI_SERVICE_SERVER_PORT=8082 # Server port\\nAI_SERVICE_SERVER_WORKERS=16 # Worker count\\nAI_SERVICE_RAG_ENABLED=true # Enable RAG integration\\nAI_SERVICE_MCP_ENABLED=true # Enable MCP integration\\nAI_SERVICE_DAG_MAX_CONCURRENT_TASKS=50 # Max concurrent tasks Example Configuration : platform = { ai_service = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8082, workers = 8 }, rag = { enabled = true, rag_service_url = \\"http://rag:8083\\", timeout = 60000 }, mcp = { enabled = true, mcp_service_url = \\"http://mcp-server:8084\\", timeout = 60000 }, dag = { max_concurrent_tasks = 20, task_timeout = 600000, retry_attempts = 5 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » AI Service","id":"1012","title":"AI Service"},"1013":{"body":"Purpose : Background service for provisioning operations, workspace management, and health monitoring Key Settings : daemon : Daemon control (poll interval, max workers) logging : Log level and output configuration actions : Automated actions (cleanup, updates, sync) workers : Worker pool configuration health : Health check settings Mode Characteristics : solo : Minimal polling, no auto-cleanup, debug logging multiuser : Standard polling, workspace sync enabled, info logging cicd : Frequent polling, ephemeral cleanup, warning logging enterprise : Standard polling, full automation, all features enabled Environment Variable Overrides : DAEMON_CONFIG=/path/to/daemon.toml # Explicit config path\\nDAEMON_MODE=enterprise # Mode-specific config\\nDAEMON_POLL_INTERVAL=30 # Polling interval (seconds)\\nDAEMON_MAX_WORKERS=16 # Maximum worker threads\\nDAEMON_LOGGING_LEVEL=info # Log level (debug/info/warn/error)\\nDAEMON_AUTO_CLEANUP=true # Enable auto cleanup\\nDAEMON_AUTO_UPDATE=true # Enable auto updates Example Configuration : platform = { provisioning_daemon = { enabled = true, daemon = { poll_interval = 30, max_workers = 8 }, logging = { level = \\"info\\", file = \\"/var/log/provisioning/daemon.log\\" }, actions = { auto_cleanup = true, auto_update = false, workspace_sync = true } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Provisioning Daemon","id":"1013","title":"Provisioning Daemon"},"1014":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Using TypeDialog Forms","id":"1014","title":"Using TypeDialog Forms"},"1015":{"body":"Interactive Prompts : Answer questions one at a time Validation : Inputs are validated as you type Defaults : Each field shows a sensible default Skip Optional : Press Enter to use default or skip optional fields Review : Preview generated Nickel before saving","breadcrumbs":"TypeDialog Platform Config Guide » Form Navigation","id":"1015","title":"Form Navigation"},"1016":{"body":"Type Example Notes text \\"127.0.0.1\\" Free-form text input confirm true/false Yes/no answer select \\"filesystem\\" Choose from list custom(u16) 9090 Number input 
custom(u32) 1000 Larger number","breadcrumbs":"TypeDialog Platform Config Guide » Field Types","id":"1016","title":"Field Types"},"1017":{"body":"Environment Variables : api_user = \\"{{env.UPCLOUD_USER}}\\"\\napi_password = \\"{{env.UPCLOUD_PASSWORD}}\\" Workspace Paths : data_dir = \\"{{workspace.path}}/.orchestrator/data\\"\\nlogs_dir = \\"{{workspace.path}}/.orchestrator/logs\\" KMS Decryption : api_password = \\"{{kms.decrypt(\'upcloud_pass\')}}\\"","breadcrumbs":"TypeDialog Platform Config Guide » Special Values","id":"1017","title":"Special Values"},"1018":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Validation & Export","id":"1018","title":"Validation & Export"},"1019":{"body":"# Check Nickel syntax\\nnickel typecheck workspace_librecloud/config/config.ncl # Detailed validation with error messages\\nnickel typecheck workspace_librecloud/config/config.ncl 2>&1 # Schema validation happens during export\\nprovisioning config export","breadcrumbs":"TypeDialog Platform Config Guide » Validating Configuration","id":"1019","title":"Validating Configuration"},"102":{"body":"# Get help\\nprovisioning help # Setup help\\nprovisioning help setup # Specific command help\\nprovisioning --help # View documentation\\nprovisioning guide system-setup","breadcrumbs":"Setup Quick Start » Need Help?","id":"102","title":"Need Help?"},"1020":{"body":"# One-time export\\nprovisioning config export # Export creates (pre-configured TOML for all services):\\nworkspace_librecloud/config/generated/\\n├── workspace.toml # Workspace metadata\\n├── providers/\\n│ ├── upcloud.toml # UpCloud provider\\n│ └── local.toml # Local provider\\n└── platform/ ├── orchestrator.toml # Orchestrator service ├── control_center.toml # Control center service ├── mcp_server.toml # MCP server service ├── installer.toml # Installer service ├── kms.toml # KMS service ├── vault_service.toml # Vault service (new) ├── extension_registry.toml # Extension registry (new) ├── rag.toml # RAG service (new) ├── ai_service.toml # AI service (new) └── provisioning_daemon.toml # Daemon service (new) # Public Nickel Schemas (20 total for 5 new services):\\nprovisioning/schemas/platform/\\n├── schemas/\\n│ ├── vault-service.ncl\\n│ ├── extension-registry.ncl\\n│ ├── rag.ncl\\n│ ├── ai-service.ncl\\n│ └── provisioning-daemon.ncl\\n├── defaults/\\n│ ├── vault-service-defaults.ncl\\n│ ├── extension-registry-defaults.ncl\\n│ ├── rag-defaults.ncl\\n│ ├── ai-service-defaults.ncl\\n│ ├── provisioning-daemon-defaults.ncl\\n│ └── deployment/\\n│ ├── solo-defaults.ncl\\n│ ├── multiuser-defaults.ncl\\n│ ├── cicd-defaults.ncl\\n│ └── enterprise-defaults.ncl\\n├── validators/\\n├── templates/\\n├── constraints/\\n└── values/ Using Pre-Generated Configurations : All 5 new services come with pre-built TOML configs for each deployment mode: # View available schemas for vault service\\nls -la provisioning/schemas/platform/schemas/vault-service.ncl\\nls -la provisioning/schemas/platform/defaults/vault-service-defaults.ncl # Load enterprise mode\\nexport VAULT_MODE=enterprise\\ncargo run -p vault-service # Or load multiuser mode\\nexport REGISTRY_MODE=multiuser\\ncargo run -p extension-registry # All 5 services support mode-based loading\\nexport RAG_MODE=cicd\\nexport AI_SERVICE_MODE=enterprise\\nexport DAEMON_MODE=multiuser","breadcrumbs":"TypeDialog Platform Config Guide » Exporting to Service Formats","id":"1020","title":"Exporting to Service Formats"},"1021":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Updating 
Configuration","id":"1021","title":"Updating Configuration"},"1022":{"body":"Edit source config : vim workspace_librecloud/config/config.ncl Validate changes : nickel typecheck workspace_librecloud/config/config.ncl Re-export to TOML : provisioning config export Restart affected service (if needed): provisioning restart orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » Change a Setting","id":"1022","title":"Change a Setting"},"1023":{"body":"If you prefer interactive updating: # Re-run TypeDialog form (overwrites config.ncl)\\nprovisioning config platform orchestrator # Or edit via TypeDialog with existing values\\ntypedialog form .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Using TypeDialog to Update","id":"1023","title":"Using TypeDialog to Update"},"1024":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Troubleshooting","id":"1024","title":"Troubleshooting"},"1025":{"body":"Problem : Failed to parse config file Solution : Check form.toml syntax and verify required fields are present (name, description, locales_path, templates_path) head -10 .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Form Won\'t Load","id":"1025","title":"Form Won\'t Load"},"1026":{"body":"Problem : Nickel configuration validation failed Solution : Check for syntax errors and correct field names nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less Common issues: Missing closing braces, incorrect field names, wrong data types","breadcrumbs":"TypeDialog Platform Config Guide » Validation Fails","id":"1026","title":"Validation Fails"},"1027":{"body":"Problem : Generated TOML files are empty Solution : Verify config.ncl exports to JSON and check all required sections exist nickel export --format json workspace_librecloud/config/config.ncl | head -20","breadcrumbs":"TypeDialog Platform Config Guide » Export Creates Empty Files","id":"1027","title":"Export Creates Empty Files"},"1028":{"body":"Problem : Changes don\'t take effect Solution : Verify export succeeded: ls -lah workspace_librecloud/config/generated/platform/ Check service path: provisioning start orchestrator --check Restart service: provisioning restart orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » Services Don\'t Use New Config","id":"1028","title":"Services Don\'t Use New Config"},"1029":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Examples","id":"1029","title":"Configuration Examples"},"103":{"body":"Your configuration is in: macOS : ~/Library/Application Support/provisioning/ Linux : ~/.config/provisioning/ Important files: system.toml - System configuration user_preferences.toml - User settings workspaces/*/ - Workspace definitions Ready to dive deeper? 
Check out the Full Setup Guide","breadcrumbs":"Setup Quick Start » Key Files","id":"103","title":"Key Files"},"1030":{"body":"{ workspace = { name = \\"dev\\", path = \\"/Users/dev/workspace\\", description = \\"Development workspace\\" }, providers = { local = { enabled = true, base_path = \\"/opt/vms\\" }, upcloud = { enabled = false }, aws = { enabled = false } }, platform = { orchestrator = { enabled = true, server = { host = \\"127.0.0.1\\", port = 9090 }, storage = { type = \\"filesystem\\" }, logging = { level = \\"debug\\", format = \\"json\\" } }, kms = { enabled = true, backend = \\"age\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Development Setup","id":"1030","title":"Development Setup"},"1031":{"body":"{ workspace = { name = \\"prod\\", path = \\"/opt/provisioning/prod\\", description = \\"Production workspace\\" }, providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", api_password = \\"{{kms.decrypt(\'upcloud_prod\')}}\\", default_zone = \\"de-fra1\\" }, aws = { enabled = false }, local = { enabled = false } }, platform = { orchestrator = { enabled = true, server = { host = \\"0.0.0.0\\", port = 9090, workers = 8 }, storage = { type = \\"surrealdb-server\\", url = \\"ws://surreal.internal:8000\\" }, monitoring = { enabled = true, metrics_interval_seconds = 30 }, logging = { level = \\"info\\", format = \\"json\\" } }, kms = { enabled = true, backend = \\"vault\\", url = \\"https://vault.internal:8200\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Production Setup","id":"1031","title":"Production Setup"},"1032":{"body":"{ workspace = { name = \\"multi\\", path = \\"/opt/multi\\", description = \\"Multi-cloud workspace\\" }, providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", default_zone = \\"de-fra1\\", zones = [\\"de-fra1\\", \\"us-nyc1\\", \\"nl-ams1\\"] }, aws = { enabled = true, access_key = \\"{{env.AWS_ACCESS_KEY_ID}}\\" }, local = { enabled = true, base_path = \\"/opt/local-vms\\" } }, platform = { orchestrator = { enabled = true, multi_workspace = false, storage = { type = \\"filesystem\\" } }, kms = { enabled = true, backend = \\"rustyvault\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Multi-Provider Setup","id":"1032","title":"Multi-Provider Setup"},"1033":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Best Practices","id":"1033","title":"Best Practices"},"1034":{"body":"Start with TypeDialog forms for the best experience: provisioning config platform orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » 1. Use TypeDialog for Initial Setup","id":"1034","title":"1. Use TypeDialog for Initial Setup"},"1035":{"body":"Only edit the source .ncl file, not the generated TOML files. Correct : vim workspace_librecloud/config/config.ncl Wrong : vim workspace_librecloud/config/generated/platform/orchestrator.toml","breadcrumbs":"TypeDialog Platform Config Guide » 2. Never Edit Generated Files","id":"1035","title":"2. Never Edit Generated Files"},"1036":{"body":"Always validate before deploying changes: nickel typecheck workspace_librecloud/config/config.ncl\\nprovisioning config export","breadcrumbs":"TypeDialog Platform Config Guide » 3. Validate Before Deploy","id":"1036","title":"3. Validate Before Deploy"},"1037":{"body":"Never hardcode credentials in config. 
Reference environment variables or KMS: Wrong : api_password = \\"my-password\\" Correct : api_password = \\"{{env.UPCLOUD_PASSWORD}}\\" Better : api_password = \\"{{kms.decrypt(\'upcloud_key\')}}\\"","breadcrumbs":"TypeDialog Platform Config Guide » 4. Use Environment Variables for Secrets","id":"1037","title":"4. Use Environment Variables for Secrets"},"1038":{"body":"Add comments explaining custom settings in the Nickel file.","breadcrumbs":"TypeDialog Platform Config Guide » 5. Document Changes","id":"1038","title":"5. Document Changes"},"1039":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Related Documentation","id":"1039","title":"Related Documentation"},"104":{"body":"Version : 1.0.0 Last Updated : 2025-12-09 Status : Production Ready","breadcrumbs":"Setup System Guide » Provisioning Setup System Guide","id":"104","title":"Provisioning Setup System Guide"},"1040":{"body":"Configuration System : See CLAUDE.md#configuration-file-format-selection Migration Guide : See provisioning/config/README.md#migration-strategy Schema Reference : See provisioning/schemas/ Nickel Language : See ADR-011 in docs/architecture/adr/","breadcrumbs":"TypeDialog Platform Config Guide » Core Resources","id":"1040","title":"Core Resources"},"1041":{"body":"Platform Services Overview : See provisioning/platform/*/README.md Core Services (Phases 8-12): orchestrator, control-center, mcp-server New Services (Phases 13-19): vault-service: Secrets management and encryption extension-registry: Extension distribution via Gitea/OCI rag: Retrieval-Augmented Generation system ai-service: AI model integration with DAG workflows provisioning-daemon: Background provisioning operations Note : Installer is a distribution tool (provisioning/tools/distribution/create-installer.nu), not a platform service configurable via TypeDialog.","breadcrumbs":"TypeDialog Platform Config Guide » Platform Services","id":"1041","title":"Platform Services"},"1042":{"body":"TypeDialog Forms (Interactive UI): provisioning/.typedialog/platform/forms/ Nickel Schemas (Type Definitions): provisioning/schemas/platform/schemas/ Default Values (Base Configuration): provisioning/schemas/platform/defaults/ Validators (Business Logic): provisioning/schemas/platform/validators/ Deployment Modes (Presets): provisioning/schemas/platform/defaults/deployment/ Rust Integration : provisioning/platform/crates/*/src/config.rs","breadcrumbs":"TypeDialog Platform Config Guide » Public Definition Locations","id":"1042","title":"Public Definition Locations"},"1043":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Getting Help","id":"1043","title":"Getting Help"},"1044":{"body":"Get detailed error messages and check available fields: nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less\\ngrep \\"prompt =\\" .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Validation Errors","id":"1044","title":"Validation Errors"},"1045":{"body":"# Show all available config commands\\nprovisioning config --help # Show help for specific service\\nprovisioning config platform --help # List providers and services\\nprovisioning config providers list\\nprovisioning config services list","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Questions","id":"1045","title":"Configuration Questions"},"1046":{"body":"# Validate without deploying\\nnickel typecheck workspace_librecloud/config/config.ncl # Export to see generated config\\nprovisioning config export # Check generated 
files\\nls -la workspace_librecloud/config/generated/","breadcrumbs":"TypeDialog Platform Config Guide » Test Configuration","id":"1046","title":"Test Configuration"},"1047":{"body":"Version : 1.0.0 Last Updated : 2026-01-05 Target Audience : DevOps Engineers, Platform Operators Status : Production Ready Practical guide for deploying the 9-service provisioning platform in any environment using mode-based configuration.","breadcrumbs":"Platform Deployment Guide » Platform Deployment Guide","id":"1047","title":"Platform Deployment Guide"},"1048":{"body":"Prerequisites Deployment Modes Quick Start Solo Mode Deployment Multiuser Mode Deployment CICD Mode Deployment Enterprise Mode Deployment Service Management Health Checks & Monitoring Troubleshooting","breadcrumbs":"Platform Deployment Guide » Table of Contents","id":"1048","title":"Table of Contents"},"1049":{"body":"","breadcrumbs":"Platform Deployment Guide » Prerequisites","id":"1049","title":"Prerequisites"},"105":{"body":"","breadcrumbs":"Setup System Guide » Quick Start","id":"105","title":"Quick Start"},"1050":{"body":"Rust : 1.70+ (for building services) Nickel : Latest (for config validation) Nushell : 0.109.1+ (for scripts) Cargo : Included with Rust Git : For cloning and pulling updates","breadcrumbs":"Platform Deployment Guide » Required Software","id":"1050","title":"Required Software"},"1051":{"body":"Tool Solo Multiuser CICD Enterprise Docker/Podman No Optional Yes Yes SurrealDB No Yes No No Etcd No No No Yes PostgreSQL No Optional No Optional OpenAI/Anthropic API No Optional Yes Yes","breadcrumbs":"Platform Deployment Guide » Required Tools (Mode-Dependent)","id":"1051","title":"Required Tools (Mode-Dependent)"},"1052":{"body":"Resource Solo Multiuser CICD Enterprise CPU Cores 2+ 4+ 8+ 16+ Memory 2 GB 4 GB 8 GB 16 GB Disk 10 GB 50 GB 100 GB 500 GB Network Local Local/Cloud Cloud HA Cloud","breadcrumbs":"Platform Deployment Guide » System Requirements","id":"1052","title":"System Requirements"},"1053":{"body":"# Ensure base directories exist\\nmkdir -p provisioning/schemas/platform\\nmkdir -p provisioning/platform/logs\\nmkdir -p provisioning/platform/data\\nmkdir -p provisioning/.typedialog/platform\\nmkdir -p provisioning/config/runtime","breadcrumbs":"Platform Deployment Guide » Directory Structure","id":"1053","title":"Directory Structure"},"1054":{"body":"","breadcrumbs":"Platform Deployment Guide » Deployment Modes","id":"1054","title":"Deployment Modes"},"1055":{"body":"Requirement Recommended Mode Development & testing solo Team environment (2-10 people) multiuser CI/CD pipelines & automation cicd Production with HA enterprise","breadcrumbs":"Platform Deployment Guide » Mode Selection Matrix","id":"1055","title":"Mode Selection Matrix"},"1056":{"body":"Solo Mode Use Case : Development, testing, demonstration Characteristics : All services run locally with minimal resources Filesystem-based storage (no external databases) No TLS/SSL required Embedded/in-memory backends Single machine only Services Configuration : 2-4 workers per service 30-60 second timeouts No replication or clustering Debug-level logging enabled Startup Time : ~2-5 minutes Data Persistence : Local files only Multiuser Mode Use Case : Team environments, shared infrastructure Characteristics : Shared database backends (SurrealDB) Multiple concurrent users CORS and multi-user features enabled Optional TLS support 2-4 machines (or containerized) Services Configuration : 4-6 workers per service 60-120 second timeouts Basic replication available Info-level 
logging Startup Time : ~3-8 minutes (database dependent) Data Persistence : SurrealDB (shared) CICD Mode Use Case : CI/CD pipelines, ephemeral environments Characteristics : Ephemeral storage (memory, temporary) High throughput RAG system disabled Minimal logging Stateless services Services Configuration : 8-12 workers per service 10-30 second timeouts No persistence Warn-level logging Startup Time : ~1-2 minutes Data Persistence : None (ephemeral) Enterprise Mode Use Case : Production, high availability, compliance Characteristics : Distributed, replicated backends High availability (HA) clustering TLS/SSL encryption Audit logging Full monitoring and observability Services Configuration : 16-32 workers per service 120-300 second timeouts Active replication across 3+ nodes Info-level logging with audit trails Startup Time : ~5-15 minutes (cluster initialization) Data Persistence : Replicated across cluster","breadcrumbs":"Platform Deployment Guide » Mode Characteristics","id":"1056","title":"Mode Characteristics"},"1057":{"body":"","breadcrumbs":"Platform Deployment Guide » Quick Start","id":"1057","title":"Quick Start"},"1058":{"body":"git clone https://github.com/your-org/project-provisioning.git\\ncd project-provisioning","breadcrumbs":"Platform Deployment Guide » 1. Clone Repository","id":"1058","title":"1. Clone Repository"},"1059":{"body":"Choose your mode based on use case: # For development\\nexport DEPLOYMENT_MODE=solo # For team environments\\nexport DEPLOYMENT_MODE=multiuser # For CI/CD\\nexport DEPLOYMENT_MODE=cicd # For production\\nexport DEPLOYMENT_MODE=enterprise","breadcrumbs":"Platform Deployment Guide » 2. Select Deployment Mode","id":"1059","title":"2. Select Deployment Mode"},"106":{"body":"Nushell 0.109.0+ bash One deployment tool: Docker, Kubernetes, SSH, or systemd Optional: KCL, SOPS, Age","breadcrumbs":"Setup System Guide » Prerequisites","id":"106","title":"Prerequisites"},"1060":{"body":"All services use mode-specific TOML configs automatically loaded via environment variables: # Vault Service\\nexport VAULT_MODE=$DEPLOYMENT_MODE # Extension Registry\\nexport REGISTRY_MODE=$DEPLOYMENT_MODE # RAG System\\nexport RAG_MODE=$DEPLOYMENT_MODE # AI Service\\nexport AI_SERVICE_MODE=$DEPLOYMENT_MODE # Provisioning Daemon\\nexport DAEMON_MODE=$DEPLOYMENT_MODE","breadcrumbs":"Platform Deployment Guide » 3. Set Environment Variables","id":"1060","title":"3. Set Environment Variables"},"1061":{"body":"# Build all platform crates\\ncargo build --release -p vault-service \\\\ -p extension-registry \\\\ -p provisioning-rag \\\\ -p ai-service \\\\ -p provisioning-daemon \\\\ -p orchestrator \\\\ -p control-center \\\\ -p mcp-server \\\\ -p installer","breadcrumbs":"Platform Deployment Guide » 4. Build All Services","id":"1061","title":"4. Build All Services"},"1062":{"body":"# Start in dependency order: # 1. Core infrastructure (KMS, storage)\\ncargo run --release -p vault-service & # 2. Configuration and extensions\\ncargo run --release -p extension-registry & # 3. AI/RAG layer\\ncargo run --release -p provisioning-rag &\\ncargo run --release -p ai-service & # 4. Orchestration layer\\ncargo run --release -p orchestrator &\\ncargo run --release -p control-center &\\ncargo run --release -p mcp-server & # 5. Background operations\\ncargo run --release -p provisioning-daemon & # 6. Installer (optional, for new deployments)\\ncargo run --release -p installer &","breadcrumbs":"Platform Deployment Guide » 5. Start Services (Order Matters)","id":"1062","title":"5. 
Start Services (Order Matters)"},"1063":{"body":"# Check all services are running\\npgrep -l \\"vault-service|extension-registry|provisioning-rag|ai-service\\" # Test endpoints\\ncurl http://localhost:8200/health # Vault\\ncurl http://localhost:8081/health # Registry\\ncurl http://localhost:8083/health # RAG\\ncurl http://localhost:8082/health # AI Service\\ncurl http://localhost:9090/health # Orchestrator\\ncurl http://localhost:8080/health # Control Center","breadcrumbs":"Platform Deployment Guide » 6. Verify Services","id":"1063","title":"6. Verify Services"},"1064":{"body":"Perfect for : Development, testing, learning","breadcrumbs":"Platform Deployment Guide » Solo Mode Deployment","id":"1064","title":"Solo Mode Deployment"},"1065":{"body":"# Check that solo schemas are available\\nls -la provisioning/schemas/platform/defaults/deployment/solo-defaults.ncl # Available schemas for each service:\\n# - provisioning/schemas/platform/schemas/vault-service.ncl\\n# - provisioning/schemas/platform/schemas/extension-registry.ncl\\n# - provisioning/schemas/platform/schemas/rag.ncl\\n# - provisioning/schemas/platform/schemas/ai-service.ncl\\n# - provisioning/schemas/platform/schemas/provisioning-daemon.ncl","breadcrumbs":"Platform Deployment Guide » Step 1: Verify Solo Configuration Files","id":"1065","title":"Step 1: Verify Solo Configuration Files"},"1066":{"body":"# Set all services to solo mode\\nexport VAULT_MODE=solo\\nexport REGISTRY_MODE=solo\\nexport RAG_MODE=solo\\nexport AI_SERVICE_MODE=solo\\nexport DAEMON_MODE=solo # Verify settings\\necho $VAULT_MODE # Should output: solo","breadcrumbs":"Platform Deployment Guide » Step 2: Set Solo Environment Variables","id":"1066","title":"Step 2: Set Solo Environment Variables"},"1067":{"body":"# Build in release mode for better performance\\ncargo build --release","breadcrumbs":"Platform Deployment Guide » Step 3: Build Services","id":"1067","title":"Step 3: Build Services"},"1068":{"body":"# Create storage directories for solo mode\\nmkdir -p /tmp/provisioning-solo/{vault,registry,rag,ai,daemon}\\nchmod 755 /tmp/provisioning-solo/{vault,registry,rag,ai,daemon}","breadcrumbs":"Platform Deployment Guide » Step 4: Create Local Data Directories","id":"1068","title":"Step 4: Create Local Data Directories"},"1069":{"body":"# Start each service in a separate terminal or use tmux: # Terminal 1: Vault\\ncargo run --release -p vault-service # Terminal 2: Registry\\ncargo run --release -p extension-registry # Terminal 3: RAG\\ncargo run --release -p provisioning-rag # Terminal 4: AI Service\\ncargo run --release -p ai-service # Terminal 5: Orchestrator\\ncargo run --release -p orchestrator # Terminal 6: Control Center\\ncargo run --release -p control-center # Terminal 7: Daemon\\ncargo run --release -p provisioning-daemon","breadcrumbs":"Platform Deployment Guide » Step 5: Start Services","id":"1069","title":"Step 5: Start Services"},"107":{"body":"# Install provisioning\\ncurl -sSL https://install.provisioning.dev | bash # Run setup wizard\\nprovisioning setup system --interactive # Create workspace\\nprovisioning setup workspace myproject # Start deploying\\nprovisioning server create\\n```plaintext ## Configuration Paths **macOS**: `~/Library/Application Support/provisioning/`\\n**Linux**: `~/.config/provisioning/`\\n**Windows**: `%APPDATA%/provisioning/` ## Directory Structure ```plaintext\\nprovisioning/\\n├── system.toml # System info (immutable)\\n├── user_preferences.toml # User settings (editable)\\n├── platform/ # Platform services\\n├── 
providers/ # Provider configs\\n└── workspaces/ # Workspace definitions └── myproject/ ├── config/ ├── infra/ └── auth.token\\n```plaintext ## Setup Wizard Run the interactive setup wizard: ```bash\\nprovisioning setup system --interactive\\n```plaintext The wizard guides you through: 1. Welcome & Prerequisites Check\\n2. Operating System Detection\\n3. Configuration Path Selection\\n4. Platform Services Setup\\n5. Provider Selection\\n6. Security Configuration\\n7. Review & Confirmation ## Configuration Management ### Hierarchy (highest to lowest priority) 1. Runtime Arguments (`--flag value`)\\n2. Environment Variables (`PROVISIONING_*`)\\n3. Workspace Configuration\\n4. Workspace Authentication Token\\n5. User Preferences (`user_preferences.toml`)\\n6. Platform Configurations (`platform/*.toml`)\\n7. Provider Configurations (`providers/*.toml`)\\n8. System Configuration (`system.toml`)\\n9. Built-in Defaults ### Configuration Files - `system.toml` - System information (OS, architecture, paths)\\n- `user_preferences.toml` - User preferences (editor, format, etc.)\\n- `platform/*.toml` - Service endpoints and configuration\\n- `providers/*.toml` - Cloud provider settings ## Multiple Workspaces Create and manage multiple isolated environments: ```bash\\n# Create workspace\\nprovisioning setup workspace dev\\nprovisioning setup workspace prod # List workspaces\\nprovisioning workspace list # Activate workspace\\nprovisioning workspace activate prod\\n```plaintext ## Configuration Updates Update any setting: ```bash\\n# Update platform configuration\\nprovisioning setup platform --config new-config.toml # Update provider settings\\nprovisioning setup provider upcloud --config upcloud-config.toml # Validate changes\\nprovisioning setup validate\\n```plaintext ## Backup & Restore ```bash\\n# Backup current configuration\\nprovisioning setup backup --path ./backup.tar.gz # Restore from backup\\nprovisioning setup restore --path ./backup.tar.gz # Migrate from old setup\\nprovisioning setup migrate --from-existing\\n```plaintext ## Troubleshooting ### \\"Command not found: provisioning\\" ```bash\\nexport PATH=\\"/usr/local/bin:$PATH\\"\\n```plaintext ### \\"Nushell not found\\" ```bash\\ncurl -sSL https://raw.githubusercontent.com/nushell/nushell/main/install.sh | bash\\n```plaintext ### \\"Cannot write to directory\\" ```bash\\nchmod 755 ~/Library/Application\\\\ Support/provisioning/\\n```plaintext ### Check required tools ```bash\\nprovisioning setup validate --check-tools\\n```plaintext ## FAQ **Q: Do I need all optional tools?**\\nA: No. You need at least one deployment tool (Docker, Kubernetes, SSH, or systemd). **Q: Can I use provisioning without Docker?**\\nA: Yes. Provisioning supports Docker, Kubernetes, SSH, systemd, or combinations. **Q: How do I update configuration?**\\nA: `provisioning setup update ` **Q: Can I have multiple workspaces?**\\nA: Yes, unlimited workspaces. **Q: Is my configuration secure?**\\nA: Yes. Credentials stored securely, never in config files. **Q: Can I share workspaces with my team?**\\nA: Yes, via GitOps - configurations in Git, secrets in secure storage. ## Getting Help ```bash\\n# General help\\nprovisioning help # Setup help\\nprovisioning help setup # Specific command help\\nprovisioning setup system --help\\n```plaintext ## Next Steps 1. [Installation Guide](installation-guide.md)\\n2. [Workspace Setup](workspace-setup.md)\\n3. [Provider Configuration](provider-setup.md)\\n4. 
[From Scratch Guide](../guides/from-scratch.md) --- **Status**: Production Ready ✅\\n**Version**: 1.0.0\\n**Last Updated**: 2025-12-09","breadcrumbs":"Setup System Guide » 30-Second Setup","id":"107","title":"30-Second Setup"},"1070":{"body":"# Wait 10-15 seconds for services to start, then test # Check service health\\ncurl -s http://localhost:8200/health | jq .\\ncurl -s http://localhost:8081/health | jq .\\ncurl -s http://localhost:8083/health | jq . # Try a simple operation\\ncurl -X GET http://localhost:9090/api/v1/health","breadcrumbs":"Platform Deployment Guide » Step 6: Test Services","id":"1070","title":"Step 6: Test Services"},"1071":{"body":"# Check that data is stored locally\\nls -la /tmp/provisioning-solo/vault/\\nls -la /tmp/provisioning-solo/registry/ # Data should accumulate as you use the services","breadcrumbs":"Platform Deployment Guide » Step 7: Verify Persistence (Optional)","id":"1071","title":"Step 7: Verify Persistence (Optional)"},"1072":{"body":"# Stop all services\\npkill -f \\"cargo run --release\\" # Remove temporary data (optional)\\nrm -rf /tmp/provisioning-solo","breadcrumbs":"Platform Deployment Guide » Cleanup","id":"1072","title":"Cleanup"},"1073":{"body":"Perfect for : Team environments, shared infrastructure","breadcrumbs":"Platform Deployment Guide » Multiuser Mode Deployment","id":"1073","title":"Multiuser Mode Deployment"},"1074":{"body":"SurrealDB : Running and accessible at http://surrealdb:8000 Network Access : All machines can reach SurrealDB DNS/Hostnames : Services accessible via hostnames (not just localhost)","breadcrumbs":"Platform Deployment Guide » Prerequisites","id":"1074","title":"Prerequisites"},"1075":{"body":"# Using Docker (recommended)\\ndocker run -d \\\\ --name surrealdb \\\\ -p 8000:8000 \\\\ surrealdb/surrealdb:latest \\\\ start --user root --pass root # Or using native installation:\\nsurreal start --user root --pass root","breadcrumbs":"Platform Deployment Guide » Step 1: Deploy SurrealDB","id":"1075","title":"Step 1: Deploy SurrealDB"},"1076":{"body":"# Test SurrealDB connection\\ncurl -s http://localhost:8000/health # Should return: {\\"version\\":\\"v1.x.x\\"}","breadcrumbs":"Platform Deployment Guide » Step 2: Verify SurrealDB Connectivity","id":"1076","title":"Step 2: Verify SurrealDB Connectivity"},"1077":{"body":"# Configure all services for multiuser mode\\nexport VAULT_MODE=multiuser\\nexport REGISTRY_MODE=multiuser\\nexport RAG_MODE=multiuser\\nexport AI_SERVICE_MODE=multiuser\\nexport DAEMON_MODE=multiuser # Set database connection\\nexport SURREALDB_URL=http://surrealdb:8000\\nexport SURREALDB_USER=root\\nexport SURREALDB_PASS=root # Set service hostnames (if not localhost)\\nexport VAULT_SERVICE_HOST=vault.internal\\nexport REGISTRY_HOST=registry.internal\\nexport RAG_HOST=rag.internal","breadcrumbs":"Platform Deployment Guide » Step 3: Set Multiuser Environment Variables","id":"1077","title":"Step 3: Set Multiuser Environment Variables"},"1078":{"body":"cargo build --release","breadcrumbs":"Platform Deployment Guide » Step 4: Build Services","id":"1078","title":"Step 4: Build Services"},"1079":{"body":"# Create directories on shared storage (NFS, etc.)\\nmkdir -p /mnt/provisioning-data/{vault,registry,rag,ai}\\nchmod 755 /mnt/provisioning-data/{vault,registry,rag,ai} # Or use local directories if on separate machines\\nmkdir -p /var/lib/provisioning/{vault,registry,rag,ai}","breadcrumbs":"Platform Deployment Guide » Step 5: Create Shared Data Directories","id":"1079","title":"Step 5: Create Shared Data 
Directories"},"108":{"body":"This guide has moved to a multi-chapter format for better readability.","breadcrumbs":"Quick Start (Full) » Quick Start","id":"108","title":"Quick Start"},"1080":{"body":"# Machine 1: Infrastructure services\\nssh ops@machine1\\nexport VAULT_MODE=multiuser\\ncargo run --release -p vault-service &\\ncargo run --release -p extension-registry & # Machine 2: AI services\\nssh ops@machine2\\nexport RAG_MODE=multiuser\\nexport AI_SERVICE_MODE=multiuser\\ncargo run --release -p provisioning-rag &\\ncargo run --release -p ai-service & # Machine 3: Orchestration\\nssh ops@machine3\\ncargo run --release -p orchestrator &\\ncargo run --release -p control-center & # Machine 4: Background tasks\\nssh ops@machine4\\nexport DAEMON_MODE=multiuser\\ncargo run --release -p provisioning-daemon &","breadcrumbs":"Platform Deployment Guide » Step 6: Start Services on Multiple Machines","id":"1080","title":"Step 6: Start Services on Multiple Machines"},"1081":{"body":"# From any machine, test cross-machine connectivity\\ncurl -s http://machine1:8200/health\\ncurl -s http://machine2:8083/health\\ncurl -s http://machine3:9090/health # Test integration\\ncurl -X POST http://machine3:9090/api/v1/provision \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"workspace\\": \\"test\\"}\'","breadcrumbs":"Platform Deployment Guide » Step 7: Test Multi-Machine Setup","id":"1081","title":"Step 7: Test Multi-Machine Setup"},"1082":{"body":"# Create shared credentials\\nexport VAULT_TOKEN=s.xxxxxxxxxxx # Configure TLS (optional but recommended)\\n# Update configs to use https:// URLs\\nexport VAULT_MODE=multiuser\\n# Edit provisioning/schemas/platform/schemas/vault-service.ncl\\n# Add TLS configuration in the schema definition\\n# See: provisioning/schemas/platform/validators/ for constraints","breadcrumbs":"Platform Deployment Guide » Step 8: Enable User Access","id":"1082","title":"Step 8: Enable User Access"},"1083":{"body":"# Check all services are connected to SurrealDB\\nfor host in machine1 machine2 machine3 machine4; do ssh ops@$host \\"curl -s http://localhost/api/v1/health | jq .database_connected\\"\\ndone # Monitor SurrealDB\\ncurl -s http://surrealdb:8000/version","breadcrumbs":"Platform Deployment Guide » Monitoring Multiuser Deployment","id":"1083","title":"Monitoring Multiuser Deployment"},"1084":{"body":"Perfect for : GitHub Actions, GitLab CI, Jenkins, cloud automation","breadcrumbs":"Platform Deployment Guide » CICD Mode Deployment","id":"1084","title":"CICD Mode Deployment"},"1085":{"body":"CICD mode services: Don\'t persist data between runs Use in-memory storage Have RAG completely disabled Optimize for startup speed Suitable for containerized deployments","breadcrumbs":"Platform Deployment Guide » Step 1: Understand Ephemeral Nature","id":"1085","title":"Step 1: Understand Ephemeral Nature"},"1086":{"body":"# Use cicd mode for all services\\nexport VAULT_MODE=cicd\\nexport REGISTRY_MODE=cicd\\nexport RAG_MODE=cicd\\nexport AI_SERVICE_MODE=cicd\\nexport DAEMON_MODE=cicd # Disable TLS (not needed in CI)\\nexport CI_ENVIRONMENT=true","breadcrumbs":"Platform Deployment Guide » Step 2: Set CICD Environment Variables","id":"1086","title":"Step 2: Set CICD Environment Variables"},"1087":{"body":"# Dockerfile for CICD deployments\\nFROM rust:1.75-slim WORKDIR /app\\nCOPY . . 
# Build all services\\nRUN cargo build --release # Set CICD mode\\nENV VAULT_MODE=cicd\\nENV REGISTRY_MODE=cicd\\nENV RAG_MODE=cicd\\nENV AI_SERVICE_MODE=cicd # Expose ports\\nEXPOSE 8200 8081 8083 8082 9090 8080 # Run services\\nCMD [\\"sh\\", \\"-c\\", \\"\\\\ cargo run --release -p vault-service & \\\\ cargo run --release -p extension-registry & \\\\ cargo run --release -p provisioning-rag & \\\\ cargo run --release -p ai-service & \\\\ cargo run --release -p orchestrator & \\\\ wait\\"]","breadcrumbs":"Platform Deployment Guide » Step 3: Containerize Services (Optional)","id":"1087","title":"Step 3: Containerize Services (Optional)"},"1088":{"body":"name: CICD Platform Deployment on: push: branches: [main, develop] jobs: test-deployment: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Install Rust uses: actions-rs/toolchain@v1 with: toolchain: 1.75 profile: minimal - name: Set CICD Mode run: | echo \\"VAULT_MODE=cicd\\" >> $GITHUB_ENV echo \\"REGISTRY_MODE=cicd\\" >> $GITHUB_ENV echo \\"RAG_MODE=cicd\\" >> $GITHUB_ENV echo \\"AI_SERVICE_MODE=cicd\\" >> $GITHUB_ENV echo \\"DAEMON_MODE=cicd\\" >> $GITHUB_ENV - name: Build Services run: cargo build --release - name: Run Integration Tests run: | # Start services in background cargo run --release -p vault-service & cargo run --release -p extension-registry & cargo run --release -p orchestrator & # Wait for startup sleep 10 # Run tests cargo test --release - name: Health Checks run: | curl -f http://localhost:8200/health curl -f http://localhost:8081/health curl -f http://localhost:9090/health deploy: needs: test-deployment runs-on: ubuntu-latest if: github.ref == \'refs/heads/main\' steps: - uses: actions/checkout@v3 - name: Deploy to Production run: | # Deploy production enterprise cluster ./scripts/deploy-enterprise.sh","breadcrumbs":"Platform Deployment Guide » Step 4: GitHub Actions Example","id":"1088","title":"Step 4: GitHub Actions Example"},"1089":{"body":"# Simulate CI environment locally\\nexport VAULT_MODE=cicd\\nexport CI_ENVIRONMENT=true # Build\\ncargo build --release # Run short-lived services for testing\\ntimeout 30 cargo run --release -p vault-service &\\ntimeout 30 cargo run --release -p extension-registry &\\ntimeout 30 cargo run --release -p orchestrator & # Run tests while services are running\\nsleep 5\\ncargo test --release # Services auto-cleanup after timeout","breadcrumbs":"Platform Deployment Guide » Step 5: Run CICD Tests","id":"1089","title":"Step 5: Run CICD Tests"},"109":{"body":"Please see the complete quick start guide here: Prerequisites - System requirements and setup Installation - Install provisioning platform First Deployment - Deploy your first infrastructure Verification - Verify your deployment","breadcrumbs":"Quick Start (Full) » 📖 Navigate to Quick Start Guide","id":"109","title":"📖 Navigate to Quick Start Guide"},"1090":{"body":"Perfect for : Production, high availability, compliance","breadcrumbs":"Platform Deployment Guide » Enterprise Mode Deployment","id":"1090","title":"Enterprise Mode Deployment"},"1091":{"body":"3+ Machines : Minimum 3 for HA Etcd Cluster : For distributed consensus Load Balancer : HAProxy, nginx, or cloud LB TLS Certificates : Valid certificates for all services Monitoring : Prometheus, ELK, or cloud monitoring Backup System : Daily snapshots to S3 or similar","breadcrumbs":"Platform Deployment Guide » Prerequisites","id":"1091","title":"Prerequisites"},"1092":{"body":"1.1 Deploy Etcd Cluster # Node 1, 2, 3\\netcd --name=node-1 \\\\ 
--listen-client-urls=http://0.0.0.0:2379 \\\\ --advertise-client-urls=http://node-1.internal:2379 \\\\ --initial-cluster=\\"node-1=http://node-1.internal:2380,node-2=http://node-2.internal:2380,node-3=http://node-3.internal:2380\\" \\\\ --initial-cluster-state=new # Verify cluster\\netcdctl --endpoints=http://localhost:2379 member list 1.2 Deploy Load Balancer # HAProxy configuration for vault-service (example)\\nfrontend vault_frontend bind *:8200 mode tcp default_backend vault_backend backend vault_backend mode tcp balance roundrobin server vault-1 10.0.1.10:8200 check server vault-2 10.0.1.11:8200 check server vault-3 10.0.1.12:8200 check 1.3 Configure TLS # Generate certificates (or use existing)\\nmkdir -p /etc/provisioning/tls # For each service:\\nopenssl req -x509 -newkey rsa:4096 \\\\ -keyout /etc/provisioning/tls/vault-key.pem \\\\ -out /etc/provisioning/tls/vault-cert.pem \\\\ -days 365 -nodes \\\\ -subj \\"/CN=vault.provisioning.prod\\" # Set permissions\\nchmod 600 /etc/provisioning/tls/*-key.pem\\nchmod 644 /etc/provisioning/tls/*-cert.pem","breadcrumbs":"Platform Deployment Guide » Step 1: Deploy Infrastructure","id":"1092","title":"Step 1: Deploy Infrastructure"},"1093":{"body":"# All machines: Set enterprise mode\\nexport VAULT_MODE=enterprise\\nexport REGISTRY_MODE=enterprise\\nexport RAG_MODE=enterprise\\nexport AI_SERVICE_MODE=enterprise\\nexport DAEMON_MODE=enterprise # Database cluster\\nexport SURREALDB_URL=\\"ws://surrealdb-cluster.internal:8000\\"\\nexport SURREALDB_REPLICAS=3 # Etcd cluster\\nexport ETCD_ENDPOINTS=\\"http://node-1.internal:2379,http://node-2.internal:2379,http://node-3.internal:2379\\" # TLS configuration\\nexport TLS_CERT_PATH=/etc/provisioning/tls\\nexport TLS_VERIFY=true\\nexport TLS_CA_CERT=/etc/provisioning/tls/ca.crt # Monitoring\\nexport PROMETHEUS_URL=http://prometheus.internal:9090\\nexport METRICS_ENABLED=true\\nexport AUDIT_LOG_ENABLED=true","breadcrumbs":"Platform Deployment Guide » Step 2: Set Enterprise Environment Variables","id":"1093","title":"Step 2: Set Enterprise Environment Variables"},"1094":{"body":"# Ansible playbook (simplified)\\n---\\n- hosts: provisioning_cluster tasks: - name: Build services shell: cargo build --release - name: Start vault-service (machine 1-3) shell: \\"cargo run --release -p vault-service\\" when: \\"\'vault\' in group_names\\" - name: Start orchestrator (machine 2-3) shell: \\"cargo run --release -p orchestrator\\" when: \\"\'orchestrator\' in group_names\\" - name: Start daemon (machine 3) shell: \\"cargo run --release -p provisioning-daemon\\" when: \\"\'daemon\' in group_names\\" - name: Verify cluster health uri: url: \\"https://{{ inventory_hostname }}:9090/health\\" validate_certs: yes","breadcrumbs":"Platform Deployment Guide » Step 3: Deploy Services Across Cluster","id":"1094","title":"Step 3: Deploy Services Across Cluster"},"1095":{"body":"# Check cluster status\\ncurl -s https://vault.internal:8200/health | jq .state # Check replication\\ncurl -s https://orchestrator.internal:9090/api/v1/cluster/status # Monitor etcd\\netcdctl --endpoints=https://node-1.internal:2379 endpoint health # Check leader election\\netcdctl --endpoints=https://node-1.internal:2379 election list","breadcrumbs":"Platform Deployment Guide » Step 4: Monitor Cluster Health","id":"1095","title":"Step 4: Monitor Cluster Health"},"1096":{"body":"# Prometheus configuration\\nglobal: scrape_interval: 30s evaluation_interval: 30s scrape_configs: - job_name: \'vault-service\' scheme: https tls_config: ca_file: 
/etc/provisioning/tls/ca.crt static_configs: - targets: [\'vault-1.internal:8200\', \'vault-2.internal:8200\', \'vault-3.internal:8200\'] - job_name: \'orchestrator\' scheme: https static_configs: - targets: [\'orch-1.internal:9090\', \'orch-2.internal:9090\', \'orch-3.internal:9090\']","breadcrumbs":"Platform Deployment Guide » Step 5: Enable Monitoring & Alerting","id":"1096","title":"Step 5: Enable Monitoring & Alerting"},"1097":{"body":"# Daily backup script\\n#!/bin/bash\\nBACKUP_DIR=\\"/mnt/provisioning-backups\\"\\nDATE=$(date +%Y%m%d_%H%M%S) # Backup etcd\\netcdctl --endpoints=https://node-1.internal:2379 \\\\ snapshot save \\"$BACKUP_DIR/etcd-$DATE.db\\" # Backup SurrealDB\\ncurl -X POST https://surrealdb.internal:8000/backup \\\\ -H \\"Authorization: Bearer $SURREALDB_TOKEN\\" \\\\ > \\"$BACKUP_DIR/surreal-$DATE.sql\\" # Upload to S3\\naws s3 cp \\"$BACKUP_DIR/etcd-$DATE.db\\" \\\\ s3://provisioning-backups/etcd/ # Cleanup old backups (keep 30 days)\\nfind \\"$BACKUP_DIR\\" -mtime +30 -delete","breadcrumbs":"Platform Deployment Guide » Step 6: Backup & Recovery","id":"1097","title":"Step 6: Backup & Recovery"},"1098":{"body":"","breadcrumbs":"Platform Deployment Guide » Service Management","id":"1098","title":"Service Management"},"1099":{"body":"Individual Service Startup # Start one service\\nexport VAULT_MODE=enterprise\\ncargo run --release -p vault-service # In another terminal\\nexport REGISTRY_MODE=enterprise\\ncargo run --release -p extension-registry Batch Startup # Start all services (dependency order)\\n#!/bin/bash\\nset -e MODE=${1:-solo}\\nexport VAULT_MODE=$MODE\\nexport REGISTRY_MODE=$MODE\\nexport RAG_MODE=$MODE\\nexport AI_SERVICE_MODE=$MODE\\nexport DAEMON_MODE=$MODE echo \\"Starting provisioning platform in $MODE mode...\\" # Core services first\\necho \\"Starting infrastructure...\\"\\ncargo run --release -p vault-service &\\nVAULT_PID=$! echo \\"Starting extension registry...\\"\\ncargo run --release -p extension-registry &\\nREGISTRY_PID=$! # AI layer\\necho \\"Starting AI services...\\"\\ncargo run --release -p provisioning-rag &\\nRAG_PID=$! cargo run --release -p ai-service &\\nAI_PID=$! # Orchestration\\necho \\"Starting orchestration...\\"\\ncargo run --release -p orchestrator &\\nORCH_PID=$! echo \\"All services started. 
PIDs: $VAULT_PID $REGISTRY_PID $RAG_PID $AI_PID $ORCH_PID\\"","breadcrumbs":"Platform Deployment Guide » Starting Services","id":"1099","title":"Starting Services"},"11":{"body":"Document Description Quickstart Cheatsheet Command shortcuts OCI Quick Reference OCI operations","breadcrumbs":"Home » 📦 Quick References","id":"11","title":"📦 Quick References"},"110":{"body":"# Check system status\\nprovisioning status # Get next step suggestions\\nprovisioning next # View interactive guide\\nprovisioning guide from-scratch For the complete step-by-step walkthrough, start with Prerequisites.","breadcrumbs":"Quick Start (Full) » Quick Commands","id":"110","title":"Quick Commands"},"1100":{"body":"# Stop all services gracefully\\npkill -SIGTERM -f \\"cargo run --release -p\\" # Wait for graceful shutdown\\nsleep 5 # Force kill if needed\\npkill -9 -f \\"cargo run --release -p\\" # Verify all stopped\\npgrep -f \\"cargo run --release -p\\" && echo \\"Services still running\\" || echo \\"All stopped\\"","breadcrumbs":"Platform Deployment Guide » Stopping Services","id":"1100","title":"Stopping Services"},"1101":{"body":"# Restart single service\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service & # Restart all services\\n./scripts/restart-all.sh $MODE # Restart with config reload\\nexport VAULT_MODE=multiuser\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service &","breadcrumbs":"Platform Deployment Guide » Restarting Services","id":"1101","title":"Restarting Services"},"1102":{"body":"# Check running processes\\npgrep -a \\"cargo run --release\\" # Check listening ports\\nnetstat -tlnp | grep -E \\"8200|8081|8083|8082|9090|8080\\" # Or using ss (modern alternative)\\nss -tlnp | grep -E \\"8200|8081|8083|8082|9090|8080\\" # Health endpoint checks\\nfor service in vault registry rag ai orchestrator; do echo \\"=== $service ===\\" curl -s http://localhost:${port[$service]}/health | jq .\\ndone","breadcrumbs":"Platform Deployment Guide » Checking Service Status","id":"1102","title":"Checking Service Status"},"1103":{"body":"","breadcrumbs":"Platform Deployment Guide » Health Checks & Monitoring","id":"1103","title":"Health Checks & Monitoring"},"1104":{"body":"# Vault Service\\ncurl -s http://localhost:8200/health | jq .\\n# Expected: {\\"status\\":\\"ok\\",\\"uptime\\":123.45} # Extension Registry\\ncurl -s http://localhost:8081/health | jq . # RAG System\\ncurl -s http://localhost:8083/health | jq .\\n# Expected: {\\"status\\":\\"ok\\",\\"embeddings\\":\\"ready\\",\\"vector_db\\":\\"connected\\"} # AI Service\\ncurl -s http://localhost:8082/health | jq . # Orchestrator\\ncurl -s http://localhost:9090/health | jq . # Control Center\\ncurl -s http://localhost:8080/health | jq .","breadcrumbs":"Platform Deployment Guide » Manual Health Verification","id":"1104","title":"Manual Health Verification"},"1105":{"body":"# Test vault <-> registry integration\\ncurl -X POST http://localhost:8200/api/encrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"plaintext\\":\\"secret\\"}\' | jq . # Test RAG system\\ncurl -X POST http://localhost:8083/api/ingest \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"document\\":\\"test.md\\",\\"content\\":\\"# Test\\"}\' | jq . # Test orchestrator\\ncurl -X GET http://localhost:9090/api/v1/status | jq . 
# End-to-end workflow\\ncurl -X POST http://localhost:9090/api/v1/provision \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"workspace\\": \\"test\\", \\"services\\": [\\"vault\\", \\"registry\\"], \\"mode\\": \\"solo\\" }\' | jq .","breadcrumbs":"Platform Deployment Guide » Service Integration Tests","id":"1105","title":"Service Integration Tests"},"1106":{"body":"Prometheus Metrics # Query service uptime\\ncurl -s \'http://prometheus:9090/api/v1/query?query=up\' | jq . # Query request rate\\ncurl -s \'http://prometheus:9090/api/v1/query?query=rate(http_requests_total[5m])\' | jq . # Query error rate\\ncurl -s \'http://prometheus:9090/api/v1/query?query=rate(http_errors_total[5m])\' | jq . Log Aggregation # Follow vault logs\\ntail -f /var/log/provisioning/vault-service.log # Follow all service logs\\ntail -f /var/log/provisioning/*.log # Search for errors\\ngrep -r \\"ERROR\\" /var/log/provisioning/ # Follow with filtering\\ntail -f /var/log/provisioning/orchestrator.log | grep -E \\"ERROR|WARN\\"","breadcrumbs":"Platform Deployment Guide » Monitoring Dashboards","id":"1106","title":"Monitoring Dashboards"},"1107":{"body":"# AlertManager configuration\\ngroups: - name: provisioning rules: - alert: ServiceDown expr: up{job=~\\"vault|registry|rag|orchestrator\\"} == 0 for: 5m annotations: summary: \\"{{ $labels.job }} is down\\" - alert: HighErrorRate expr: rate(http_errors_total[5m]) > 0.05 annotations: summary: \\"High error rate detected\\" - alert: DiskSpaceWarning expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.2 annotations: summary: \\"Disk space below 20%\\"","breadcrumbs":"Platform Deployment Guide » Alerting","id":"1107","title":"Alerting"},"1108":{"body":"","breadcrumbs":"Platform Deployment Guide » Troubleshooting","id":"1108","title":"Troubleshooting"},"1109":{"body":"Problem : error: failed to bind to port 8200 Solutions : # Check if port is in use\\nlsof -i :8200\\nss -tlnp | grep 8200 # Kill existing process\\npkill -9 -f vault-service # Or use different port\\nexport VAULT_SERVER_PORT=8201\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Service Won\'t Start","id":"1109","title":"Service Won\'t Start"},"111":{"body":"Before installing the Provisioning Platform, ensure your system meets the following requirements.","breadcrumbs":"Prerequisites » Prerequisites","id":"111","title":"Prerequisites"},"1110":{"body":"Problem : error: failed to load config from mode file Solutions : # Verify schemas exist\\nls -la provisioning/schemas/platform/schemas/vault-service.ncl # Validate schema syntax\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # Check defaults are present\\nnickel typecheck provisioning/schemas/platform/defaults/vault-service-defaults.ncl # Verify deployment mode overlay exists\\nls -la provisioning/schemas/platform/defaults/deployment/$VAULT_MODE-defaults.ncl # Run service with explicit mode\\nexport VAULT_MODE=solo\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Configuration Loading Fails","id":"1110","title":"Configuration Loading Fails"},"1111":{"body":"Problem : error: failed to connect to database Solutions : # Verify database is running\\ncurl http://surrealdb:8000/health\\netcdctl --endpoints=http://etcd:2379 endpoint health # Check connectivity\\nnc -zv surrealdb 8000\\nnc -zv etcd 2379 # Update connection string\\nexport SURREALDB_URL=ws://surrealdb:8000\\nexport ETCD_ENDPOINTS=http://etcd:2379 # Restart service with new 
config\\npkill -9 vault-service\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Database Connection Issues","id":"1111","title":"Database Connection Issues"},"1112":{"body":"Problem : Service exits with code 1 or 139 Solutions : # Run with verbose logging\\nRUST_LOG=debug cargo run -p vault-service 2>&1 | head -50 # Check system resources\\nfree -h\\ndf -h # Check for core dumps\\ncoredumpctl list # Run under debugger (if crash suspected)\\nrust-gdb --args target/release/vault-service","breadcrumbs":"Platform Deployment Guide » Service Crashes on Startup","id":"1112","title":"Service Crashes on Startup"},"1113":{"body":"Problem : Service consuming > expected memory Solutions : # Check memory usage\\nps aux | grep vault-service | grep -v grep # Monitor over time\\nwatch -n 1 \'ps aux | grep vault-service | grep -v grep\' # Reduce worker count\\nexport VAULT_SERVER_WORKERS=2\\ncargo run --release -p vault-service # Check for memory leaks\\nvalgrind --leak-check=full target/release/vault-service","breadcrumbs":"Platform Deployment Guide » High Memory Usage","id":"1113","title":"High Memory Usage"},"1114":{"body":"Problem : error: failed to resolve hostname Solutions : # Test DNS resolution\\nnslookup vault.internal\\ndig vault.internal # Test connectivity to service\\ncurl -v http://vault.internal:8200/health # Add to /etc/hosts if needed\\necho \\"10.0.1.10 vault.internal\\" >> /etc/hosts # Check network interface\\nip addr show\\nnetstat -nr","breadcrumbs":"Platform Deployment Guide » Network/DNS Issues","id":"1114","title":"Network/DNS Issues"},"1115":{"body":"Problem : Data lost after restart Solutions : # Verify backup exists\\nls -la /mnt/provisioning-backups/\\nls -la /var/lib/provisioning/ # Check disk space\\ndf -h /var/lib/provisioning # Verify file permissions\\nls -l /var/lib/provisioning/vault/\\nchmod 755 /var/lib/provisioning/vault/* # Restore from backup\\n./scripts/restore-backup.sh /mnt/provisioning-backups/vault-20260105.sql","breadcrumbs":"Platform Deployment Guide » Data Persistence Issues","id":"1115","title":"Data Persistence Issues"},"1116":{"body":"When troubleshooting, use this systematic approach: # 1. Check service is running\\npgrep -f vault-service || echo \\"Service not running\\" # 2. Check port is listening\\nss -tlnp | grep 8200 || echo \\"Port not listening\\" # 3. Check logs for errors\\ntail -20 /var/log/provisioning/vault-service.log | grep -i error # 4. Test HTTP endpoint\\ncurl -i http://localhost:8200/health # 5. Check dependencies\\ncurl http://surrealdb:8000/health\\netcdctl --endpoints=http://etcd:2379 endpoint health # 6. Check schema definition\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # 7. Verify environment variables\\nenv | grep -E \\"VAULT_|SURREALDB_|ETCD_\\" # 8. Check system resources\\nfree -h && df -h && top -bn1 | head -10","breadcrumbs":"Platform Deployment Guide » Debugging Checklist","id":"1116","title":"Debugging Checklist"},"1117":{"body":"","breadcrumbs":"Platform Deployment Guide » Configuration Updates","id":"1117","title":"Configuration Updates"},"1118":{"body":"# 1. Edit the schema definition\\nvim provisioning/schemas/platform/schemas/vault-service.ncl # 2. Update defaults if needed\\nvim provisioning/schemas/platform/defaults/vault-service-defaults.ncl # 3. Validate syntax\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # 4. 
Re-export configuration from schemas\\n./provisioning/.typedialog/platform/scripts/generate-configs.nu vault-service multiuser # 5. Restart affected service (no downtime for clients)\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service & # 4. Verify configuration loaded\\ncurl http://localhost:8200/api/config | jq .","breadcrumbs":"Platform Deployment Guide » Updating Service Configuration","id":"1118","title":"Updating Service Configuration"},"1119":{"body":"# Migrate from solo to multiuser: # 1. Stop services\\npkill -SIGTERM -f \\"cargo run\\"\\nsleep 5 # 2. Backup current data\\ntar -czf /backup/provisioning-solo-$(date +%s).tar.gz /var/lib/provisioning/ # 3. Set new mode\\nexport VAULT_MODE=multiuser\\nexport REGISTRY_MODE=multiuser\\nexport RAG_MODE=multiuser # 4. Start services with new config\\ncargo run --release -p vault-service &\\ncargo run --release -p extension-registry & # 5. Verify new mode\\ncurl http://localhost:8200/api/config | jq .deployment_mode","breadcrumbs":"Platform Deployment Guide » Mode Migration","id":"1119","title":"Mode Migration"},"112":{"body":"","breadcrumbs":"Prerequisites » Hardware Requirements","id":"112","title":"Hardware Requirements"},"1120":{"body":"Before deploying to production: All services compiled in release mode (--release) TLS certificates installed and valid Database cluster deployed and healthy Load balancer configured and routing traffic Monitoring and alerting configured Backup system tested and working High availability verified (failover tested) Security hardening applied (firewall rules, etc.) Documentation updated for your environment Team trained on deployment procedures Runbooks created for common operations Disaster recovery plan tested","breadcrumbs":"Platform Deployment Guide » Production Checklist","id":"1120","title":"Production Checklist"},"1121":{"body":"","breadcrumbs":"Platform Deployment Guide » Getting Help","id":"1121","title":"Getting Help"},"1122":{"body":"GitHub Issues : Report bugs at github.com/your-org/provisioning/issues Documentation : Full docs at provisioning/docs/ Slack Channel : #provisioning-platform","breadcrumbs":"Platform Deployment Guide » Community Resources","id":"1122","title":"Community Resources"},"1123":{"body":"Platform Team : platform@your-org.com On-Call : Check PagerDuty for active rotation Escalation : Contact infrastructure leadership","breadcrumbs":"Platform Deployment Guide » Internal Support","id":"1123","title":"Internal Support"},"1124":{"body":"# View all available commands\\ncargo run -- --help # View service schemas\\nls -la provisioning/schemas/platform/schemas/\\nls -la provisioning/schemas/platform/defaults/ # List running services\\nps aux | grep cargo # Monitor service logs in real-time\\njournalctl -fu provisioning-vault # Generate diagnostics bundle\\n./scripts/generate-diagnostics.sh > /tmp/diagnostics-$(date +%s).tar.gz","breadcrumbs":"Platform Deployment Guide » Useful Commands Reference","id":"1124","title":"Useful Commands Reference"},"1125":{"body":"Version : 1.0.0 Last Updated : 2025-10-06","breadcrumbs":"Service Management Guide » Service Management Guide","id":"1125","title":"Service Management Guide"},"1126":{"body":"Overview Service Architecture Service Registry Platform Commands Service Commands Deployment Modes Health Monitoring Dependency Management Pre-flight Checks Troubleshooting","breadcrumbs":"Service Management Guide » Table of Contents","id":"1126","title":"Table of Contents"},"1127":{"body":"The Service Management System 
provides comprehensive lifecycle management for all platform services (orchestrator, control-center, CoreDNS, Gitea, OCI registry, MCP server, API gateway).","breadcrumbs":"Service Management Guide » Overview","id":"1127","title":"Overview"},"1128":{"body":"Unified Service Management : Single interface for all services Automatic Dependency Resolution : Start services in correct order Health Monitoring : Continuous health checks with automatic recovery Multiple Deployment Modes : Binary, Docker, Docker Compose, Kubernetes, Remote Pre-flight Checks : Validate prerequisites before operations Service Registry : Centralized service configuration","breadcrumbs":"Service Management Guide » Key Features","id":"1128","title":"Key Features"},"1129":{"body":"Service Type Category Description orchestrator Platform Orchestration Rust-based workflow coordinator control-center Platform UI Web-based management interface coredns Infrastructure DNS Local DNS resolution gitea Infrastructure Git Self-hosted Git service oci-registry Infrastructure Registry OCI-compliant container registry mcp-server Platform API Model Context Protocol server api-gateway Platform API Unified REST API gateway","breadcrumbs":"Service Management Guide » Supported Services","id":"1129","title":"Supported Services"},"113":{"body":"CPU : 2 cores RAM : 4GB Disk : 20GB available space Network : Internet connection for downloading dependencies","breadcrumbs":"Prerequisites » Minimum Requirements (Solo Mode)","id":"113","title":"Minimum Requirements (Solo Mode)"},"1130":{"body":"","breadcrumbs":"Service Management Guide » Service Architecture","id":"1130","title":"Service Architecture"},"1131":{"body":"┌─────────────────────────────────────────┐\\n│ Service Management CLI │\\n│ (platform/services commands) │\\n└─────────────────┬───────────────────────┘ │ ┌──────────┴──────────┐ │ │ ▼ ▼\\n┌──────────────┐ ┌───────────────┐\\n│ Manager │ │ Lifecycle │\\n│ (Core) │ │ (Start/Stop)│\\n└──────┬───────┘ └───────┬───────┘ │ │ ▼ ▼\\n┌──────────────┐ ┌───────────────┐\\n│ Health │ │ Dependencies │\\n│ (Checks) │ │ (Resolution) │\\n└──────────────┘ └───────────────┘ │ │ └────────┬───────────┘ │ ▼ ┌────────────────┐ │ Pre-flight │ │ (Validation) │ └────────────────┘\\n```plaintext ### Component Responsibilities **Manager** (`manager.nu`) - Service registry loading\\n- Service status tracking\\n- State persistence **Lifecycle** (`lifecycle.nu`) - Service start/stop operations\\n- Deployment mode handling\\n- Process management **Health** (`health.nu`) - Health check execution\\n- HTTP/TCP/Command/File checks\\n- Continuous monitoring **Dependencies** (`dependencies.nu`) - Dependency graph analysis\\n- Topological sorting\\n- Startup order calculation **Pre-flight** (`preflight.nu`) - Prerequisite validation\\n- Conflict detection\\n- Auto-start orchestration --- ## Service Registry ### Configuration File **Location**: `provisioning/config/services.toml` ### Service Definition Structure ```toml\\n[services.]\\nname = \\"\\"\\ntype = \\"platform\\" | \\"infrastructure\\" | \\"utility\\"\\ncategory = \\"orchestration\\" | \\"auth\\" | \\"dns\\" | \\"git\\" | \\"registry\\" | \\"api\\" | \\"ui\\"\\ndescription = \\"Service description\\"\\nrequired_for = [\\"operation1\\", \\"operation2\\"]\\ndependencies = [\\"dependency1\\", \\"dependency2\\"]\\nconflicts = [\\"conflicting-service\\"] [services..deployment]\\nmode = \\"binary\\" | \\"docker\\" | \\"docker-compose\\" | \\"kubernetes\\" | \\"remote\\" # Mode-specific 
configuration\\n[services..deployment.binary]\\nbinary_path = \\"/path/to/binary\\"\\nargs = [\\"--arg1\\", \\"value1\\"]\\nworking_dir = \\"/working/directory\\"\\nenv = { KEY = \\"value\\" } [services..health_check]\\ntype = \\"http\\" | \\"tcp\\" | \\"command\\" | \\"file\\" | \\"none\\"\\ninterval = 10\\nretries = 3\\ntimeout = 5 [services..health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200\\nmethod = \\"GET\\" [services..startup]\\nauto_start = true\\nstart_timeout = 30\\nstart_order = 10\\nrestart_on_failure = true\\nmax_restarts = 3\\n```plaintext ### Example: Orchestrator Service ```toml\\n[services.orchestrator]\\nname = \\"orchestrator\\"\\ntype = \\"platform\\"\\ncategory = \\"orchestration\\"\\ndescription = \\"Rust-based orchestrator for workflow coordination\\"\\nrequired_for = [\\"server\\", \\"taskserv\\", \\"cluster\\", \\"workflow\\", \\"batch\\"] [services.orchestrator.deployment]\\nmode = \\"binary\\" [services.orchestrator.deployment.binary]\\nbinary_path = \\"${HOME}/.provisioning/bin/provisioning-orchestrator\\"\\nargs = [\\"--port\\", \\"8080\\", \\"--data-dir\\", \\"${HOME}/.provisioning/orchestrator/data\\"] [services.orchestrator.health_check]\\ntype = \\"http\\" [services.orchestrator.health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200 [services.orchestrator.startup]\\nauto_start = true\\nstart_timeout = 30\\nstart_order = 10\\n```plaintext --- ## Platform Commands Platform commands manage all services as a cohesive system. ### Start Platform Start all auto-start services or specific services: ```bash\\n# Start all auto-start services\\nprovisioning platform start # Start specific services (with dependencies)\\nprovisioning platform start orchestrator control-center # Force restart if already running\\nprovisioning platform start --force orchestrator\\n```plaintext **Behavior**: 1. Resolves dependencies\\n2. Calculates startup order (topological sort)\\n3. Starts services in correct order\\n4. Waits for health checks\\n5. Reports success/failure ### Stop Platform Stop all running services or specific services: ```bash\\n# Stop all running services\\nprovisioning platform stop # Stop specific services\\nprovisioning platform stop orchestrator control-center # Force stop (kill -9)\\nprovisioning platform stop --force orchestrator\\n```plaintext **Behavior**: 1. Checks for dependent services\\n2. Stops in reverse dependency order\\n3. Updates service state\\n4. 
Cleans up PID files ### Restart Platform Restart running services: ```bash\\n# Restart all running services\\nprovisioning platform restart # Restart specific services\\nprovisioning platform restart orchestrator\\n```plaintext ### Platform Status Show status of all services: ```bash\\nprovisioning platform status\\n```plaintext **Output**: ```plaintext\\nPlatform Services Status Running: 3/7 === ORCHESTRATION === 🟢 orchestrator - running (uptime: 3600s) ✅ === UI === 🟢 control-center - running (uptime: 3550s) ✅ === DNS === ⚪ coredns - stopped ❓ === GIT === ⚪ gitea - stopped ❓ === REGISTRY === ⚪ oci-registry - stopped ❓ === API === 🟢 mcp-server - running (uptime: 3540s) ✅ ⚪ api-gateway - stopped ❓\\n```plaintext ### Platform Health Check health of all running services: ```bash\\nprovisioning platform health\\n```plaintext **Output**: ```plaintext\\nPlatform Health Check ✅ orchestrator: Healthy - HTTP health check passed\\n✅ control-center: Healthy - HTTP status 200 matches expected\\n⚪ coredns: Not running\\n✅ mcp-server: Healthy - HTTP health check passed Summary: 3 healthy, 0 unhealthy, 4 not running\\n```plaintext ### Platform Logs View service logs: ```bash\\n# View last 50 lines\\nprovisioning platform logs orchestrator # View last 100 lines\\nprovisioning platform logs orchestrator --lines 100 # Follow logs in real-time\\nprovisioning platform logs orchestrator --follow\\n```plaintext --- ## Service Commands Individual service management commands. ### List Services ```bash\\n# List all services\\nprovisioning services list # List only running services\\nprovisioning services list --running # Filter by category\\nprovisioning services list --category orchestration\\n```plaintext **Output**: ```plaintext\\nname type category status deployment_mode auto_start\\norchestrator platform orchestration running binary true\\ncontrol-center platform ui stopped binary false\\ncoredns infrastructure dns stopped docker false\\n```plaintext ### Service Status Get detailed status of a service: ```bash\\nprovisioning services status orchestrator\\n```plaintext **Output**: ```plaintext\\nService: orchestrator\\nType: platform\\nCategory: orchestration\\nStatus: running\\nDeployment: binary\\nHealth: healthy\\nAuto-start: true\\nPID: 12345\\nUptime: 3600s\\nDependencies: []\\n```plaintext ### Start Service ```bash\\n# Start service (with pre-flight checks)\\nprovisioning services start orchestrator # Force start (skip checks)\\nprovisioning services start orchestrator --force\\n```plaintext **Pre-flight Checks**: 1. Validate prerequisites (binary exists, Docker running, etc.)\\n2. Check for conflicts\\n3. Verify dependencies are running\\n4. 
Auto-start dependencies if needed ### Stop Service ```bash\\n# Stop service (with dependency check)\\nprovisioning services stop orchestrator # Force stop (ignore dependents)\\nprovisioning services stop orchestrator --force\\n```plaintext ### Restart Service ```bash\\nprovisioning services restart orchestrator\\n```plaintext ### Service Health Check service health: ```bash\\nprovisioning services health orchestrator\\n```plaintext **Output**: ```plaintext\\nService: orchestrator\\nStatus: healthy\\nHealthy: true\\nMessage: HTTP health check passed\\nCheck type: http\\nCheck duration: 15ms\\n```plaintext ### Service Logs ```bash\\n# View logs\\nprovisioning services logs orchestrator # Follow logs\\nprovisioning services logs orchestrator --follow # Custom line count\\nprovisioning services logs orchestrator --lines 200\\n```plaintext ### Check Required Services Check which services are required for an operation: ```bash\\nprovisioning services check server\\n```plaintext **Output**: ```plaintext\\nOperation: server\\nRequired services: orchestrator\\nAll running: true\\n```plaintext ### Service Dependencies View dependency graph: ```bash\\n# View all dependencies\\nprovisioning services dependencies # View specific service dependencies\\nprovisioning services dependencies control-center\\n```plaintext ### Validate Services Validate all service configurations: ```bash\\nprovisioning services validate\\n```plaintext **Output**: ```plaintext\\nTotal services: 7\\nValid: 6\\nInvalid: 1 Invalid services: ❌ coredns: - Docker is not installed or not running\\n```plaintext ### Readiness Report Get platform readiness report: ```bash\\nprovisioning services readiness\\n```plaintext **Output**: ```plaintext\\nPlatform Readiness Report Total services: 7\\nRunning: 3\\nReady to start: 6 Services: 🟢 orchestrator - platform - orchestration 🟢 control-center - platform - ui 🔴 coredns - infrastructure - dns Issues: 1 🟡 gitea - infrastructure - git\\n```plaintext ### Monitor Service Continuous health monitoring: ```bash\\n# Monitor with default interval (30s)\\nprovisioning services monitor orchestrator # Custom interval\\nprovisioning services monitor orchestrator --interval 10\\n```plaintext --- ## Deployment Modes ### Binary Deployment Run services as native binaries. **Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"binary\\" [services.orchestrator.deployment.binary]\\nbinary_path = \\"${HOME}/.provisioning/bin/provisioning-orchestrator\\"\\nargs = [\\"--port\\", \\"8080\\"]\\nworking_dir = \\"${HOME}/.provisioning/orchestrator\\"\\nenv = { RUST_LOG = \\"info\\" }\\n```plaintext **Process Management**: - PID tracking in `~/.provisioning/services/pids/`\\n- Log output to `~/.provisioning/services/logs/`\\n- State tracking in `~/.provisioning/services/state/` ### Docker Deployment Run services as Docker containers. **Configuration**: ```toml\\n[services.coredns.deployment]\\nmode = \\"docker\\" [services.coredns.deployment.docker]\\nimage = \\"coredns/coredns:1.11.1\\"\\ncontainer_name = \\"provisioning-coredns\\"\\nports = [\\"5353:53/udp\\"]\\nvolumes = [\\"${HOME}/.provisioning/coredns/Corefile:/Corefile:ro\\"]\\nrestart_policy = \\"unless-stopped\\"\\n```plaintext **Prerequisites**: - Docker daemon running\\n- Docker CLI installed ### Docker Compose Deployment Run services via Docker Compose. 
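The registry entry below points the compose deployment mode at a compose file, project name, and service name. A minimal sketch of the equivalent manual lifecycle, assuming those same values (this mirrors what the lifecycle manager is described as doing, not its actual implementation):

```bash
# Start, inspect, and tail the orchestrator service via Docker Compose,
# using the compose_file, project_name, and service_name from the
# configuration shown below. Requires Docker Compose v2 ("docker compose").
COMPOSE_FILE="${HOME}/.provisioning/platform/docker-compose.yaml"

docker compose -f "$COMPOSE_FILE" -p provisioning up -d orchestrator
docker compose -f "$COMPOSE_FILE" -p provisioning ps orchestrator
docker compose -f "$COMPOSE_FILE" -p provisioning logs -f orchestrator

# Stop the service when finished
docker compose -f "$COMPOSE_FILE" -p provisioning stop orchestrator
```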
**Configuration**: ```toml\\n[services.platform.deployment]\\nmode = \\"docker-compose\\" [services.platform.deployment.docker_compose]\\ncompose_file = \\"${HOME}/.provisioning/platform/docker-compose.yaml\\"\\nservice_name = \\"orchestrator\\"\\nproject_name = \\"provisioning\\"\\n```plaintext **File**: `provisioning/platform/docker-compose.yaml` ### Kubernetes Deployment Run services on Kubernetes. **Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"kubernetes\\" [services.orchestrator.deployment.kubernetes]\\nnamespace = \\"provisioning\\"\\ndeployment_name = \\"orchestrator\\"\\nmanifests_path = \\"${HOME}/.provisioning/k8s/orchestrator/\\"\\n```plaintext **Prerequisites**: - kubectl installed and configured\\n- Kubernetes cluster accessible ### Remote Deployment Connect to remotely-running services. **Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"remote\\" [services.orchestrator.deployment.remote]\\nendpoint = \\"https://orchestrator.example.com\\"\\ntls_enabled = true\\nauth_token_path = \\"${HOME}/.provisioning/tokens/orchestrator.token\\"\\n```plaintext --- ## Health Monitoring ### Health Check Types #### HTTP Health Check ```toml\\n[services.orchestrator.health_check]\\ntype = \\"http\\" [services.orchestrator.health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200\\nmethod = \\"GET\\"\\n```plaintext #### TCP Health Check ```toml\\n[services.coredns.health_check]\\ntype = \\"tcp\\" [services.coredns.health_check.tcp]\\nhost = \\"localhost\\"\\nport = 5353\\n```plaintext #### Command Health Check ```toml\\n[services.custom.health_check]\\ntype = \\"command\\" [services.custom.health_check.command]\\ncommand = \\"systemctl is-active myservice\\"\\nexpected_exit_code = 0\\n```plaintext #### File Health Check ```toml\\n[services.custom.health_check]\\ntype = \\"file\\" [services.custom.health_check.file]\\npath = \\"/var/run/myservice.pid\\"\\nmust_exist = true\\n```plaintext ### Health Check Configuration - `interval`: Seconds between checks (default: 10)\\n- `retries`: Max retry attempts (default: 3)\\n- `timeout`: Check timeout in seconds (default: 5) ### Continuous Monitoring ```bash\\nprovisioning services monitor orchestrator --interval 30\\n```plaintext **Output**: ```plaintext\\nStarting health monitoring for orchestrator (interval: 30s)\\nPress Ctrl+C to stop\\n2025-10-06 14:30:00 ✅ orchestrator: HTTP health check passed\\n2025-10-06 14:30:30 ✅ orchestrator: HTTP health check passed\\n2025-10-06 14:31:00 ✅ orchestrator: HTTP health check passed\\n```plaintext --- ## Dependency Management ### Dependency Graph Services can depend on other services: ```toml\\n[services.control-center]\\ndependencies = [\\"orchestrator\\"] [services.api-gateway]\\ndependencies = [\\"orchestrator\\", \\"control-center\\", \\"mcp-server\\"]\\n```plaintext ### Startup Order Services start in topological order: ```plaintext\\norchestrator (order: 10) └─> control-center (order: 20) └─> api-gateway (order: 45)\\n```plaintext ### Dependency Resolution Automatic dependency resolution when starting services: ```bash\\n# Starting control-center automatically starts orchestrator first\\nprovisioning services start control-center\\n```plaintext **Output**: ```plaintext\\nStarting dependency: orchestrator\\n✅ Started orchestrator with PID 12345\\nWaiting for orchestrator to become healthy...\\n✅ Service orchestrator is healthy\\nStarting service: control-center\\n✅ Started control-center with PID 12346\\n✅ Service 
control-center is healthy\\n```plaintext ### Conflicts Services can conflict with each other: ```toml\\n[services.coredns]\\nconflicts = [\\"dnsmasq\\", \\"systemd-resolved\\"]\\n```plaintext Attempting to start a conflicting service will fail: ```bash\\nprovisioning services start coredns\\n```plaintext **Output**: ```plaintext\\n❌ Pre-flight check failed: conflicts\\nConflicting services running: dnsmasq\\n```plaintext ### Reverse Dependencies Check which services depend on a service: ```bash\\nprovisioning services dependencies orchestrator\\n```plaintext **Output**: ```plaintext\\n## orchestrator\\n- Type: platform\\n- Category: orchestration\\n- Required by: - control-center - mcp-server - api-gateway\\n```plaintext ### Safe Stop System prevents stopping services with running dependents: ```bash\\nprovisioning services stop orchestrator\\n```plaintext **Output**: ```plaintext\\n❌ Cannot stop orchestrator: Dependent services running: control-center, mcp-server, api-gateway Use --force to stop anyway\\n```plaintext --- ## Pre-flight Checks ### Purpose Pre-flight checks ensure services can start successfully before attempting to start them. ### Check Types 1. **Prerequisites**: Binary exists, Docker running, etc.\\n2. **Conflicts**: No conflicting services running\\n3. **Dependencies**: All dependencies available ### Automatic Checks Pre-flight checks run automatically when starting services: ```bash\\nprovisioning services start orchestrator\\n```plaintext **Check Process**: ```plaintext\\nRunning pre-flight checks for orchestrator...\\n✅ Binary found: /Users/user/.provisioning/bin/provisioning-orchestrator\\n✅ No conflicts detected\\n✅ All dependencies available\\nStarting service: orchestrator\\n```plaintext ### Manual Validation Validate all services: ```bash\\nprovisioning services validate\\n```plaintext Validate specific service: ```bash\\nprovisioning services status orchestrator\\n```plaintext ### Auto-Start Services with `auto_start = true` can be started automatically when needed: ```bash\\n# Orchestrator auto-starts if needed for server operations\\nprovisioning server create\\n```plaintext **Output**: ```plaintext\\nStarting required services...\\n✅ Orchestrator started\\nCreating server...\\n```plaintext --- ## Troubleshooting ### Service Won\'t Start **Check prerequisites**: ```bash\\nprovisioning services validate\\nprovisioning services status \\n```plaintext **Common issues**: - Binary not found: Check `binary_path` in config\\n- Docker not running: Start Docker daemon\\n- Port already in use: Check for conflicting processes\\n- Dependencies not running: Start dependencies first ### Service Health Check Failing **View health status**: ```bash\\nprovisioning services health \\n```plaintext **Check logs**: ```bash\\nprovisioning services logs --follow\\n```plaintext **Common issues**: - Service not fully initialized: Wait longer or increase `start_timeout`\\n- Wrong health check endpoint: Verify endpoint in config\\n- Network issues: Check firewall, port bindings ### Dependency Issues **View dependency tree**: ```bash\\nprovisioning services dependencies \\n```plaintext **Check dependency status**: ```bash\\nprovisioning services status \\n```plaintext **Start with dependencies**: ```bash\\nprovisioning platform start \\n```plaintext ### Circular Dependencies **Validate dependency graph**: ```bash\\n# This is done automatically but you can check manually\\nnu -c \\"use lib_provisioning/services/mod.nu *; validate-dependency-graph\\"\\n```plaintext ### PID File Stale If 
service reports running but isn\'t: ```bash\\n# Manual cleanup\\nrm ~/.provisioning/services/pids/.pid # Force restart\\nprovisioning services restart \\n```plaintext ### Port Conflicts **Find process using port**: ```bash\\nlsof -i :9090\\n```plaintext **Kill conflicting process**: ```bash\\nkill \\n```plaintext ### Docker Issues **Check Docker status**: ```bash\\ndocker ps\\ndocker info\\n```plaintext **View container logs**: ```bash\\ndocker logs provisioning-\\n```plaintext **Restart Docker daemon**: ```bash\\n# macOS\\nkillall Docker && open /Applications/Docker.app # Linux\\nsystemctl restart docker\\n```plaintext ### Service Logs **View recent logs**: ```bash\\ntail -f ~/.provisioning/services/logs/.log\\n```plaintext **Search logs**: ```bash\\ngrep \\"ERROR\\" ~/.provisioning/services/logs/.log\\n```plaintext --- ## Advanced Usage ### Custom Service Registration Add custom services by editing `provisioning/config/services.toml`. ### Integration with Workflows Services automatically start when required by workflows: ```bash\\n# Orchestrator starts automatically if not running\\nprovisioning workflow submit my-workflow\\n```plaintext ### CI/CD Integration ```yaml\\n# GitLab CI\\nbefore_script: - provisioning platform start orchestrator - provisioning services health orchestrator test: script: - provisioning test quick kubernetes\\n```plaintext ### Monitoring Integration Services can integrate with monitoring systems via health endpoints. --- ## Related Documentation - Orchestrator README\\n- [Test Environment Guide](test-environment-guide.md)\\n- [Workflow Management](workflow-management.md) --- ## Quick Reference **Version**: 1.0.0 ### Platform Commands (Manage All Services) ```bash\\n# Start all auto-start services\\nprovisioning platform start # Start specific services with dependencies\\nprovisioning platform start control-center mcp-server # Stop all running services\\nprovisioning platform stop # Stop specific services\\nprovisioning platform stop orchestrator # Restart services\\nprovisioning platform restart # Show platform status\\nprovisioning platform status # Check platform health\\nprovisioning platform health # View service logs\\nprovisioning platform logs orchestrator --follow\\n```plaintext --- ### Service Commands (Individual Services) ```bash\\n# List all services\\nprovisioning services list # List only running services\\nprovisioning services list --running # Filter by category\\nprovisioning services list --category orchestration # Service status\\nprovisioning services status orchestrator # Start service (with pre-flight checks)\\nprovisioning services start orchestrator # Force start (skip checks)\\nprovisioning services start orchestrator --force # Stop service\\nprovisioning services stop orchestrator # Force stop (ignore dependents)\\nprovisioning services stop orchestrator --force # Restart service\\nprovisioning services restart orchestrator # Check health\\nprovisioning services health orchestrator # View logs\\nprovisioning services logs orchestrator --follow --lines 100 # Monitor health continuously\\nprovisioning services monitor orchestrator --interval 30\\n```plaintext --- ### Dependency & Validation ```bash\\n# View dependency graph\\nprovisioning services dependencies # View specific service dependencies\\nprovisioning services dependencies control-center # Validate all services\\nprovisioning services validate # Check readiness\\nprovisioning services readiness # Check required services for operation\\nprovisioning services check 
server\\n```plaintext --- ### Registered Services | Service | Port | Type | Auto-Start | Dependencies |\\n|---------|------|------|------------|--------------|\\n| orchestrator | 8080 | Platform | Yes | - |\\n| control-center | 8081 | Platform | No | orchestrator |\\n| coredns | 5353 | Infrastructure | No | - |\\n| gitea | 3000, 222 | Infrastructure | No | - |\\n| oci-registry | 5000 | Infrastructure | No | - |\\n| mcp-server | 8082 | Platform | No | orchestrator |\\n| api-gateway | 8083 | Platform | No | orchestrator, control-center, mcp-server | --- ### Docker Compose ```bash\\n# Start all services\\ncd provisioning/platform\\ndocker-compose up -d # Start specific services\\ndocker-compose up -d orchestrator control-center # Check status\\ndocker-compose ps # View logs\\ndocker-compose logs -f orchestrator # Stop all services\\ndocker-compose down # Stop and remove volumes\\ndocker-compose down -v\\n```plaintext --- ### Service State Directories ```plaintext\\n~/.provisioning/services/\\n├── pids/ # Process ID files\\n├── state/ # Service state (JSON)\\n└── logs/ # Service logs\\n```plaintext --- ### Health Check Endpoints | Service | Endpoint | Type |\\n|---------|----------|------|\\n| orchestrator | | HTTP |\\n| control-center | | HTTP |\\n| coredns | localhost:5353 | TCP |\\n| gitea | | HTTP |\\n| oci-registry | | HTTP |\\n| mcp-server | | HTTP |\\n| api-gateway | | HTTP | --- ### Common Workflows #### Start Platform for Development ```bash\\n# Start core services\\nprovisioning platform start orchestrator # Check status\\nprovisioning platform status # Check health\\nprovisioning platform health\\n```plaintext #### Start Full Platform Stack ```bash\\n# Use Docker Compose\\ncd provisioning/platform\\ndocker-compose up -d # Verify\\ndocker-compose ps\\nprovisioning platform health\\n```plaintext #### Debug Service Issues ```bash\\n# Check service status\\nprovisioning services status # View logs\\nprovisioning services logs --follow # Check health\\nprovisioning services health # Validate prerequisites\\nprovisioning services validate # Restart service\\nprovisioning services restart \\n```plaintext #### Safe Service Shutdown ```bash\\n# Check dependents\\nnu -c \\"use lib_provisioning/services/mod.nu *; can-stop-service orchestrator\\" # Stop with dependency check\\nprovisioning services stop orchestrator # Force stop if needed\\nprovisioning services stop orchestrator --force\\n```plaintext --- ### Troubleshooting #### Service Won\'t Start ```bash\\n# 1. Check prerequisites\\nprovisioning services validate # 2. View detailed status\\nprovisioning services status # 3. Check logs\\nprovisioning services logs # 4. 
Verify binary/image exists\\nls ~/.provisioning/bin/\\ndocker images | grep \\n```plaintext #### Health Check Failing ```bash\\n# Check endpoint manually\\ncurl http://localhost:9090/health # View health details\\nprovisioning services health # Monitor continuously\\nprovisioning services monitor --interval 10\\n```plaintext #### PID File Stale ```bash\\n# Remove stale PID file\\nrm ~/.provisioning/services/pids/.pid # Restart service\\nprovisioning services restart \\n```plaintext #### Port Already in Use ```bash\\n# Find process using port\\nlsof -i :9090 # Kill process\\nkill # Restart service\\nprovisioning services start \\n```plaintext --- ### Integration with Operations #### Server Operations ```bash\\n# Orchestrator auto-starts if needed\\nprovisioning server create # Manual check\\nprovisioning services check server\\n```plaintext #### Workflow Operations ```bash\\n# Orchestrator auto-starts\\nprovisioning workflow submit my-workflow # Check status\\nprovisioning services status orchestrator\\n```plaintext #### Test Operations ```bash\\n# Orchestrator required for test environments\\nprovisioning test quick kubernetes # Pre-flight check\\nprovisioning services check test-env\\n```plaintext --- ### Advanced Usage #### Custom Service Startup Order Services start based on: 1. Dependency order (topological sort)\\n2. `start_order` field (lower = earlier) #### Auto-Start Configuration Edit `provisioning/config/services.toml`: ```toml\\n[services..startup]\\nauto_start = true # Enable auto-start\\nstart_timeout = 30 # Timeout in seconds\\nstart_order = 10 # Startup priority\\n```plaintext #### Health Check Configuration ```toml\\n[services..health_check]\\ntype = \\"http\\" # http, tcp, command, file\\ninterval = 10 # Seconds between checks\\nretries = 3 # Max retry attempts\\ntimeout = 5 # Check timeout [services..health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200\\n```plaintext --- ### Key Files - **Service Registry**: `provisioning/config/services.toml`\\n- **KCL Schema**: `provisioning/kcl/services.k`\\n- **Docker Compose**: `provisioning/platform/docker-compose.yaml`\\n- **User Guide**: `docs/user/SERVICE_MANAGEMENT_GUIDE.md` --- ### Getting Help ```bash\\n# View documentation\\ncat docs/user/SERVICE_MANAGEMENT_GUIDE.md | less # Run verification\\nnu provisioning/core/nulib/tests/verify_services.nu # Check readiness\\nprovisioning services readiness\\n```plaintext --- **Quick Tip**: Use `--help` flag with any command for detailed usage information. 
--- **Maintained By**: Platform Team\\n**Support**: [GitHub Issues](https://github.com/your-org/provisioning/issues)","breadcrumbs":"Service Management Guide » System Architecture","id":"1131","title":"System Architecture"},"1132":{"body":"Complete guide for monitoring the 9-service platform with Prometheus, Grafana, and AlertManager Version : 1.0.0 Last Updated : 2026-01-05 Target Audience : DevOps Engineers, Platform Operators Status : Production Ready","breadcrumbs":"Monitoring & Alerting Setup » Service Monitoring & Alerting Setup","id":"1132","title":"Service Monitoring & Alerting Setup"},"1133":{"body":"This guide provides complete setup instructions for monitoring and alerting on the provisioning platform using industry-standard tools: Prometheus : Metrics collection and time-series database Grafana : Visualization and dashboarding AlertManager : Alert routing and notification","breadcrumbs":"Monitoring & Alerting Setup » Overview","id":"1133","title":"Overview"},"1134":{"body":"Services (metrics endpoints) ↓\\nPrometheus (scrapes every 30s) ↓\\nAlertManager (evaluates rules) ↓\\nNotification Channels (email, slack, pagerduty) Prometheus Data ↓\\nGrafana (queries) ↓\\nDashboards & Visualization","breadcrumbs":"Monitoring & Alerting Setup » Architecture","id":"1134","title":"Architecture"},"1135":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Prerequisites","id":"1135","title":"Prerequisites"},"1136":{"body":"# Prometheus (for metrics)\\nwget https://github.com/prometheus/prometheus/releases/download/v2.48.0/prometheus-2.48.0.linux-amd64.tar.gz\\ntar xvfz prometheus-2.48.0.linux-amd64.tar.gz\\nsudo mv prometheus-2.48.0.linux-amd64 /opt/prometheus # Grafana (for dashboards)\\nsudo apt-get install -y grafana-server # AlertManager (for alerting)\\nwget https://github.com/prometheus/alertmanager/releases/download/v0.26.0/alertmanager-0.26.0.linux-amd64.tar.gz\\ntar xvfz alertmanager-0.26.0.linux-amd64.tar.gz\\nsudo mv alertmanager-0.26.0.linux-amd64 /opt/alertmanager","breadcrumbs":"Monitoring & Alerting Setup » Software Requirements","id":"1136","title":"Software Requirements"},"1137":{"body":"CPU : 2+ cores Memory : 4 GB minimum, 8 GB recommended Disk : 100 GB for metrics retention (30 days) Network : Access to all service endpoints","breadcrumbs":"Monitoring & Alerting Setup » System Requirements","id":"1137","title":"System Requirements"},"1138":{"body":"Component Port Purpose Prometheus 9090 Web UI & API Grafana 3000 Web UI AlertManager 9093 Web UI & API Node Exporter 9100 System metrics","breadcrumbs":"Monitoring & Alerting Setup » Ports","id":"1138","title":"Ports"},"1139":{"body":"All platform services expose metrics on the /metrics endpoint: # Health and metrics endpoints for each service\\ncurl http://localhost:8200/health # Vault health\\ncurl http://localhost:8200/metrics # Vault metrics (Prometheus format) curl http://localhost:8081/health # Registry health\\ncurl http://localhost:8081/metrics # Registry metrics curl http://localhost:8083/health # RAG health\\ncurl http://localhost:8083/metrics # RAG metrics curl http://localhost:8082/health # AI Service health\\ncurl http://localhost:8082/metrics # AI Service metrics curl http://localhost:9090/health # Orchestrator health\\ncurl http://localhost:9090/metrics # Orchestrator metrics curl http://localhost:8080/health # Control Center health\\ncurl http://localhost:8080/metrics # Control Center metrics curl http://localhost:8084/health # MCP Server health\\ncurl http://localhost:8084/metrics # MCP Server 
metrics","breadcrumbs":"Monitoring & Alerting Setup » Service Metrics Endpoints","id":"1139","title":"Service Metrics Endpoints"},"114":{"body":"CPU : 4 cores RAM : 8GB Disk : 50GB available space Network : Reliable internet connection","breadcrumbs":"Prerequisites » Recommended Requirements (Multi-User Mode)","id":"114","title":"Recommended Requirements (Multi-User Mode)"},"1140":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Prometheus Configuration","id":"1140","title":"Prometheus Configuration"},"1141":{"body":"# /etc/prometheus/prometheus.yml\\nglobal: scrape_interval: 30s evaluation_interval: 30s external_labels: monitor: \'provisioning-platform\' environment: \'production\' alerting: alertmanagers: - static_configs: - targets: - localhost:9093 rule_files: - \'/etc/prometheus/rules/*.yml\' scrape_configs: # Core Platform Services - job_name: \'vault-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8200\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'vault-service\' - job_name: \'extension-registry\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8081\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'registry\' - job_name: \'rag-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8083\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'rag\' - job_name: \'ai-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8082\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'ai-service\' - job_name: \'orchestrator\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:9090\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'orchestrator\' - job_name: \'control-center\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8080\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'control-center\' - job_name: \'mcp-server\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8084\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'mcp-server\' # System Metrics (Node Exporter) - job_name: \'node\' static_configs: - targets: [\'localhost:9100\'] labels: instance: \'system\' # SurrealDB (if multiuser/enterprise) - job_name: \'surrealdb\' metrics_path: \'/metrics\' static_configs: - targets: [\'surrealdb:8000\'] # Etcd (if enterprise) - job_name: \'etcd\' metrics_path: \'/metrics\' static_configs: - targets: [\'etcd:2379\']","breadcrumbs":"Monitoring & Alerting Setup » 1. Create Prometheus Config","id":"1141","title":"1. 
Create Prometheus Config"},"1142":{"body":"# Create necessary directories\\nsudo mkdir -p /etc/prometheus /var/lib/prometheus\\nsudo mkdir -p /etc/prometheus/rules # Start Prometheus\\ncd /opt/prometheus\\nsudo ./prometheus --config.file=/etc/prometheus/prometheus.yml \\\\ --storage.tsdb.path=/var/lib/prometheus \\\\ --web.console.templates=consoles \\\\ --web.console.libraries=console_libraries # Or as systemd service\\nsudo tee /etc/systemd/system/prometheus.service > /dev/null << EOF\\n[Unit]\\nDescription=Prometheus\\nWants=network-online.target\\nAfter=network-online.target [Service]\\nUser=prometheus\\nType=simple\\nExecStart=/opt/prometheus/prometheus \\\\ --config.file=/etc/prometheus/prometheus.yml \\\\ --storage.tsdb.path=/var/lib/prometheus Restart=on-failure\\nRestartSec=10 [Install]\\nWantedBy=multi-user.target\\nEOF sudo systemctl daemon-reload\\nsudo systemctl enable prometheus\\nsudo systemctl start prometheus","breadcrumbs":"Monitoring & Alerting Setup » 2. Start Prometheus","id":"1142","title":"2. Start Prometheus"},"1143":{"body":"# Check Prometheus is running\\ncurl -s http://localhost:9090/-/healthy # List scraped targets\\ncurl -s http://localhost:9090/api/v1/targets | jq . # Query test metric\\ncurl -s \'http://localhost:9090/api/v1/query?query=up\' | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Verify Prometheus","id":"1143","title":"3. Verify Prometheus"},"1144":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Alert Rules Configuration","id":"1144","title":"Alert Rules Configuration"},"1145":{"body":"# /etc/prometheus/rules/platform-alerts.yml\\ngroups: - name: platform_availability interval: 30s rules: - alert: ServiceDown expr: up{job=~\\"vault-service|registry|rag|ai-service|orchestrator\\"} == 0 for: 5m labels: severity: critical service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} is DOWN\\" description: \\"{{ $labels.job }} has been down for 5+ minutes\\" - alert: ServiceSlowResponse expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 1 for: 5m labels: severity: warning service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} slow response times\\" description: \\"95th percentile latency above 1 second\\" - name: platform_errors interval: 30s rules: - alert: HighErrorRate expr: rate(http_requests_total{status=~\\"5..\\"}[5m]) > 0.05 for: 5m labels: severity: warning service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} high error rate\\" description: \\"Error rate above 5% for 5 minutes\\" - alert: DatabaseConnectionError expr: increase(database_connection_errors_total[5m]) > 10 for: 2m labels: severity: critical component: database annotations: summary: \\"Database connection failures detected\\" description: \\"{{ $value }} connection errors in last 5 minutes\\" - alert: QueueBacklog expr: orchestrator_queue_depth > 1000 for: 5m labels: severity: warning component: orchestrator annotations: summary: \\"Orchestrator queue backlog growing\\" description: \\"Queue depth: {{ $value }} tasks\\" - name: platform_resources interval: 30s rules: - alert: HighMemoryUsage expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9 for: 5m labels: severity: warning resource: memory annotations: summary: \\"{{ $labels.container_name }} memory usage critical\\" description: \\"Memory usage: {{ $value | humanizePercentage }}\\" - alert: HighDiskUsage expr: node_filesystem_avail_bytes{mountpoint=\\"/\\"} / node_filesystem_size_bytes < 0.1 for: 5m labels: 
severity: warning resource: disk annotations: summary: \\"Disk space critically low\\" description: \\"Available disk space: {{ $value | humanizePercentage }}\\" - alert: HighCPUUsage expr: (1 - avg(rate(node_cpu_seconds_total{mode=\\"idle\\"}[5m])) by (instance)) > 0.9 for: 10m labels: severity: warning resource: cpu annotations: summary: \\"High CPU usage detected\\" description: \\"CPU usage: {{ $value | humanizePercentage }}\\" - alert: DiskIOLatency expr: node_disk_io_time_seconds_total > 100 for: 5m labels: severity: warning resource: disk annotations: summary: \\"High disk I/O latency\\" description: \\"I/O latency: {{ $value }}ms\\" - name: platform_network interval: 30s rules: - alert: HighNetworkLatency expr: probe_duration_seconds > 0.5 for: 5m labels: severity: warning component: network annotations: summary: \\"High network latency detected\\" description: \\"Latency: {{ $value }}ms\\" - alert: PacketLoss expr: node_network_transmit_errors_total > 100 for: 5m labels: severity: warning component: network annotations: summary: \\"Packet loss detected\\" description: \\"Transmission errors: {{ $value }}\\" - name: platform_services interval: 30s rules: - alert: VaultSealed expr: vault_core_unsealed == 0 for: 1m labels: severity: critical service: vault annotations: summary: \\"Vault is sealed\\" description: \\"Vault instance is sealed and requires unseal operation\\" - alert: RegistryAuthError expr: increase(registry_auth_failures_total[5m]) > 5 for: 2m labels: severity: warning service: registry annotations: summary: \\"Registry authentication failures\\" description: \\"{{ $value }} auth failures in last 5 minutes\\" - alert: RAGVectorDBDown expr: rag_vectordb_connection_status == 0 for: 2m labels: severity: critical service: rag annotations: summary: \\"RAG Vector Database disconnected\\" description: \\"Vector DB connection lost\\" - alert: AIServiceMCPError expr: increase(ai_service_mcp_errors_total[5m]) > 10 for: 2m labels: severity: warning service: ai_service annotations: summary: \\"AI Service MCP integration errors\\" description: \\"{{ $value }} errors in last 5 minutes\\" - alert: OrchestratorLeaderElectionIssue expr: orchestrator_leader_elected == 0 for: 5m labels: severity: critical service: orchestrator annotations: summary: \\"Orchestrator leader election failed\\" description: \\"No leader elected in cluster\\"","breadcrumbs":"Monitoring & Alerting Setup » 1. Create Alert Rules","id":"1145","title":"1. Create Alert Rules"},"1146":{"body":"# Check rule syntax\\n/opt/prometheus/promtool check rules /etc/prometheus/rules/platform-alerts.yml # Reload Prometheus with new rules (without restart)\\ncurl -X POST http://localhost:9090/-/reload","breadcrumbs":"Monitoring & Alerting Setup » 2. Validate Alert Rules","id":"1146","title":"2. 
Validate Alert Rules"},"1147":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » AlertManager Configuration","id":"1147","title":"AlertManager Configuration"},"1148":{"body":"# /etc/alertmanager/alertmanager.yml\\nglobal: resolve_timeout: 5m slack_api_url: \'YOUR_SLACK_WEBHOOK_URL\' pagerduty_url: \'https://events.pagerduty.com/v2/enqueue\' route: receiver: \'platform-notifications\' group_by: [\'alertname\', \'service\', \'severity\'] group_wait: 10s group_interval: 10s repeat_interval: 12h routes: # Critical alerts go to PagerDuty - match: severity: critical receiver: \'pagerduty-critical\' group_wait: 0s repeat_interval: 5m # Warnings go to Slack - match: severity: warning receiver: \'slack-warnings\' repeat_interval: 1h # Service-specific routing - match: service: vault receiver: \'vault-team\' group_by: [\'service\', \'severity\'] - match: service: orchestrator receiver: \'orchestrator-team\' group_by: [\'service\', \'severity\'] receivers: - name: \'platform-notifications\' slack_configs: - channel: \'#platform-alerts\' title: \'Platform Alert\' text: \'{{ range .Alerts }}{{ .Annotations.description }}{{ end }}\' send_resolved: true - name: \'slack-warnings\' slack_configs: - channel: \'#platform-warnings\' title: \'Warning: {{ .GroupLabels.alertname }}\' text: \'{{ range .Alerts }}{{ .Annotations.description }}{{ end }}\' - name: \'pagerduty-critical\' pagerduty_configs: - service_key: \'YOUR_PAGERDUTY_SERVICE_KEY\' description: \'{{ .GroupLabels.alertname }}\' details: firing: \'{{ template \\"pagerduty.default.instances\\" .Alerts.Firing }}\' - name: \'vault-team\' email_configs: - to: \'vault-team@company.com\' from: \'alertmanager@company.com\' smarthost: \'smtp.company.com:587\' auth_username: \'alerts@company.com\' auth_password: \'PASSWORD\' headers: Subject: \'Vault Alert: {{ .GroupLabels.alertname }}\' - name: \'orchestrator-team\' email_configs: - to: \'orchestrator-team@company.com\' from: \'alertmanager@company.com\' smarthost: \'smtp.company.com:587\' inhibit_rules: # Don\'t alert on errors if service is already down - source_match: severity: \'critical\' alertname: \'ServiceDown\' target_match_re: severity: \'warning|info\' equal: [\'service\', \'instance\'] # Don\'t alert on resource exhaustion if service is down - source_match: alertname: \'ServiceDown\' target_match_re: alertname: \'HighMemoryUsage|HighCPUUsage\' equal: [\'instance\']","breadcrumbs":"Monitoring & Alerting Setup » 1. Create AlertManager Config","id":"1148","title":"1. Create AlertManager Config"},"1149":{"body":"cd /opt/alertmanager\\nsudo ./alertmanager --config.file=/etc/alertmanager/alertmanager.yml \\\\ --storage.path=/var/lib/alertmanager # Or as systemd service\\nsudo tee /etc/systemd/system/alertmanager.service > /dev/null << EOF\\n[Unit]\\nDescription=AlertManager\\nWants=network-online.target\\nAfter=network-online.target [Service]\\nUser=alertmanager\\nType=simple\\nExecStart=/opt/alertmanager/alertmanager \\\\ --config.file=/etc/alertmanager/alertmanager.yml \\\\ --storage.path=/var/lib/alertmanager Restart=on-failure\\nRestartSec=10 [Install]\\nWantedBy=multi-user.target\\nEOF sudo systemctl daemon-reload\\nsudo systemctl enable alertmanager\\nsudo systemctl start alertmanager","breadcrumbs":"Monitoring & Alerting Setup » 2. Start AlertManager","id":"1149","title":"2. 
Start AlertManager"},"115":{"body":"CPU : 16 cores RAM : 32GB Disk : 500GB available space (SSD recommended) Network : High-bandwidth connection with static IP","breadcrumbs":"Prerequisites » Production Requirements (Enterprise Mode)","id":"115","title":"Production Requirements (Enterprise Mode)"},"1150":{"body":"# Check AlertManager is running\\ncurl -s http://localhost:9093/-/healthy # List active alerts\\ncurl -s http://localhost:9093/api/v1/alerts | jq . # Check configuration\\ncurl -s http://localhost:9093/api/v1/status | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Verify AlertManager","id":"1150","title":"3. Verify AlertManager"},"1151":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Grafana Dashboards","id":"1151","title":"Grafana Dashboards"},"1152":{"body":"# Install Grafana\\nsudo apt-get install -y grafana-server # Start Grafana\\nsudo systemctl enable grafana-server\\nsudo systemctl start grafana-server # Access at http://localhost:3000\\n# Default: admin/admin","breadcrumbs":"Monitoring & Alerting Setup » 1. Install Grafana","id":"1152","title":"1. Install Grafana"},"1153":{"body":"# Via API\\ncurl -X POST http://localhost:3000/api/datasources \\\\ -H \\"Content-Type: application/json\\" \\\\ -u admin:admin \\\\ -d \'{ \\"name\\": \\"Prometheus\\", \\"type\\": \\"prometheus\\", \\"url\\": \\"http://localhost:9090\\", \\"access\\": \\"proxy\\", \\"isDefault\\": true }\'","breadcrumbs":"Monitoring & Alerting Setup » 2. Add Prometheus Data Source","id":"1153","title":"2. Add Prometheus Data Source"},"1154":{"body":"{ \\"dashboard\\": { \\"title\\": \\"Platform Overview\\", \\"description\\": \\"9-service provisioning platform metrics\\", \\"tags\\": [\\"platform\\", \\"overview\\"], \\"timezone\\": \\"browser\\", \\"panels\\": [ { \\"title\\": \\"Service Status\\", \\"type\\": \\"stat\\", \\"targets\\": [ { \\"expr\\": \\"up{job=~\\\\\\"vault-service|registry|rag|ai-service|orchestrator|control-center|mcp-server\\\\\\"}\\" } ], \\"fieldConfig\\": { \\"defaults\\": { \\"mappings\\": [ { \\"type\\": \\"value\\", \\"value\\": \\"1\\", \\"text\\": \\"UP\\" }, { \\"type\\": \\"value\\", \\"value\\": \\"0\\", \\"text\\": \\"DOWN\\" } ] } } }, { \\"title\\": \\"Request Rate\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"rate(http_requests_total[5m])\\" } ] }, { \\"title\\": \\"Error Rate\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"rate(http_requests_total{status=~\\\\\\"5..\\\\\\"}[5m])\\" } ] }, { \\"title\\": \\"Latency (p95)\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))\\" } ] }, { \\"title\\": \\"Memory Usage\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"container_memory_usage_bytes / 1024 / 1024\\" } ] }, { \\"title\\": \\"Disk Usage\\", \\"type\\": \\"gauge\\", \\"targets\\": [ { \\"expr\\": \\"(1 - (node_filesystem_avail_bytes / node_filesystem_size_bytes)) * 100\\" } ] } ] }\\n}","breadcrumbs":"Monitoring & Alerting Setup » 3. Create Platform Overview Dashboard","id":"1154","title":"3. Create Platform Overview Dashboard"},"1155":{"body":"# Save dashboard JSON to file\\ncat > platform-overview.json << \'EOF\'\\n{ \\"dashboard\\": { ... }\\n}\\nEOF # Import dashboard\\ncurl -X POST http://localhost:3000/api/dashboards/db \\\\ -H \\"Content-Type: application/json\\" \\\\ -u admin:admin \\\\ -d @platform-overview.json","breadcrumbs":"Monitoring & Alerting Setup » 4. Import Dashboard via API","id":"1155","title":"4. 
Import Dashboard via API"},"1156":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Health Check Monitoring","id":"1156","title":"Health Check Monitoring"},"1157":{"body":"#!/bin/bash\\n# scripts/check-service-health.sh SERVICES=( \\"vault:8200\\" \\"registry:8081\\" \\"rag:8083\\" \\"ai-service:8082\\" \\"orchestrator:9090\\" \\"control-center:8080\\" \\"mcp-server:8084\\"\\n) UNHEALTHY=0 for service in \\"${SERVICES[@]}\\"; do IFS=\':\' read -r name port <<< \\"$service\\" response=$(curl -s -o /dev/null -w \\"%{http_code}\\" http://localhost:$port/health) if [ \\"$response\\" = \\"200\\" ]; then echo \\"✓ $name is healthy\\" else echo \\"✗ $name is UNHEALTHY (HTTP $response)\\" ((UNHEALTHY++)) fi\\ndone if [ $UNHEALTHY -gt 0 ]; then echo \\"\\" echo \\"WARNING: $UNHEALTHY service(s) unhealthy\\" exit 1\\nfi exit 0","breadcrumbs":"Monitoring & Alerting Setup » 1. Service Health Check Script","id":"1157","title":"1. Service Health Check Script"},"1158":{"body":"# For Kubernetes deployments\\napiVersion: v1\\nkind: Pod\\nmetadata: name: vault-service\\nspec: containers: - name: vault-service image: vault-service:latest livenessProbe: httpGet: path: /health port: 8200 initialDelaySeconds: 30 periodSeconds: 10 failureThreshold: 3 readinessProbe: httpGet: path: /health port: 8200 initialDelaySeconds: 10 periodSeconds: 5 failureThreshold: 2","breadcrumbs":"Monitoring & Alerting Setup » 2. Liveness Probe Configuration","id":"1158","title":"2. Liveness Probe Configuration"},"1159":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Log Aggregation (ELK Stack)","id":"1159","title":"Log Aggregation (ELK Stack)"},"116":{"body":"","breadcrumbs":"Prerequisites » Operating System","id":"116","title":"Operating System"},"1160":{"body":"# Install Elasticsearch\\nwget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.0-linux-x86_64.tar.gz\\ntar xvfz elasticsearch-8.11.0-linux-x86_64.tar.gz\\ncd elasticsearch-8.11.0/bin\\n./elasticsearch","breadcrumbs":"Monitoring & Alerting Setup » 1. Elasticsearch Setup","id":"1160","title":"1. Elasticsearch Setup"},"1161":{"body":"# /etc/filebeat/filebeat.yml\\nfilebeat.inputs: - type: log enabled: true paths: - /var/log/provisioning/*.log fields: service: provisioning-platform environment: production output.elasticsearch: hosts: [\\"localhost:9200\\"] username: \\"elastic\\" password: \\"changeme\\" logging.level: info\\nlogging.to_files: true\\nlogging.files: path: /var/log/filebeat","breadcrumbs":"Monitoring & Alerting Setup » 2. Filebeat Configuration","id":"1161","title":"2. Filebeat Configuration"},"1162":{"body":"# Access at http://localhost:5601\\n# Create index pattern: provisioning-*\\n# Create visualizations for:\\n# - Error rate over time\\n# - Service availability\\n# - Performance metrics\\n# - Request volume","breadcrumbs":"Monitoring & Alerting Setup » 3. Kibana Dashboard","id":"1162","title":"3. 
Kibana Dashboard"},"1163":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Monitoring Dashboard Queries","id":"1163","title":"Monitoring Dashboard Queries"},"1164":{"body":"# Service availability (last hour)\\navg(increase(up[1h])) by (job) # Request rate per service\\nsum(rate(http_requests_total[5m])) by (job) # Error rate per service\\nsum(rate(http_requests_total{status=~\\"5..\\"}[5m])) by (job) # Latency percentiles\\nhistogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))\\nhistogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m])) # Memory usage per service\\ncontainer_memory_usage_bytes / 1024 / 1024 / 1024 # CPU usage per service\\nrate(container_cpu_usage_seconds_total[5m]) * 100 # Disk I/O operations\\nrate(node_disk_io_time_seconds_total[5m]) # Network throughput\\nrate(node_network_transmit_bytes_total[5m]) # Queue depth (Orchestrator)\\norchestrator_queue_depth # Task processing rate\\nrate(orchestrator_tasks_total[5m]) # Task failure rate\\nrate(orchestrator_tasks_failed_total[5m]) # Cache hit ratio\\nrate(service_cache_hits_total[5m]) / (rate(service_cache_hits_total[5m]) + rate(service_cache_misses_total[5m])) # Database connection pool status\\ndatabase_connection_pool_usage{job=\\"orchestrator\\"} # TLS certificate expiration\\n(ssl_certificate_expiry - time()) / 86400","breadcrumbs":"Monitoring & Alerting Setup » Common Prometheus Queries","id":"1164","title":"Common Prometheus Queries"},"1165":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Alert Testing","id":"1165","title":"Alert Testing"},"1166":{"body":"# Manually fire test alert\\ncurl -X POST http://localhost:9093/api/v1/alerts \\\\ -H \'Content-Type: application/json\' \\\\ -d \'[ { \\"status\\": \\"firing\\", \\"labels\\": { \\"alertname\\": \\"TestAlert\\", \\"severity\\": \\"critical\\" }, \\"annotations\\": { \\"summary\\": \\"This is a test alert\\", \\"description\\": \\"Test alert to verify notification routing\\" } } ]\'","breadcrumbs":"Monitoring & Alerting Setup » 1. Test Alert Firing","id":"1166","title":"1. Test Alert Firing"},"1167":{"body":"# Stop a service to trigger ServiceDown alert\\npkill -9 vault-service # Within 5 minutes, alert should fire\\n# Check AlertManager UI: http://localhost:9093 # Restart service\\ncargo run --release -p vault-service & # Alert should resolve after service is back up","breadcrumbs":"Monitoring & Alerting Setup » 2. Stop Service to Trigger Alert","id":"1167","title":"2. Stop Service to Trigger Alert"},"1168":{"body":"# Generate request load\\nab -n 10000 -c 100 http://localhost:9090/api/v1/health # Monitor error rate in Prometheus\\ncurl -s \'http://localhost:9090/api/v1/query?query=rate(http_requests_total{status=~\\"5..\\"}[5m])\' | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Generate Load to Test Error Alerts","id":"1168","title":"3. 
Generate Load to Test Error Alerts"},"1169":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Backup & Retention Policies","id":"1169","title":"Backup & Retention Policies"},"117":{"body":"macOS : 12.0 (Monterey) or later Linux : Ubuntu 22.04 LTS or later Fedora 38 or later Debian 12 (Bookworm) or later RHEL 9 or later","breadcrumbs":"Prerequisites » Supported Platforms","id":"117","title":"Supported Platforms"},"1170":{"body":"#!/bin/bash\\n# scripts/backup-prometheus-data.sh BACKUP_DIR=\\"/backups/prometheus\\"\\nRETENTION_DAYS=30 # Create snapshot\\ncurl -X POST http://localhost:9090/api/v1/admin/tsdb/snapshot # Backup snapshot\\nSNAPSHOT=$(ls -t /var/lib/prometheus/snapshots | head -1)\\ntar -czf \\"$BACKUP_DIR/prometheus-$SNAPSHOT.tar.gz\\" \\\\ \\"/var/lib/prometheus/snapshots/$SNAPSHOT\\" # Upload to S3\\naws s3 cp \\"$BACKUP_DIR/prometheus-$SNAPSHOT.tar.gz\\" \\\\ s3://backups/prometheus/ # Clean old backups\\nfind \\"$BACKUP_DIR\\" -mtime +$RETENTION_DAYS -delete","breadcrumbs":"Monitoring & Alerting Setup » 1. Prometheus Data Backup","id":"1170","title":"1. Prometheus Data Backup"},"1171":{"body":"# Keep metrics for 15 days\\n/opt/prometheus/prometheus \\\\ --storage.tsdb.retention.time=15d \\\\ --storage.tsdb.retention.size=50GB","breadcrumbs":"Monitoring & Alerting Setup » 2. Prometheus Retention Configuration","id":"1171","title":"2. Prometheus Retention Configuration"},"1172":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Maintenance & Troubleshooting","id":"1172","title":"Maintenance & Troubleshooting"},"1173":{"body":"Prometheus Won\'t Scrape Service # Check configuration\\n/opt/prometheus/promtool check config /etc/prometheus/prometheus.yml # Verify service is accessible\\ncurl http://localhost:8200/metrics # Check Prometheus targets\\ncurl -s http://localhost:9090/api/v1/targets | jq \'.data.activeTargets[] | select(.job==\\"vault-service\\")\' # Check scrape error\\ncurl -s http://localhost:9090/api/v1/targets | jq \'.data.activeTargets[] | .lastError\' AlertManager Not Sending Notifications # Verify AlertManager config\\n/opt/alertmanager/amtool config routes # Test webhook\\ncurl -X POST http://localhost:3012/ -d \'{\\"test\\": \\"alert\\"}\' # Check AlertManager logs\\njournalctl -u alertmanager -n 100 -f # Verify notification channels configured\\ncurl -s http://localhost:9093/api/v1/receivers High Memory Usage # Reduce Prometheus retention\\nprometheus --storage.tsdb.retention.time=7d --storage.tsdb.max-block-duration=2h # Disable unused scrape jobs\\n# Edit prometheus.yml and remove unused jobs # Monitor memory\\nps aux | grep prometheus | grep -v grep","breadcrumbs":"Monitoring & Alerting Setup » Common Issues","id":"1173","title":"Common Issues"},"1174":{"body":"Prometheus installed and running AlertManager installed and running Grafana installed and configured Prometheus scraping all 8 services Alert rules deployed and validated Notification channels configured (Slack, email, PagerDuty) AlertManager webhooks tested Grafana dashboards created Log aggregation stack deployed (optional) Backup scripts configured Retention policies set Health checks configured Team notified of alerting setup Runbooks created for common alerts Alert testing procedure documented","breadcrumbs":"Monitoring & Alerting Setup » Production Deployment Checklist","id":"1174","title":"Production Deployment Checklist"},"1175":{"body":"# Prometheus\\ncurl http://localhost:9090/api/v1/targets # List scrape targets\\ncurl \'http://localhost:9090/api/v1/query?query=up\' # Query 
metric\\ncurl -X POST http://localhost:9090/-/reload # Reload config # AlertManager\\ncurl http://localhost:9093/api/v1/alerts # List active alerts\\ncurl http://localhost:9093/api/v1/receivers # List receivers\\ncurl http://localhost:9093/api/v2/status # Check status # Grafana\\ncurl -u admin:admin http://localhost:3000/api/datasources # List data sources\\ncurl -u admin:admin http://localhost:3000/api/dashboards # List dashboards # Validation\\npromtool check config /etc/prometheus/prometheus.yml\\npromtool check rules /etc/prometheus/rules/platform-alerts.yml\\namtool config routes","breadcrumbs":"Monitoring & Alerting Setup » Quick Commands Reference","id":"1175","title":"Quick Commands Reference"},"1176":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Documentation & Runbooks","id":"1176","title":"Documentation & Runbooks"},"1177":{"body":"# Service Down Alert ## Detection\\nAlert fires when service is unreachable for 5+ minutes ## Immediate Actions\\n1. Check service is running: pgrep -f service-name\\n2. Check service port: ss -tlnp | grep 8200\\n3. Check service logs: tail -100 /var/log/provisioning/service.log ## Diagnosis\\n1. Service crashed: look for panic/error in logs\\n2. Port conflict: lsof -i :8200\\n3. Configuration issue: validate config file\\n4. Dependency down: check database/cache connectivity ## Remediation\\n1. Restart service: pkill service && cargo run --release -p service &\\n2. Check health: curl http://localhost:8200/health\\n3. Verify dependencies: curl http://localhost:5432/health ## Escalation\\nIf service doesn\'t recover after restart, escalate to on-call engineer","breadcrumbs":"Monitoring & Alerting Setup » Sample Runbook: Service Down","id":"1177","title":"Sample Runbook: Service Down"},"1178":{"body":"Prometheus Documentation AlertManager Documentation Grafana Documentation Platform Deployment Guide Service Management Guide Last Updated : 2026-01-05 Version : 1.0.0 Status : Production Ready ✅","breadcrumbs":"Monitoring & Alerting Setup » Resources","id":"1178","title":"Resources"},"1179":{"body":"","breadcrumbs":"Service Management Quick Reference » Service Management Quick Reference","id":"1179","title":"Service Management Quick Reference"},"118":{"body":"macOS : Xcode Command Line Tools required Homebrew recommended for package management Linux : systemd-based distribution recommended sudo access required for some operations","breadcrumbs":"Prerequisites » Platform-Specific Notes","id":"118","title":"Platform-Specific Notes"},"1180":{"body":"Version : 1.0.0 Date : 2025-10-06 Author : CoreDNS Integration Agent","breadcrumbs":"CoreDNS Guide » CoreDNS Integration Guide","id":"1180","title":"CoreDNS Integration Guide"},"1181":{"body":"Overview Installation Configuration CLI Commands Zone Management Record Management Docker Deployment Integration Troubleshooting Advanced Topics","breadcrumbs":"CoreDNS Guide » Table of Contents","id":"1181","title":"Table of Contents"},"1182":{"body":"The CoreDNS integration provides comprehensive DNS management capabilities for the provisioning system. 
It supports: Local DNS service - Run CoreDNS as binary or Docker container Dynamic DNS updates - Automatic registration of infrastructure changes Multi-zone support - Manage multiple DNS zones Provider integration - Seamless integration with orchestrator REST API - Programmatic DNS management Docker deployment - Containerized CoreDNS with docker-compose","breadcrumbs":"CoreDNS Guide » Overview","id":"1182","title":"Overview"},"1183":{"body":"✅ Automatic Server Registration - Servers automatically registered in DNS on creation ✅ Zone File Management - Create, update, and manage zone files programmatically ✅ Multiple Deployment Modes - Binary, Docker, remote, or hybrid ✅ Health Monitoring - Built-in health checks and metrics ✅ CLI Interface - Comprehensive command-line tools ✅ API Integration - REST API for external integration","breadcrumbs":"CoreDNS Guide » Key Features","id":"1183","title":"Key Features"},"1184":{"body":"","breadcrumbs":"CoreDNS Guide » Installation","id":"1184","title":"Installation"},"1185":{"body":"Nushell 0.107+ - For CLI and scripts Docker (optional) - For containerized deployment dig (optional) - For DNS queries","breadcrumbs":"CoreDNS Guide » Prerequisites","id":"1185","title":"Prerequisites"},"1186":{"body":"# Install latest version\\nprovisioning dns install # Install specific version\\nprovisioning dns install 1.11.1 # Check mode\\nprovisioning dns install --check\\n```plaintext The binary will be installed to `~/.provisioning/bin/coredns`. ### Verify Installation ```bash\\n# Check CoreDNS version\\n~/.provisioning/bin/coredns -version # Verify installation\\nls -lh ~/.provisioning/bin/coredns\\n```plaintext --- ## Configuration ### KCL Configuration Schema Add CoreDNS configuration to your infrastructure config: ```kcl\\n# In workspace/infra/{name}/config.k\\nimport provisioning.coredns as dns coredns_config: dns.CoreDNSConfig = { mode = \\"local\\" local = { enabled = True deployment_type = \\"binary\\" # or \\"docker\\" binary_path = \\"~/.provisioning/bin/coredns\\" config_path = \\"~/.provisioning/coredns/Corefile\\" zones_path = \\"~/.provisioning/coredns/zones\\" port = 5353 auto_start = True zones = [\\"provisioning.local\\", \\"workspace.local\\"] } dynamic_updates = { enabled = True api_endpoint = \\"http://localhost:9090/dns\\" auto_register_servers = True auto_unregister_servers = True ttl = 300 } upstream = [\\"8.8.8.8\\", \\"1.1.1.1\\"] default_ttl = 3600 enable_logging = True enable_metrics = True metrics_port = 9153\\n}\\n```plaintext ### Configuration Modes #### Local Mode (Binary) Run CoreDNS as a local binary process: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"local\\" local = { deployment_type = \\"binary\\" auto_start = True }\\n}\\n```plaintext #### Local Mode (Docker) Run CoreDNS in Docker container: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"local\\" local = { deployment_type = \\"docker\\" docker = { image = \\"coredns/coredns:1.11.1\\" container_name = \\"provisioning-coredns\\" restart_policy = \\"unless-stopped\\" } }\\n}\\n```plaintext #### Remote Mode Connect to external CoreDNS service: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"remote\\" remote = { enabled = True endpoints = [\\"https://dns1.example.com\\", \\"https://dns2.example.com\\"] zones = [\\"production.local\\"] verify_tls = True }\\n}\\n```plaintext #### Disabled Mode Disable CoreDNS integration: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"disabled\\"\\n}\\n```plaintext --- ## CLI Commands ### Service Management ```bash\\n# Check 
status\\nprovisioning dns status # Start service\\nprovisioning dns start # Start in foreground (for debugging)\\nprovisioning dns start --foreground # Stop service\\nprovisioning dns stop # Restart service\\nprovisioning dns restart # Reload configuration (graceful)\\nprovisioning dns reload # View logs\\nprovisioning dns logs # Follow logs\\nprovisioning dns logs --follow # Show last 100 lines\\nprovisioning dns logs --lines 100\\n```plaintext ### Health & Monitoring ```bash\\n# Check health\\nprovisioning dns health # View configuration\\nprovisioning dns config show # Validate configuration\\nprovisioning dns config validate # Generate new Corefile\\nprovisioning dns config generate\\n```plaintext --- ## Zone Management ### List Zones ```bash\\n# List all zones\\nprovisioning dns zone list\\n```plaintext **Output:** ```plaintext\\nDNS Zones\\n========= • provisioning.local ✓ • workspace.local ✓\\n```plaintext ### Create Zone ```bash\\n# Create new zone\\nprovisioning dns zone create myapp.local # Check mode\\nprovisioning dns zone create myapp.local --check\\n```plaintext ### Show Zone Details ```bash\\n# Show all records in zone\\nprovisioning dns zone show provisioning.local # JSON format\\nprovisioning dns zone show provisioning.local --format json # YAML format\\nprovisioning dns zone show provisioning.local --format yaml\\n```plaintext ### Delete Zone ```bash\\n# Delete zone (with confirmation)\\nprovisioning dns zone delete myapp.local # Force deletion (skip confirmation)\\nprovisioning dns zone delete myapp.local --force # Check mode\\nprovisioning dns zone delete myapp.local --check\\n```plaintext --- ## Record Management ### Add Records #### A Record (IPv4) ```bash\\nprovisioning dns record add server-01 A 10.0.1.10 # With custom TTL\\nprovisioning dns record add server-01 A 10.0.1.10 --ttl 600 # With comment\\nprovisioning dns record add server-01 A 10.0.1.10 --comment \\"Web server\\" # Different zone\\nprovisioning dns record add server-01 A 10.0.1.10 --zone myapp.local\\n```plaintext #### AAAA Record (IPv6) ```bash\\nprovisioning dns record add server-01 AAAA 2001:db8::1\\n```plaintext #### CNAME Record ```bash\\nprovisioning dns record add web CNAME server-01.provisioning.local\\n```plaintext #### MX Record ```bash\\nprovisioning dns record add @ MX mail.example.com --priority 10\\n```plaintext #### TXT Record ```bash\\nprovisioning dns record add @ TXT \\"v=spf1 mx -all\\"\\n```plaintext ### Remove Records ```bash\\n# Remove record\\nprovisioning dns record remove server-01 # Different zone\\nprovisioning dns record remove server-01 --zone myapp.local # Check mode\\nprovisioning dns record remove server-01 --check\\n```plaintext ### Update Records ```bash\\n# Update record value\\nprovisioning dns record update server-01 A 10.0.1.20 # With new TTL\\nprovisioning dns record update server-01 A 10.0.1.20 --ttl 1800\\n```plaintext ### List Records ```bash\\n# List all records in zone\\nprovisioning dns record list # Different zone\\nprovisioning dns record list --zone myapp.local # JSON format\\nprovisioning dns record list --format json # YAML format\\nprovisioning dns record list --format yaml\\n```plaintext **Example Output:** ```plaintext\\nDNS Records - Zone: provisioning.local ╭───┬──────────────┬──────┬─────────────┬─────╮\\n│ # │ name │ type │ value │ ttl │\\n├───┼──────────────┼──────┼─────────────┼─────┤\\n│ 0 │ server-01 │ A │ 10.0.1.10 │ 300 │\\n│ 1 │ server-02 │ A │ 10.0.1.11 │ 300 │\\n│ 2 │ db-01 │ A │ 10.0.2.10 │ 300 │\\n│ 3 │ web │ CNAME│ server-01 │ 300 
│\\n╰───┴──────────────┴──────┴─────────────┴─────╯\\n```plaintext --- ## Docker Deployment ### Prerequisites Ensure Docker and docker-compose are installed: ```bash\\ndocker --version\\ndocker-compose --version\\n```plaintext ### Start CoreDNS in Docker ```bash\\n# Start CoreDNS container\\nprovisioning dns docker start # Check mode\\nprovisioning dns docker start --check\\n```plaintext ### Manage Docker Container ```bash\\n# Check status\\nprovisioning dns docker status # View logs\\nprovisioning dns docker logs # Follow logs\\nprovisioning dns docker logs --follow # Restart container\\nprovisioning dns docker restart # Stop container\\nprovisioning dns docker stop # Check health\\nprovisioning dns docker health\\n```plaintext ### Update Docker Image ```bash\\n# Pull latest image\\nprovisioning dns docker pull # Pull specific version\\nprovisioning dns docker pull --version 1.11.1 # Update and restart\\nprovisioning dns docker update\\n```plaintext ### Remove Container ```bash\\n# Remove container (with confirmation)\\nprovisioning dns docker remove # Remove with volumes\\nprovisioning dns docker remove --volumes # Force remove (skip confirmation)\\nprovisioning dns docker remove --force # Check mode\\nprovisioning dns docker remove --check\\n```plaintext ### View Configuration ```bash\\n# Show docker-compose config\\nprovisioning dns docker config\\n```plaintext --- ## Integration ### Automatic Server Registration When dynamic DNS is enabled, servers are automatically registered: ```bash\\n# Create server (automatically registers in DNS)\\nprovisioning server create web-01 --infra myapp # Server gets DNS record: web-01.provisioning.local -> \\n```plaintext ### Manual Registration ```nushell\\nuse lib_provisioning/coredns/integration.nu * # Register server\\nregister-server-in-dns \\"web-01\\" \\"10.0.1.10\\" # Unregister server\\nunregister-server-from-dns \\"web-01\\" # Bulk register\\nbulk-register-servers [ {hostname: \\"web-01\\", ip: \\"10.0.1.10\\"} {hostname: \\"web-02\\", ip: \\"10.0.1.11\\"} {hostname: \\"db-01\\", ip: \\"10.0.2.10\\"}\\n]\\n```plaintext ### Sync Infrastructure with DNS ```bash\\n# Sync all servers in infrastructure with DNS\\nprovisioning dns sync myapp # Check mode\\nprovisioning dns sync myapp --check\\n```plaintext ### Service Registration ```nushell\\nuse lib_provisioning/coredns/integration.nu * # Register service\\nregister-service-in-dns \\"api\\" \\"10.0.1.10\\" # Unregister service\\nunregister-service-from-dns \\"api\\"\\n```plaintext --- ## Query DNS ### Using CLI ```bash\\n# Query A record\\nprovisioning dns query server-01 # Query specific type\\nprovisioning dns query server-01 --type AAAA # Query different server\\nprovisioning dns query server-01 --server 8.8.8.8 --port 53 # Query from local CoreDNS\\nprovisioning dns query server-01 --server 127.0.0.1 --port 5353\\n```plaintext ### Using dig ```bash\\n# Query from local CoreDNS\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local # Query CNAME\\ndig @127.0.0.1 -p 5353 web.provisioning.local CNAME # Query MX\\ndig @127.0.0.1 -p 5353 example.com MX\\n```plaintext --- ## Troubleshooting ### CoreDNS Not Starting **Symptoms:** `dns start` fails or service doesn\'t respond **Solutions:** 1. 
**Check if port is in use:** ```bash lsof -i :5353 netstat -an | grep 5353 Validate Corefile: provisioning dns config validate Check logs: provisioning dns logs\\ntail -f ~/.provisioning/coredns/coredns.log Verify binary exists: ls -lh ~/.provisioning/bin/coredns\\nprovisioning dns install","breadcrumbs":"CoreDNS Guide » Install CoreDNS Binary","id":"1186","title":"Install CoreDNS Binary"},"1187":{"body":"Symptoms: dig returns SERVFAIL or timeout Solutions: Check CoreDNS is running: provisioning dns status\\nprovisioning dns health Verify zone file exists: ls -lh ~/.provisioning/coredns/zones/\\ncat ~/.provisioning/coredns/zones/provisioning.local.zone Test with dig: dig @127.0.0.1 -p 5353 provisioning.local SOA Check firewall: # macOS\\nsudo pfctl -sr | grep 5353 # Linux\\nsudo iptables -L -n | grep 5353","breadcrumbs":"CoreDNS Guide » DNS Queries Not Working","id":"1187","title":"DNS Queries Not Working"},"1188":{"body":"Symptoms: dns config validate shows errors Solutions: Backup zone file: cp ~/.provisioning/coredns/zones/provisioning.local.zone \\\\ ~/.provisioning/coredns/zones/provisioning.local.zone.backup Regenerate zone: provisioning dns zone create provisioning.local --force Check syntax manually: cat ~/.provisioning/coredns/zones/provisioning.local.zone Increment serial: Edit zone file manually Increase serial number in SOA record","breadcrumbs":"CoreDNS Guide » Zone File Validation Errors","id":"1188","title":"Zone File Validation Errors"},"1189":{"body":"Symptoms: Docker container won\'t start or crashes Solutions: Check Docker logs: provisioning dns docker logs\\ndocker logs provisioning-coredns Verify volumes exist: ls -lh ~/.provisioning/coredns/ Check container status: provisioning dns docker status\\ndocker ps -a | grep coredns Recreate container: provisioning dns docker stop\\nprovisioning dns docker remove --volumes\\nprovisioning dns docker start","breadcrumbs":"CoreDNS Guide » Docker Container Issues","id":"1189","title":"Docker Container Issues"},"119":{"body":"","breadcrumbs":"Prerequisites » Required Software","id":"119","title":"Required Software"},"1190":{"body":"Symptoms: Servers not auto-registered in DNS Solutions: Check if enabled: provisioning dns config show | grep -A 5 dynamic_updates Verify orchestrator running: curl http://localhost:9090/health Check logs for errors: provisioning dns logs | grep -i error Test manual registration: use lib_provisioning/coredns/integration.nu *\\nregister-server-in-dns \\"test-server\\" \\"10.0.0.1\\"","breadcrumbs":"CoreDNS Guide » Dynamic Updates Not Working","id":"1190","title":"Dynamic Updates Not Working"},"1191":{"body":"","breadcrumbs":"CoreDNS Guide » Advanced Topics","id":"1191","title":"Advanced Topics"},"1192":{"body":"Add custom plugins to Corefile: use lib_provisioning/coredns/corefile.nu * # Add plugin to zone\\nadd-corefile-plugin \\\\ \\"~/.provisioning/coredns/Corefile\\" \\\\ \\"provisioning.local\\" \\\\ \\"cache 30\\"\\n```plaintext ### Backup and Restore ```bash\\n# Backup configuration\\ntar czf coredns-backup.tar.gz ~/.provisioning/coredns/ # Restore configuration\\ntar xzf coredns-backup.tar.gz -C ~/\\n```plaintext ### Zone File Backup ```nushell\\nuse lib_provisioning/coredns/zones.nu * # Backup zone\\nbackup-zone-file \\"provisioning.local\\" # Creates: ~/.provisioning/coredns/zones/provisioning.local.zone.YYYYMMDD-HHMMSS.bak\\n```plaintext ### Metrics and Monitoring CoreDNS exposes Prometheus metrics on port 9153: ```bash\\n# View metrics\\ncurl http://localhost:9153/metrics # Common metrics:\\n# 
- coredns_dns_request_duration_seconds\\n# - coredns_dns_requests_total\\n# - coredns_dns_responses_total\\n```plaintext ### Multi-Zone Setup ```kcl\\ncoredns_config: CoreDNSConfig = { local = { zones = [ \\"provisioning.local\\", \\"workspace.local\\", \\"dev.local\\", \\"staging.local\\", \\"prod.local\\" ] }\\n}\\n```plaintext ### Split-Horizon DNS Configure different zones for internal/external: ```kcl\\ncoredns_config: CoreDNSConfig = { local = { zones = [\\"internal.local\\"] port = 5353 } remote = { zones = [\\"external.com\\"] endpoints = [\\"https://dns.external.com\\"] }\\n}\\n```plaintext --- ## Configuration Reference ### CoreDNSConfig Fields | Field | Type | Default | Description |\\n|-------|------|---------|-------------|\\n| `mode` | `\\"local\\" \\\\| \\"remote\\" \\\\| \\"hybrid\\" \\\\| \\"disabled\\"` | `\\"local\\"` | Deployment mode |\\n| `local` | `LocalCoreDNS?` | - | Local config (required for local mode) |\\n| `remote` | `RemoteCoreDNS?` | - | Remote config (required for remote mode) |\\n| `dynamic_updates` | `DynamicDNS` | - | Dynamic DNS configuration |\\n| `upstream` | `[str]` | `[\\"8.8.8.8\\", \\"1.1.1.1\\"]` | Upstream DNS servers |\\n| `default_ttl` | `int` | `300` | Default TTL (seconds) |\\n| `enable_logging` | `bool` | `True` | Enable query logging |\\n| `enable_metrics` | `bool` | `True` | Enable Prometheus metrics |\\n| `metrics_port` | `int` | `9153` | Metrics port | ### LocalCoreDNS Fields | Field | Type | Default | Description |\\n|-------|------|---------|-------------|\\n| `enabled` | `bool` | `True` | Enable local CoreDNS |\\n| `deployment_type` | `\\"binary\\" \\\\| \\"docker\\"` | `\\"binary\\"` | How to deploy |\\n| `binary_path` | `str` | `\\"~/.provisioning/bin/coredns\\"` | Path to binary |\\n| `config_path` | `str` | `\\"~/.provisioning/coredns/Corefile\\"` | Corefile path |\\n| `zones_path` | `str` | `\\"~/.provisioning/coredns/zones\\"` | Zones directory |\\n| `port` | `int` | `5353` | DNS listening port |\\n| `auto_start` | `bool` | `True` | Auto-start on boot |\\n| `zones` | `[str]` | `[\\"provisioning.local\\"]` | Managed zones | ### DynamicDNS Fields | Field | Type | Default | Description |\\n|-------|------|---------|-------------|\\n| `enabled` | `bool` | `True` | Enable dynamic updates |\\n| `api_endpoint` | `str` | `\\"http://localhost:9090/dns\\"` | Orchestrator API |\\n| `auto_register_servers` | `bool` | `True` | Auto-register on create |\\n| `auto_unregister_servers` | `bool` | `True` | Auto-unregister on delete |\\n| `ttl` | `int` | `300` | TTL for dynamic records |\\n| `update_strategy` | `\\"immediate\\" \\\\| \\"batched\\" \\\\| \\"scheduled\\"` | `\\"immediate\\"` | Update strategy | --- ## Examples ### Complete Setup Example ```bash\\n# 1. Install CoreDNS\\nprovisioning dns install # 2. Generate configuration\\nprovisioning dns config generate # 3. Start service\\nprovisioning dns start # 4. Create custom zone\\nprovisioning dns zone create myapp.local # 5. Add DNS records\\nprovisioning dns record add web-01 A 10.0.1.10\\nprovisioning dns record add web-02 A 10.0.1.11\\nprovisioning dns record add api CNAME web-01.myapp.local --zone myapp.local # 6. Query records\\nprovisioning dns query web-01 --server 127.0.0.1 --port 5353 # 7. Check status\\nprovisioning dns status\\nprovisioning dns health\\n```plaintext ### Docker Deployment Example ```bash\\n# 1. Start CoreDNS in Docker\\nprovisioning dns docker start # 2. Check status\\nprovisioning dns docker status # 3. View logs\\nprovisioning dns docker logs --follow # 4. 
Add records (container must be running)\\nprovisioning dns record add server-01 A 10.0.1.10 # 5. Query\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local # 6. Stop\\nprovisioning dns docker stop\\n```plaintext --- ## Best Practices 1. **Use TTL wisely** - Lower TTL (300s) for frequently changing records, higher (3600s) for stable\\n2. **Enable logging** - Essential for troubleshooting\\n3. **Regular backups** - Backup zone files before major changes\\n4. **Validate before reload** - Always run `dns config validate` before reloading\\n5. **Monitor metrics** - Track DNS query rates and error rates\\n6. **Use comments** - Add comments to records for documentation\\n7. **Separate zones** - Use different zones for different environments (dev, staging, prod) --- ## See Also - [Architecture Documentation](../architecture/coredns-architecture.md)\\n- [API Reference](../api/dns-api.md)\\n- [Orchestrator Integration](../integration/orchestrator-dns.md)\\n- KCL Schema Reference --- ## Quick Reference **Quick command reference for CoreDNS DNS management** --- ### Installation ```bash\\n# Install CoreDNS binary\\nprovisioning dns install # Install specific version\\nprovisioning dns install 1.11.1\\n```plaintext --- ### Service Management ```bash\\n# Status\\nprovisioning dns status # Start\\nprovisioning dns start # Stop\\nprovisioning dns stop # Restart\\nprovisioning dns restart # Reload (graceful)\\nprovisioning dns reload # Logs\\nprovisioning dns logs\\nprovisioning dns logs --follow\\nprovisioning dns logs --lines 100 # Health\\nprovisioning dns health\\n```plaintext --- ### Zone Management ```bash\\n# List zones\\nprovisioning dns zone list # Create zone\\nprovisioning dns zone create myapp.local # Show zone records\\nprovisioning dns zone show provisioning.local\\nprovisioning dns zone show provisioning.local --format json # Delete zone\\nprovisioning dns zone delete myapp.local\\nprovisioning dns zone delete myapp.local --force\\n```plaintext --- ### Record Management ```bash\\n# Add A record\\nprovisioning dns record add server-01 A 10.0.1.10 # Add with custom TTL\\nprovisioning dns record add server-01 A 10.0.1.10 --ttl 600 # Add with comment\\nprovisioning dns record add server-01 A 10.0.1.10 --comment \\"Web server\\" # Add to specific zone\\nprovisioning dns record add server-01 A 10.0.1.10 --zone myapp.local # Add CNAME\\nprovisioning dns record add web CNAME server-01.provisioning.local # Add MX\\nprovisioning dns record add @ MX mail.example.com --priority 10 # Add TXT\\nprovisioning dns record add @ TXT \\"v=spf1 mx -all\\" # Remove record\\nprovisioning dns record remove server-01\\nprovisioning dns record remove server-01 --zone myapp.local # Update record\\nprovisioning dns record update server-01 A 10.0.1.20 # List records\\nprovisioning dns record list\\nprovisioning dns record list --zone myapp.local\\nprovisioning dns record list --format json\\n```plaintext --- ### DNS Queries ```bash\\n# Query A record\\nprovisioning dns query server-01 # Query CNAME\\nprovisioning dns query web --type CNAME # Query from local CoreDNS\\nprovisioning dns query server-01 --server 127.0.0.1 --port 5353 # Using dig\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local\\ndig @127.0.0.1 -p 5353 provisioning.local SOA\\n```plaintext --- ### Configuration ```bash\\n# Show configuration\\nprovisioning dns config show # Validate configuration\\nprovisioning dns config validate # Generate Corefile\\nprovisioning dns config generate\\n```plaintext --- ### Docker Deployment ```bash\\n# Start Docker 
container\\nprovisioning dns docker start # Status\\nprovisioning dns docker status # Logs\\nprovisioning dns docker logs\\nprovisioning dns docker logs --follow # Restart\\nprovisioning dns docker restart # Stop\\nprovisioning dns docker stop # Health\\nprovisioning dns docker health # Remove\\nprovisioning dns docker remove\\nprovisioning dns docker remove --volumes\\nprovisioning dns docker remove --force # Pull image\\nprovisioning dns docker pull\\nprovisioning dns docker pull --version 1.11.1 # Update\\nprovisioning dns docker update # Show config\\nprovisioning dns docker config\\n```plaintext --- ### Common Workflows #### Initial Setup ```bash\\n# 1. Install\\nprovisioning dns install # 2. Start\\nprovisioning dns start # 3. Verify\\nprovisioning dns status\\nprovisioning dns health\\n```plaintext #### Add Server ```bash\\n# Add DNS record for new server\\nprovisioning dns record add web-01 A 10.0.1.10 # Verify\\nprovisioning dns query web-01\\n```plaintext #### Create Custom Zone ```bash\\n# 1. Create zone\\nprovisioning dns zone create myapp.local # 2. Add records\\nprovisioning dns record add web-01 A 10.0.1.10 --zone myapp.local\\nprovisioning dns record add api CNAME web-01.myapp.local --zone myapp.local # 3. List records\\nprovisioning dns record list --zone myapp.local # 4. Query\\ndig @127.0.0.1 -p 5353 web-01.myapp.local\\n```plaintext #### Docker Setup ```bash\\n# 1. Start container\\nprovisioning dns docker start # 2. Check status\\nprovisioning dns docker status # 3. Add records\\nprovisioning dns record add server-01 A 10.0.1.10 # 4. Query\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local\\n```plaintext --- ### Troubleshooting ```bash\\n# Check if CoreDNS is running\\nprovisioning dns status\\nps aux | grep coredns # Check port usage\\nlsof -i :5353\\nnetstat -an | grep 5353 # View logs\\nprovisioning dns logs\\ntail -f ~/.provisioning/coredns/coredns.log # Validate configuration\\nprovisioning dns config validate # Test DNS query\\ndig @127.0.0.1 -p 5353 provisioning.local SOA # Restart service\\nprovisioning dns restart # For Docker\\nprovisioning dns docker logs\\nprovisioning dns docker health\\ndocker ps -a | grep coredns\\n```plaintext --- ### File Locations ```bash\\n# Binary\\n~/.provisioning/bin/coredns # Corefile\\n~/.provisioning/coredns/Corefile # Zone files\\n~/.provisioning/coredns/zones/ # Logs\\n~/.provisioning/coredns/coredns.log # PID file\\n~/.provisioning/coredns/coredns.pid # Docker compose\\nprovisioning/config/coredns/docker-compose.yml\\n```plaintext --- ### Configuration Example ```kcl\\nimport provisioning.coredns as dns coredns_config: dns.CoreDNSConfig = { mode = \\"local\\" local = { enabled = True deployment_type = \\"binary\\" # or \\"docker\\" port = 5353 zones = [\\"provisioning.local\\", \\"myapp.local\\"] } dynamic_updates = { enabled = True auto_register_servers = True } upstream = [\\"8.8.8.8\\", \\"1.1.1.1\\"]\\n}\\n```plaintext --- ### Environment Variables ```bash\\n# None required - configuration via KCL\\n```plaintext --- ### Default Values | Setting | Default |\\n|---------|---------|\\n| Port | 5353 |\\n| Zones | [\\"provisioning.local\\"] |\\n| Upstream | [\\"8.8.8.8\\", \\"1.1.1.1\\"] |\\n| TTL | 300 |\\n| Deployment | binary |\\n| Auto-start | true |\\n| Logging | enabled |\\n| Metrics | enabled |\\n| Metrics Port | 9153 | --- ## See Also - [Complete Guide](COREDNS_GUIDE.md) - Full documentation\\n- Implementation Summary - Technical details\\n- KCL Schema - Configuration schema --- **Last Updated**: 
2025-10-06\\n**Version**: 1.0.0","breadcrumbs":"CoreDNS Guide » Custom Corefile Plugins","id":"1192","title":"Custom Corefile Plugins"},"1193":{"body":"","breadcrumbs":"Backup Recovery » Backup and Recovery","id":"1193","title":"Backup and Recovery"},"1194":{"body":"","breadcrumbs":"Deployment » Deployment Guide","id":"1194","title":"Deployment Guide"},"1195":{"body":"","breadcrumbs":"Monitoring » Monitoring Guide","id":"1195","title":"Monitoring Guide"},"1196":{"body":"Status : ✅ PRODUCTION READY Version : 1.0.0 Last Verified : 2025-12-09","breadcrumbs":"Production Readiness Checklist » Production Readiness Checklist","id":"1196","title":"Production Readiness Checklist"},"1197":{"body":"The Provisioning Setup System is production-ready for enterprise deployment. All components have been tested, validated, and verified to meet production standards.","breadcrumbs":"Production Readiness Checklist » Executive Summary","id":"1197","title":"Executive Summary"},"1198":{"body":"✅ Code Quality : 100% Nushell 0.109 compliant ✅ Test Coverage : 33/33 tests passing (100% pass rate) ✅ Security : Enterprise-grade security controls ✅ Performance : Sub-second response times ✅ Documentation : Comprehensive user and admin guides ✅ Reliability : Graceful error handling and fallbacks","breadcrumbs":"Production Readiness Checklist » Quality Metrics","id":"1198","title":"Quality Metrics"},"1199":{"body":"","breadcrumbs":"Production Readiness Checklist » Pre-Deployment Verification","id":"1199","title":"Pre-Deployment Verification"},"12":{"body":"provisioning/docs/src/\\n├── README.md (this file) # Documentation hub\\n├── getting-started/ # Getting started guides\\n│ ├── installation-guide.md\\n│ ├── getting-started.md\\n│ └── quickstart-cheatsheet.md\\n├── architecture/ # System architecture\\n│ ├── adr/ # Architecture Decision Records\\n│ ├── design-principles.md\\n│ ├── integration-patterns.md\\n│ ├── system-overview.md\\n│ └── ... (and 10+ more architecture docs)\\n├── infrastructure/ # Infrastructure guides\\n│ ├── cli-reference.md\\n│ ├── workspace-setup.md\\n│ ├── workspace-switching-guide.md\\n│ └── infrastructure-management.md\\n├── api-reference/ # API documentation\\n│ ├── rest-api.md\\n│ ├── websocket.md\\n│ ├── integration-examples.md\\n│ └── sdks.md\\n├── development/ # Developer guides\\n│ ├── README.md\\n│ ├── implementation-guide.md\\n│ ├── quick-provider-guide.md\\n│ ├── taskserv-developer-guide.md\\n│ └── ... (15+ more developer docs)\\n├── guides/ # How-to guides\\n│ ├── from-scratch.md\\n│ ├── update-infrastructure.md\\n│ └── customize-infrastructure.md\\n├── operations/ # Operations guides\\n│ ├── service-management-guide.md\\n│ ├── coredns-guide.md\\n│ └── ... (more operations docs)\\n├── security/ # Security docs\\n├── integration/ # Integration guides\\n├── testing/ # Testing docs\\n├── configuration/ # Configuration docs\\n├── troubleshooting/ # Troubleshooting guides\\n└── quick-reference/ # Quick references\\n```plaintext --- ## Key Concepts ### Infrastructure as Code (IaC) The provisioning platform uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen. 
### Mode-Based Architecture The system supports four operational modes: - **Solo**: Single developer local development\\n- **Multi-user**: Team collaboration with shared services\\n- **CI/CD**: Automated pipeline execution\\n- **Enterprise**: Production deployment with strict compliance ### Extension System Extensibility through: - **Providers**: Cloud platform integrations (AWS, UpCloud, Local)\\n- **Task Services**: Infrastructure components (Kubernetes, databases, etc.)\\n- **Clusters**: Complete deployment configurations ### OCI-Native Distribution Extensions and packages distributed as OCI artifacts, enabling: - Industry-standard packaging\\n- Efficient caching and bandwidth\\n- Version pinning and rollback\\n- Air-gapped deployments --- ## Documentation by Role ### For New Users 1. Start with **[Installation Guide](getting-started/installation-guide.md)**\\n2. Read **[Getting Started](getting-started/getting-started.md)**\\n3. Follow **[From Scratch Guide](guides/from-scratch.md)**\\n4. Reference **[Quickstart Cheatsheet](guides/quickstart-cheatsheet.md)** ### For Developers 1. Review **[System Overview](architecture/system-overview.md)**\\n2. Study **[Design Principles](architecture/design-principles.md)**\\n3. Read relevant **[ADRs](architecture/)**\\n4. Follow **[Development Guide](development/README.md)**\\n5. Reference **KCL Quick Reference** ### For Operators 1. Understand **[Mode System](infrastructure/mode-system)**\\n2. Learn **[Service Management](operations/service-management-guide.md)**\\n3. Review **[Infrastructure Management](infrastructure/infrastructure-management.md)**\\n4. Study **[OCI Registry](integration/oci-registry-guide.md)** ### For Architects 1. Read **[System Overview](architecture/system-overview.md)**\\n2. Study all **[ADRs](architecture/)**\\n3. Review **[Integration Patterns](architecture/integration-patterns.md)**\\n4. 
Understand **[Multi-Repo Architecture](architecture/multi-repo-architecture.md)** --- ## System Capabilities ### ✅ Infrastructure Automation - Multi-cloud support (AWS, UpCloud, Local)\\n- Declarative configuration with KCL\\n- Automated dependency resolution\\n- Batch operations with rollback ### ✅ Workflow Orchestration - Hybrid Rust/Nushell orchestration\\n- Checkpoint-based recovery\\n- Parallel execution with limits\\n- Real-time monitoring ### ✅ Test Environments - Containerized testing\\n- Multi-node cluster simulation\\n- Topology templates\\n- Automated cleanup ### ✅ Mode-Based Operation - Solo: Local development\\n- Multi-user: Team collaboration\\n- CI/CD: Automated pipelines\\n- Enterprise: Production deployment ### ✅ Extension Management - OCI-native distribution\\n- Automatic dependency resolution\\n- Version management\\n- Local and remote sources --- ## Key Achievements ### 🚀 Batch Workflow System (v3.1.0) - Provider-agnostic batch operations\\n- Mixed provider support (UpCloud + AWS + local)\\n- Dependency resolution with soft/hard dependencies\\n- Real-time monitoring and rollback ### 🏗️ Hybrid Orchestrator (v3.0.0) - Solves Nushell deep call stack limitations\\n- Preserves all business logic\\n- REST API for external integration\\n- Checkpoint-based state management ### ⚙️ Configuration System (v2.0.0) - Migrated from ENV to config-driven\\n- Hierarchical configuration loading\\n- Variable interpolation\\n- True IaC without hardcoded fallbacks ### 🎯 Modular CLI (v3.2.0) - 84% reduction in main file size\\n- Domain-driven handlers\\n- 80+ shortcuts\\n- Bi-directional help system ### 🧪 Test Environment Service (v3.4.0) - Automated containerized testing\\n- Multi-node cluster topologies\\n- CI/CD integration ready\\n- Template-based configurations ### 🔄 Workspace Switching (v2.0.5) - Centralized workspace management\\n- Single-command workspace switching\\n- Active workspace tracking\\n- User preference system --- ## Technology Stack | Component | Technology | Purpose |\\n|-----------|------------|---------|\\n| **Core CLI** | Nushell 0.107.1 | Shell and scripting |\\n| **Configuration** | KCL 0.11.2 | Type-safe IaC |\\n| **Orchestrator** | Rust | High-performance coordination |\\n| **Templates** | Jinja2 (nu_plugin_tera) | Code generation |\\n| **Secrets** | SOPS 3.10.2 + Age 1.2.1 | Encryption |\\n| **Distribution** | OCI (skopeo/crane/oras) | Artifact management | --- ## Support ### Getting Help - **Documentation**: You\'re reading it!\\n- **Quick Reference**: Run `provisioning sc` or `provisioning guide quickstart`\\n- **Help System**: Run `provisioning help` or `provisioning help`\\n- **Interactive Shell**: Run `provisioning nu` for Nushell REPL ### Reporting Issues - Check **[Troubleshooting Guide](infrastructure/troubleshooting-guide.md)**\\n- Review **[FAQ](troubleshooting/troubleshooting-guide.md)**\\n- Enable debug mode: `provisioning --debug `\\n- Check logs: `provisioning platform logs ` --- ## Contributing This project welcomes contributions! 
See **[Development Guide](development/README.md)** for: - Development setup\\n- Code style guidelines\\n- Testing requirements\\n- Pull request process --- ## License [Add license information] --- ## Version History | Version | Date | Major Changes |\\n|---------|------|---------------|\\n| **3.5.0** | 2025-10-06 | Mode system, OCI registry, comprehensive documentation |\\n| **3.4.0** | 2025-10-06 | Test environment service |\\n| **3.3.0** | 2025-09-30 | Interactive guides system |\\n| **3.2.0** | 2025-09-30 | Modular CLI refactoring |\\n| **3.1.0** | 2025-09-25 | Batch workflow system |\\n| **3.0.0** | 2025-09-25 | Hybrid orchestrator architecture |\\n| **2.0.5** | 2025-10-02 | Workspace switching system |\\n| **2.0.0** | 2025-09-23 | Configuration system migration | --- **Maintained By**: Provisioning Team\\n**Last Review**: 2025-10-06\\n**Next Review**: 2026-01-06","breadcrumbs":"Home » Documentation Structure","id":"12","title":"Documentation Structure"},"120":{"body":"Software Version Purpose Nushell 0.107.1+ Shell and scripting language KCL 0.11.2+ Configuration language Docker 20.10+ Container runtime (for platform services) SOPS 3.10.2+ Secrets management Age 1.2.1+ Encryption tool","breadcrumbs":"Prerequisites » Core Dependencies","id":"120","title":"Core Dependencies"},"1200":{"body":"Nushell 0.109.0 or higher bash shell available One deployment tool (Docker/Kubernetes/SSH/systemd) 2+ CPU cores (4+ recommended) 4+ GB RAM (8+ recommended) Network connectivity (optional for offline mode)","breadcrumbs":"Production Readiness Checklist » 1. System Requirements ✅","id":"1200","title":"1. System Requirements ✅"},"1201":{"body":"All 9 modules passing syntax validation 46 total issues identified and resolved Nushell 0.109 compatibility verified Code style guidelines followed No hardcoded credentials or secrets","breadcrumbs":"Production Readiness Checklist » 2. Code Quality ✅","id":"1201","title":"2. Code Quality ✅"},"1202":{"body":"Unit tests: 33/33 passing Integration tests: All passing E2E tests: All passing Health check: Operational Deployment validation: Working","breadcrumbs":"Production Readiness Checklist » 3. Testing ✅","id":"1202","title":"3. Testing ✅"},"1203":{"body":"Configuration encryption ready Credential management secure No sensitive data in logs GDPR-compliant audit logging Role-based access control (RBAC) ready","breadcrumbs":"Production Readiness Checklist » 4. Security ✅","id":"1203","title":"4. Security ✅"},"1204":{"body":"User Quick Start Guide Comprehensive Setup Guide Installation Guide Troubleshooting Guide API Documentation","breadcrumbs":"Production Readiness Checklist » 5. Documentation ✅","id":"1204","title":"5. Documentation ✅"},"1205":{"body":"Installation script tested Health check script operational Configuration validation working Backup/restore functionality verified Migration path available","breadcrumbs":"Production Readiness Checklist » 6. Deployment Readiness ✅","id":"1205","title":"6. 
Deployment Readiness ✅"},"1206":{"body":"","breadcrumbs":"Production Readiness Checklist » Pre-Production Checklist","id":"1206","title":"Pre-Production Checklist"},"1207":{"body":"Team trained on provisioning basics Admin team trained on configuration management Support team trained on troubleshooting Operations team ready for deployment Security team reviewed security controls","breadcrumbs":"Production Readiness Checklist » Team Preparation","id":"1207","title":"Team Preparation"},"1208":{"body":"Target deployment environment prepared Network connectivity verified Required tools installed and tested Backup systems in place Monitoring configured","breadcrumbs":"Production Readiness Checklist » Infrastructure Preparation","id":"1208","title":"Infrastructure Preparation"},"1209":{"body":"Provider credentials securely stored Network configuration planned Workspace structure defined Deployment strategy documented Rollback plan prepared","breadcrumbs":"Production Readiness Checklist » Configuration Preparation","id":"1209","title":"Configuration Preparation"},"121":{"body":"Software Version Purpose Podman 4.0+ Alternative container runtime OrbStack Latest macOS-optimized container runtime K9s 0.50.6+ Kubernetes management interface glow Latest Markdown renderer for guides bat Latest Syntax highlighting for file viewing","breadcrumbs":"Prerequisites » Optional Dependencies","id":"121","title":"Optional Dependencies"},"1210":{"body":"System installed on staging environment All capabilities tested Health checks passing Full deployment scenario tested Failover procedures tested","breadcrumbs":"Production Readiness Checklist » Testing in Production-Like Environment","id":"1210","title":"Testing in Production-Like Environment"},"1211":{"body":"","breadcrumbs":"Production Readiness Checklist » Deployment Steps","id":"1211","title":"Deployment Steps"},"1212":{"body":"# 1. Run installation script\\n./scripts/install-provisioning.sh # 2. Verify installation\\nprovisioning -v # 3. Run health check\\nnu scripts/health-check.nu","breadcrumbs":"Production Readiness Checklist » Phase 1: Installation (30 minutes)","id":"1212","title":"Phase 1: Installation (30 minutes)"},"1213":{"body":"# 1. Run setup wizard\\nprovisioning setup system --interactive # 2. Validate configuration\\nprovisioning setup validate # 3. Test health\\nprovisioning platform health","breadcrumbs":"Production Readiness Checklist » Phase 2: Initial Configuration (15 minutes)","id":"1213","title":"Phase 2: Initial Configuration (15 minutes)"},"1214":{"body":"# 1. Create production workspace\\nprovisioning setup workspace production # 2. Configure providers\\nprovisioning setup provider upcloud --config config.toml # 3. Validate workspace\\nprovisioning setup validate","breadcrumbs":"Production Readiness Checklist » Phase 3: Workspace Setup (10 minutes)","id":"1214","title":"Phase 3: Workspace Setup (10 minutes)"},"1215":{"body":"# 1. Run comprehensive health check\\nprovisioning setup validate --verbose # 2. Test deployment (dry-run)\\nprovisioning server create --check # 3. 
Verify no errors\\n# Review output and confirm readiness","breadcrumbs":"Production Readiness Checklist » Phase 4: Verification (10 minutes)","id":"1215","title":"Phase 4: Verification (10 minutes)"},"1216":{"body":"","breadcrumbs":"Production Readiness Checklist » Post-Deployment Verification","id":"1216","title":"Post-Deployment Verification"},"1217":{"body":"All services running and healthy Configuration loaded correctly First test deployment successful Monitoring and logging working Backup system operational","breadcrumbs":"Production Readiness Checklist » Immediate (Within 1 hour)","id":"1217","title":"Immediate (Within 1 hour)"},"1218":{"body":"Run health checks daily Monitor error logs Verify backup operations Check workspace synchronization Validate credentials refresh","breadcrumbs":"Production Readiness Checklist » Daily (First week)","id":"1218","title":"Daily (First week)"},"1219":{"body":"Run comprehensive validation Test backup/restore procedures Review audit logs Performance analysis Security review","breadcrumbs":"Production Readiness Checklist » Weekly (First month)","id":"1219","title":"Weekly (First month)"},"122":{"body":"Before proceeding, verify your system has the core dependencies installed:","breadcrumbs":"Prerequisites » Installation Verification","id":"122","title":"Installation Verification"},"1220":{"body":"Weekly health checks Monthly comprehensive validation Quarterly security review Annual disaster recovery test","breadcrumbs":"Production Readiness Checklist » Ongoing (Production)","id":"1220","title":"Ongoing (Production)"},"1221":{"body":"","breadcrumbs":"Production Readiness Checklist » Troubleshooting Reference","id":"1221","title":"Troubleshooting Reference"},"1222":{"body":"Solution : # Check Nushell installation\\nnu --version # Run with debug\\nprovisioning -x setup system --interactive","breadcrumbs":"Production Readiness Checklist » Issue: Setup wizard won\'t start","id":"1222","title":"Issue: Setup wizard won\'t start"},"1223":{"body":"Solution : # Check configuration\\nprovisioning setup validate --verbose # View configuration paths\\nprovisioning info paths # Reset and reconfigure\\nprovisioning setup reset --confirm\\nprovisioning setup system --interactive","breadcrumbs":"Production Readiness Checklist » Issue: Configuration validation fails","id":"1223","title":"Issue: Configuration validation fails"},"1224":{"body":"Solution : # Run detailed health check\\nnu scripts/health-check.nu # Check specific service\\nprovisioning platform status # Restart services if needed\\nprovisioning platform restart","breadcrumbs":"Production Readiness Checklist » Issue: Health check shows warnings","id":"1224","title":"Issue: Health check shows warnings"},"1225":{"body":"Solution : # Dry-run to see what would happen\\nprovisioning server create --check # Check logs\\nprovisioning logs tail -f # Verify provider credentials\\nprovisioning setup validate provider upcloud","breadcrumbs":"Production Readiness Checklist » Issue: Deployment fails","id":"1225","title":"Issue: Deployment fails"},"1226":{"body":"Expected performance on modern hardware (4+ cores, 8+ GB RAM): Operation Expected Time Maximum Time Setup system 2-5 seconds 10 seconds Health check < 3 seconds 5 seconds Configuration validation < 500ms 1 second Server creation < 30 seconds 60 seconds Workspace switch < 100ms 500ms","breadcrumbs":"Production Readiness Checklist » Performance Baselines","id":"1226","title":"Performance Baselines"},"1227":{"body":"","breadcrumbs":"Production Readiness Checklist 
» Support and Escalation","id":"1227","title":"Support and Escalation"},"1228":{"body":"Review troubleshooting guide Check system health Review logs Restart services if needed","breadcrumbs":"Production Readiness Checklist » Level 1 Support (Team)","id":"1228","title":"Level 1 Support (Team)"},"1229":{"body":"Review configuration Analyze performance metrics Check resource constraints Plan optimization","breadcrumbs":"Production Readiness Checklist » Level 2 Support (Engineering)","id":"1229","title":"Level 2 Support (Engineering)"},"123":{"body":"# Check Nushell version\\nnu --version # Expected output: 0.107.1 or higher","breadcrumbs":"Prerequisites » Nushell","id":"123","title":"Nushell"},"1230":{"body":"Code-level debugging Feature requests Bug fixes Architecture changes","breadcrumbs":"Production Readiness Checklist » Level 3 Support (Development)","id":"1230","title":"Level 3 Support (Development)"},"1231":{"body":"If issues occur post-deployment: # 1. Take backup of current configuration\\nprovisioning setup backup --path rollback-$(date +%Y%m%d-%H%M%S).tar.gz # 2. Stop running deployments\\nprovisioning workflow stop --all # 3. Restore from previous backup\\nprovisioning setup restore --path # 4. Verify restoration\\nprovisioning setup validate --verbose # 5. Run health check\\nnu scripts/health-check.nu","breadcrumbs":"Production Readiness Checklist » Rollback Procedure","id":"1231","title":"Rollback Procedure"},"1232":{"body":"System is production-ready when: ✅ All tests passing ✅ Health checks show no critical issues ✅ Configuration validates successfully ✅ Team trained and ready ✅ Documentation complete ✅ Backup and recovery tested ✅ Monitoring configured ✅ Support procedures established","breadcrumbs":"Production Readiness Checklist » Success Criteria","id":"1232","title":"Success Criteria"},"1233":{"body":"Technical Lead : System validated and tested Operations : Infrastructure ready and monitored Security : Security controls reviewed and approved Management : Deployment approved for production Verification Date : 2025-12-09 Status : ✅ APPROVED FOR PRODUCTION DEPLOYMENT Next Review : 2025-12-16 (Weekly)","breadcrumbs":"Production Readiness Checklist » Sign-Off","id":"1233","title":"Sign-Off"},"1234":{"body":"Version : 1.0.0 Date : 2025-10-08 Audience : Platform Administrators, SREs, Security Team Training Duration : 45-60 minutes Certification : Required annually","breadcrumbs":"Break Glass Training Guide » Break-Glass Emergency Access - Training Guide","id":"1234","title":"Break-Glass Emergency Access - Training Guide"},"1235":{"body":"Break-glass is an emergency access procedure that allows authorized personnel to bypass normal security controls during critical incidents (e.g., production outages, security breaches, data loss).","breadcrumbs":"Break Glass Training Guide » 🚨 What is Break-Glass?","id":"1235","title":"🚨 What is Break-Glass?"},"1236":{"body":"Last Resort Only : Use only when normal access is insufficient Multi-Party Approval : Requires 2+ approvers from different teams Time-Limited : Maximum 4 hours, auto-revokes Enhanced Audit : 7-year retention, immutable logs Real-Time Alerts : Security team notified immediately","breadcrumbs":"Break Glass Training Guide » Key Principles","id":"1236","title":"Key Principles"},"1237":{"body":"When to Use Break-Glass When NOT to Use Roles & Responsibilities Break-Glass Workflow Using the System Examples Auditing & Compliance Post-Incident Review FAQ Emergency Contacts","breadcrumbs":"Break Glass Training Guide » 📋 Table of 
Contents","id":"1237","title":"📋 Table of Contents"},"1238":{"body":"","breadcrumbs":"Break Glass Training Guide » When to Use Break-Glass","id":"1238","title":"When to Use Break-Glass"},"1239":{"body":"Scenario Example Urgency Production Outage Database cluster unresponsive, affecting all users Critical Security Incident Active breach detected, need immediate containment Critical Data Loss Accidental deletion of critical data, need restore High System Failure Infrastructure failure requiring emergency fixes High Locked Out Normal admin accounts compromised, need recovery High","breadcrumbs":"Break Glass Training Guide » ✅ Valid Emergency Scenarios","id":"1239","title":"✅ Valid Emergency Scenarios"},"124":{"body":"# Check KCL version\\nkcl --version # Expected output: 0.11.2 or higher","breadcrumbs":"Prerequisites » KCL","id":"124","title":"KCL"},"1240":{"body":"Use break-glass if ALL apply: Production systems affected OR security incident Normal access insufficient OR unavailable Immediate action required (cannot wait for approval process) Clear justification for emergency access Incident properly documented","breadcrumbs":"Break Glass Training Guide » Criteria Checklist","id":"1240","title":"Criteria Checklist"},"1241":{"body":"","breadcrumbs":"Break Glass Training Guide » When NOT to Use","id":"1241","title":"When NOT to Use"},"1242":{"body":"Scenario Why Not Alternative Forgot password Not an emergency Use password reset Routine maintenance Can be scheduled Use normal change process Convenience Normal process \\"too slow\\" Follow standard approval Deadline pressure Business pressure ≠ emergency Plan ahead Testing Want to test emergency access Use dev environment","breadcrumbs":"Break Glass Training Guide » ❌ Invalid Scenarios (Do NOT Use Break-Glass)","id":"1242","title":"❌ Invalid Scenarios (Do NOT Use Break-Glass)"},"1243":{"body":"Immediate suspension of break-glass privileges Security team investigation Disciplinary action (up to termination) All actions audited and reviewed","breadcrumbs":"Break Glass Training Guide » Consequences of Misuse","id":"1243","title":"Consequences of Misuse"},"1244":{"body":"","breadcrumbs":"Break Glass Training Guide » Roles & Responsibilities","id":"1244","title":"Roles & Responsibilities"},"1245":{"body":"Who : Platform Admin, SRE on-call, Security Officer Responsibilities : Assess if situation warrants emergency access Provide clear justification and reason Document incident timeline Use access only for stated purpose Revoke access immediately after resolution","breadcrumbs":"Break Glass Training Guide » Requester","id":"1245","title":"Requester"},"1246":{"body":"Who : 2+ from different teams (Security, Platform, Engineering Leadership) Responsibilities : Verify emergency is genuine Assess risk of granting access Review requester\'s justification Monitor usage during active session Participate in post-incident review","breadcrumbs":"Break Glass Training Guide » Approvers","id":"1246","title":"Approvers"},"1247":{"body":"Who : Security Operations team Responsibilities : Monitor all break-glass activations (real-time) Review audit logs during session Alert on suspicious activity Lead post-incident review Update policies based on learnings","breadcrumbs":"Break Glass Training Guide » Security Team","id":"1247","title":"Security Team"},"1248":{"body":"","breadcrumbs":"Break Glass Training Guide » Break-Glass Workflow","id":"1248","title":"Break-Glass Workflow"},"1249":{"body":"┌─────────────────────────────────────────────────────────┐\\n│ 1. 
Requester submits emergency access request │\\n│ - Reason: \\"Production database cluster down\\" │\\n│ - Justification: \\"Need direct SSH to diagnose\\" │\\n│ - Duration: 2 hours │\\n│ - Resources: [\\"database/*\\"] │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 2. System creates request ID: BG-20251008-001 │\\n│ - Sends notifications to approver pool │\\n│ - Starts approval timeout (1 hour) │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 2: Approval (10-15 minutes) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 3. First approver reviews request │\\n│ - Verifies emergency is real │\\n│ - Checks requester\'s justification │\\n│ - Approves with reason │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 4. Second approver (different team) reviews │\\n│ - Independent verification │\\n│ - Approves with reason │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 5. System validates approvals │\\n│ - ✓ Min 2 approvers │\\n│ - ✓ Different teams │\\n│ - ✓ Within approval window │\\n│ - Status → APPROVED │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 3: Activation (1-2 minutes) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 6. Requester activates approved session │\\n│ - Receives emergency JWT token │\\n│ - Token valid for 2 hours (or requested duration) │\\n│ - All actions logged with session ID │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 7. Security team notified │\\n│ - Real-time alert: \\"Break-glass activated\\" │\\n│ - Monitoring dashboard shows active session │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 4: Usage (Variable) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 8. Requester performs emergency actions │\\n│ - Uses emergency token for access │\\n│ - Every action audited │\\n│ - Security team monitors in real-time │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 9. Background monitoring │\\n│ - Checks for suspicious activity │\\n│ - Enforces inactivity timeout (30 min) │\\n│ - Alerts on unusual patterns │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 5: Revocation (Immediate) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 10. Session ends (one of): │\\n│ - Manual revocation by requester │\\n│ - Expiration (max 4 hours) │\\n│ - Inactivity timeout (30 minutes) │\\n│ - Security team revocation │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 11. System audit │\\n│ - All actions logged (7-year retention) │\\n│ - Incident report generated │\\n│ - Post-incident review scheduled │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext --- ## Using the System ### CLI Commands #### 1. Request Emergency Access ```bash\\nprovisioning break-glass request \\\\ \\"Production database cluster unresponsive\\" \\\\ --justification \\"Need direct SSH access to diagnose PostgreSQL failure. 
All monitoring shows cluster down. Application completely offline affecting 10,000+ users.\\" \\\\ --resources \'[\\"database/*\\", \\"server/db-*\\"]\' \\\\ --duration 2hr # Output:\\n# ✓ Break-glass request created\\n# Request ID: BG-20251008-001\\n# Status: Pending Approval\\n# Approvers needed: 2\\n# Expires: 2025-10-08 11:30:00 (1 hour)\\n#\\n# Notifications sent to:\\n# - security-team@example.com\\n# - platform-admin@example.com\\n```plaintext #### 2. Approve Request (Approver) ```bash\\n# First approver (Security team)\\nprovisioning break-glass approve BG-20251008-001 \\\\ --reason \\"Emergency verified via incident INC-2025-234. Database cluster confirmed down, affecting production.\\" # Output:\\n# ✓ Approval granted\\n# Approver: alice@example.com (Security Team)\\n# Approvals: 1/2\\n# Status: Pending (need 1 more approval)\\n```plaintext ```bash\\n# Second approver (Platform team)\\nprovisioning break-glass approve BG-20251008-001 \\\\ --reason \\"Confirmed with monitoring. PostgreSQL master node unreachable. Emergency access justified.\\" # Output:\\n# ✓ Approval granted\\n# Approver: bob@example.com (Platform Team)\\n# Approvals: 2/2\\n# Status: APPROVED\\n#\\n# Requester can now activate session\\n```plaintext #### 3. Activate Session ```bash\\nprovisioning break-glass activate BG-20251008-001 # Output:\\n# ✓ Emergency session activated\\n# Session ID: BGS-20251008-001\\n# Token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...\\n# Expires: 2025-10-08 12:30:00 (2 hours)\\n# Max inactivity: 30 minutes\\n#\\n# ⚠️ WARNING ⚠️\\n# - All actions are logged and monitored\\n# - Security team has been notified\\n# - Session will auto-revoke after 2 hours\\n# - Use ONLY for stated emergency purpose\\n#\\n# Export token:\\nexport EMERGENCY_TOKEN=\\"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...\\"\\n```plaintext #### 4. Use Emergency Access ```bash\\n# SSH to database server\\nprovisioning ssh connect db-master-01 \\\\ --token $EMERGENCY_TOKEN # Execute emergency commands\\nsudo systemctl status postgresql\\nsudo tail -f /var/log/postgresql/postgresql.log # Diagnose issue...\\n# Fix issue...\\n```plaintext #### 5. Revoke Session ```bash\\n# When done, immediately revoke\\nprovisioning break-glass revoke BGS-20251008-001 \\\\ --reason \\"Database cluster restored. PostgreSQL master node restarted successfully. All services online.\\" # Output:\\n# ✓ Emergency session revoked\\n# Duration: 47 minutes\\n# Actions performed: 23\\n# Audit log: /var/log/provisioning/break-glass/BGS-20251008-001.json\\n#\\n# Post-incident review scheduled: 2025-10-09 10:00am\\n```plaintext ### Web UI (Control Center) #### Request Flow 1. **Navigate**: Control Center → Security → Break-Glass\\n2. **Click**: \\"Request Emergency Access\\"\\n3. **Fill Form**: - Reason: \\"Production database cluster down\\" - Justification: (detailed description) - Duration: 2 hours - Resources: Select from dropdown or wildcard\\n4. **Submit**: Request sent to approvers #### Approver Flow 1. **Receive**: Email/Slack notification\\n2. **Navigate**: Control Center → Break-Glass → Pending Requests\\n3. **Review**: Request details, reason, justification\\n4. **Decision**: Approve or Deny\\n5. **Reason**: Provide approval/denial reason #### Monitor Active Sessions 1. **Navigate**: Control Center → Security → Break-Glass → Active Sessions\\n2. **View**: Real-time dashboard of active sessions - Who, What, When, How long - Actions performed (live) - Inactivity timer\\n3. 
**Revoke**: Emergency revoke button (if needed) --- ## Examples ### Example 1: Production Database Outage **Scenario**: PostgreSQL cluster unresponsive, affecting all users **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Production PostgreSQL cluster completely unresponsive\\" \\\\ --justification \\"Database cluster (3 nodes) not responding. All application services offline. 10,000+ users affected. Need direct SSH to diagnose and restore. Monitoring shows all nodes down. Last known state: replication failure during routine backup.\\" \\\\ --resources \'[\\"database/*\\", \\"server/db-prod-*\\"]\' \\\\ --duration 2hr\\n```plaintext **Approval 1** (Security):\\n> \\"Verified incident INC-2025-234. Database monitoring confirms cluster down. Application completely offline. Emergency justified.\\" **Approval 2** (Platform):\\n> \\"Confirmed. PostgreSQL master and replicas unreachable. On-call SRE needs immediate access. Approved.\\" **Actions Taken**: 1. SSH to db-prod-01, db-prod-02, db-prod-03\\n2. Check PostgreSQL status: `systemctl status postgresql`\\n3. Review logs: `/var/log/postgresql/`\\n4. Diagnose: Disk full on master node\\n5. Fix: Clear old WAL files, restart PostgreSQL\\n6. Verify: Cluster restored, replication working\\n7. Revoke access **Outcome**: Cluster restored in 47 minutes. Root cause: Backup retention not working. --- ### Example 2: Security Incident **Scenario**: Suspicious activity detected, need immediate containment **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Active security breach detected - need immediate containment\\" \\\\ --justification \\"IDS alerts show unauthorized access from IP 203.0.113.42 to production API servers. Multiple failed sudo attempts. Need to isolate affected servers and investigate. Potential data exfiltration in progress.\\" \\\\ --resources \'[\\"server/api-prod-*\\", \\"firewall/*\\", \\"network/*\\"]\' \\\\ --duration 4hr\\n```plaintext **Approval 1** (Security):\\n> \\"Security incident SI-2025-089 confirmed. IDS shows sustained attack from external IP. Immediate containment required. Approved.\\" **Approval 2** (Engineering Director):\\n> \\"Concur with security assessment. Production impact acceptable vs risk of data breach. Approved.\\" **Actions Taken**: 1. Firewall block on 203.0.113.42\\n2. Isolate affected API servers\\n3. Snapshot servers for forensics\\n4. Review access logs\\n5. Identify compromised service account\\n6. Rotate credentials\\n7. Restore from clean backup\\n8. Re-enable servers with patched vulnerability **Outcome**: Breach contained in 3h 15min. No data loss. Vulnerability patched across fleet. --- ### Example 3: Accidental Data Deletion **Scenario**: Critical production data accidentally deleted **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Critical customer data accidentally deleted from production\\" \\\\ --justification \\"Database migration script ran against production instead of staging. Deleted 50,000+ customer records. Need immediate restore from backup before data loss is noticed. Normal restore process requires change approval (4-6 hours). Data loss window critical.\\" \\\\ --resources \'[\\"database/customers\\", \\"backup/*\\"]\' \\\\ --duration 3hr\\n```plaintext **Approval 1** (Platform):\\n> \\"Verified data deletion in production database. 50,284 records deleted at 10:42am. Backup available from 10:00am (42 minutes ago). Time-critical restore needed. 
Approved.\\" **Approval 2** (Security):\\n> \\"Risk assessment: Restore from trusted backup less risky than data loss. Emergency justified. Ensure post-incident review of deployment process. Approved.\\" **Actions Taken**: 1. Stop application writes to affected tables\\n2. Identify latest good backup (10:00am)\\n3. Restore deleted records from backup\\n4. Verify data integrity\\n5. Compare record counts\\n6. Re-enable application writes\\n7. Notify affected users (if any noticed) **Outcome**: Data restored in 1h 38min. Only 42 minutes of data lost (from backup to deletion). Zero customer impact. --- ## Auditing & Compliance ### What is Logged Every break-glass session logs: 1. **Request Details**: - Requester identity - Reason and justification - Requested resources - Requested duration - Timestamp 2. **Approval Process**: - Each approver identity - Approval/denial reason - Approval timestamp - Team affiliation 3. **Session Activity**: - Activation timestamp - Every action performed - Resources accessed - Commands executed - Inactivity periods 4. **Revocation**: - Revocation reason - Who revoked (system or manual) - Total duration - Final status ### Retention - **Break-glass logs**: 7 years (immutable)\\n- **Cannot be deleted**: Only anonymized for GDPR\\n- **Exported to SIEM**: Real-time ### Compliance Reports ```bash\\n# Generate break-glass usage report\\nprovisioning break-glass audit \\\\ --from \\"2025-01-01\\" \\\\ --to \\"2025-12-31\\" \\\\ --format pdf \\\\ --output break-glass-2025-report.pdf # Report includes:\\n# - Total break-glass activations\\n# - Average duration\\n# - Most common reasons\\n# - Approval times\\n# - Incidents resolved\\n# - Misuse incidents (if any)\\n```plaintext --- ## Post-Incident Review ### Within 24 Hours **Required attendees**: - Requester\\n- Approvers\\n- Security team\\n- Incident commander **Agenda**: 1. **Timeline Review**: What happened, when\\n2. **Actions Taken**: What was done with emergency access\\n3. **Outcome**: Was issue resolved? Any side effects?\\n4. **Process**: Did break-glass work as intended?\\n5. **Lessons Learned**: What can be improved? ### Review Checklist - [ ] Was break-glass appropriate for this incident?\\n- [ ] Were approvals granted timely?\\n- [ ] Was access used only for stated purpose?\\n- [ ] Were any security policies violated?\\n- [ ] Could incident be prevented in future?\\n- [ ] Do we need policy updates?\\n- [ ] Do we need system changes? ### Output **Incident Report**: ```markdown\\n# Break-Glass Incident Report: BG-20251008-001 **Incident**: Production database cluster outage\\n**Duration**: 47 minutes\\n**Impact**: 10,000+ users, complete service outage ## Timeline\\n- 10:15: Incident detected\\n- 10:17: Break-glass requested\\n- 10:25: Approved (2/2)\\n- 10:27: Activated\\n- 11:02: Database restored\\n- 11:04: Session revoked ## Actions Taken\\n1. SSH access to database servers\\n2. Diagnosed disk full issue\\n3. Cleared old WAL files\\n4. Restarted PostgreSQL\\n5. Verified replication ## Root Cause\\nBackup retention job failed silently for 2 weeks, causing WAL files to accumulate until disk full. ## Prevention\\n- ✅ Add disk space monitoring alerts\\n- ✅ Fix backup retention job\\n- ✅ Test recovery procedures\\n- ✅ Implement WAL archiving to S3 ## Break-Glass Assessment\\n- ✓ Appropriate use\\n- ✓ Timely approvals\\n- ✓ No policy violations\\n- ✓ Access revoked promptly\\n```plaintext --- ## FAQ ### Q: How quickly can break-glass be activated? 
**A**: Typically 15-20 minutes: - 5 min: Request submission\\n- 10 min: Approvals (2 people)\\n- 2 min: Activation In extreme emergencies, approvers can be on standby. ### Q: Can I use break-glass for scheduled maintenance? **A**: No. Break-glass is for emergencies only. Schedule maintenance through normal change process. ### Q: What if I can\'t get 2 approvers? **A**: System requires 2 approvers from different teams. If unavailable: 1. Escalate to on-call manager\\n2. Contact security team directly\\n3. Use emergency contact list ### Q: Can approvers be from the same team? **A**: No. System enforces team diversity to prevent collusion. ### Q: What if security team revokes my session? **A**: Security team can revoke for: - Suspicious activity\\n- Policy violation\\n- Incident resolved\\n- Misuse detected You\'ll receive immediate notification. Contact security team for details. ### Q: Can I extend an active session? **A**: No. Maximum duration is 4 hours. If you need more time, submit a new request with updated justification. ### Q: What happens if I forget to revoke? **A**: Session auto-revokes after: - Maximum duration (4 hours), OR\\n- Inactivity timeout (30 minutes) Always manually revoke when done. ### Q: Is break-glass monitored? **A**: Yes. Security team monitors in real-time: - Session activation alerts\\n- Action logging\\n- Suspicious activity detection\\n- Compliance verification ### Q: Can I practice break-glass? **A**: Yes, in **development environment only**: ```bash\\nPROVISIONING_ENV=dev provisioning break-glass request \\"Test emergency access procedure\\"\\n```plaintext Never practice in staging or production. --- ## Emergency Contacts ### During Incident | Role | Contact | Response Time |\\n|------|---------|---------------|\\n| **Security On-Call** | +1-555-SECURITY | 5 minutes |\\n| **Platform On-Call** | +1-555-PLATFORM | 5 minutes |\\n| **Engineering Director** | +1-555-ENG-DIR | 15 minutes | ### Escalation Path 1. **L1**: On-call SRE\\n2. **L2**: Platform team lead\\n3. **L3**: Engineering manager\\n4. **L4**: Director of Engineering\\n5. 
**L5**: CTO ### Communication Channels - **Incident Slack**: `#incidents`\\n- **Security Slack**: `#security-alerts`\\n- **Email**: `security-team@example.com`\\n- **PagerDuty**: Break-glass policy --- ## Training Certification **I certify that I have**: - [ ] Read and understood this training guide\\n- [ ] Understand when to use (and not use) break-glass\\n- [ ] Know the approval workflow\\n- [ ] Can use the CLI commands\\n- [ ] Understand auditing and compliance requirements\\n- [ ] Will follow post-incident review process **Signature**: _________________________\\n**Date**: _________________________\\n**Next Training Due**: _________________________ (1 year) --- **Version**: 1.0.0\\n**Maintained By**: Security Team\\n**Last Updated**: 2025-10-08\\n**Next Review**: 2026-10-08","breadcrumbs":"Break Glass Training Guide » Phase 1: Request (5 minutes)","id":"1249","title":"Phase 1: Request (5 minutes)"},"125":{"body":"# Check Docker version\\ndocker --version # Check Docker is running\\ndocker ps # Expected: Docker version 20.10+ and connection successful","breadcrumbs":"Prerequisites » Docker","id":"125","title":"Docker"},"1250":{"body":"Version : 1.0.0 Date : 2025-10-08 Audience : Platform Administrators, Security Teams Prerequisites : Understanding of Cedar policy language, Provisioning platform architecture","breadcrumbs":"Cedar Policies Production Guide » Cedar Policies Production Guide","id":"1250","title":"Cedar Policies Production Guide"},"1251":{"body":"Introduction Cedar Policy Basics Production Policy Strategy Policy Templates Policy Development Workflow Testing Policies Deployment Monitoring & Auditing Troubleshooting Best Practices","breadcrumbs":"Cedar Policies Production Guide » Table of Contents","id":"1251","title":"Table of Contents"},"1252":{"body":"Cedar policies control who can do what in the Provisioning platform. 
This guide helps you create, test, and deploy production-ready Cedar policies that balance security with operational efficiency.","breadcrumbs":"Cedar Policies Production Guide » Introduction","id":"1252","title":"Introduction"},"1253":{"body":"Fine-grained : Control access at resource + action level Context-aware : Decisions based on MFA, IP, time, approvals Auditable : Every decision is logged with policy ID Hot-reload : Update policies without restarting services Type-safe : Schema validation prevents errors","breadcrumbs":"Cedar Policies Production Guide » Why Cedar?","id":"1253","title":"Why Cedar?"},"1254":{"body":"","breadcrumbs":"Cedar Policies Production Guide » Cedar Policy Basics","id":"1254","title":"Cedar Policy Basics"},"1255":{"body":"permit ( principal, # Who (user, team, role) action, # What (create, delete, deploy) resource # Where (server, cluster, environment)\\n) when { condition # Context (MFA, IP, time)\\n};\\n```plaintext ### Entities | Type | Examples | Description |\\n|------|----------|-------------|\\n| **User** | `User::\\"alice\\"` | Individual users |\\n| **Team** | `Team::\\"platform-admin\\"` | User groups |\\n| **Role** | `Role::\\"Admin\\"` | Permission levels |\\n| **Resource** | `Server::\\"web-01\\"` | Infrastructure resources |\\n| **Environment** | `Environment::\\"production\\"` | Deployment targets | ### Actions | Category | Actions |\\n|----------|---------|\\n| **Read** | `read`, `list` |\\n| **Write** | `create`, `update`, `delete` |\\n| **Deploy** | `deploy`, `rollback` |\\n| **Admin** | `ssh`, `execute`, `admin` | --- ## Production Policy Strategy ### Security Levels #### Level 1: Development (Permissive) ```cedar\\n// Developers have full access to dev environment\\npermit ( principal in Team::\\"developers\\", action, resource in Environment::\\"development\\"\\n);\\n```plaintext #### Level 2: Staging (MFA Required) ```cedar\\n// All operations require MFA\\npermit ( principal in Team::\\"developers\\", action, resource in Environment::\\"staging\\"\\n) when { context.mfa_verified == true\\n};\\n```plaintext #### Level 3: Production (MFA + Approval) ```cedar\\n// Deployments require MFA + approval\\npermit ( principal in Team::\\"platform-admin\\", action in [Action::\\"deploy\\", Action::\\"delete\\"], resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.approval_id.startsWith(\\"APPROVAL-\\")\\n};\\n```plaintext #### Level 4: Critical (Break-Glass Only) ```cedar\\n// Only emergency access\\npermit ( principal, action, resource in Resource::\\"production-database\\"\\n) when { context.emergency_access == true && context.session_approved == true\\n};\\n```plaintext --- ## Policy Templates ### 1. Role-Based Access Control (RBAC) ```cedar\\n// Admin: Full access\\npermit ( principal in Role::\\"Admin\\", action, resource\\n); // Operator: Server management + read clusters\\npermit ( principal in Role::\\"Operator\\", action in [ Action::\\"create\\", Action::\\"update\\", Action::\\"delete\\" ], resource is Server\\n); permit ( principal in Role::\\"Operator\\", action in [Action::\\"read\\", Action::\\"list\\"], resource is Cluster\\n); // Viewer: Read-only everywhere\\npermit ( principal in Role::\\"Viewer\\", action in [Action::\\"read\\", Action::\\"list\\"], resource\\n); // Auditor: Read audit logs only\\npermit ( principal in Role::\\"Auditor\\", action in [Action::\\"read\\", Action::\\"list\\"], resource is AuditLog\\n);\\n```plaintext ### 2. 
Team-Based Policies ```cedar\\n// Platform team: Infrastructure management\\npermit ( principal in Team::\\"platform\\", action in [ Action::\\"create\\", Action::\\"update\\", Action::\\"delete\\", Action::\\"deploy\\" ], resource in [Server, Cluster, Taskserv]\\n); // Security team: Access control + audit\\npermit ( principal in Team::\\"security\\", action, resource in [User, Role, AuditLog, BreakGlass]\\n); // DevOps team: Application deployments\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context.has_approval == true\\n};\\n```plaintext ### 3. Time-Based Restrictions ```cedar\\n// Deployments only during business hours\\npermit ( principal, action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.time.hour >= 9 && context.time.hour <= 17 && context.time.weekday in [\\"Monday\\", \\"Tuesday\\", \\"Wednesday\\", \\"Thursday\\", \\"Friday\\"]\\n}; // Maintenance window\\npermit ( principal in Team::\\"platform\\", action, resource\\n) when { context.maintenance_window == true\\n};\\n```plaintext ### 4. IP-Based Restrictions ```cedar\\n// Production access only from office network\\npermit ( principal, action, resource in Environment::\\"production\\"\\n) when { context.ip_address.isInRange(\\"10.0.0.0/8\\") || context.ip_address.isInRange(\\"192.168.1.0/24\\")\\n}; // VPN access for remote work\\npermit ( principal, action, resource in Environment::\\"production\\"\\n) when { context.vpn_connected == true && context.mfa_verified == true\\n};\\n```plaintext ### 5. Resource-Specific Policies ```cedar\\n// Database servers: Extra protection\\nforbid ( principal, action == Action::\\"delete\\", resource in Resource::\\"database-*\\"\\n) unless { context.emergency_access == true\\n}; // Critical clusters: Require multiple approvals\\npermit ( principal, action in [Action::\\"update\\", Action::\\"delete\\"], resource in Resource::\\"k8s-production-*\\"\\n) when { context.approval_count >= 2 && context.mfa_verified == true\\n};\\n```plaintext ### 6. Self-Service Policies ```cedar\\n// Users can manage their own MFA devices\\npermit ( principal, action in [Action::\\"create\\", Action::\\"delete\\"], resource is MfaDevice\\n) when { resource.owner == principal\\n}; // Users can view their own audit logs\\npermit ( principal, action == Action::\\"read\\", resource is AuditLog\\n) when { resource.user_id == principal.id\\n};\\n```plaintext --- ## Policy Development Workflow ### Step 1: Define Requirements **Document**: - Who needs access? (roles, teams, individuals)\\n- To what resources? (servers, clusters, environments)\\n- What actions? (read, write, deploy, delete)\\n- Under what conditions? 
(MFA, IP, time, approvals) **Example Requirements Document**: ```markdown\\n# Requirement: Production Deployment **Who**: DevOps team members\\n**What**: Deploy applications to production\\n**When**: Business hours (9am-5pm Mon-Fri)\\n**Conditions**:\\n- MFA verified\\n- Change request approved\\n- From office network or VPN\\n```plaintext ### Step 2: Write Policy ```cedar\\n@id(\\"prod-deploy-devops\\")\\n@description(\\"DevOps can deploy to production during business hours with approval\\")\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.time.hour >= 9 && context.time.hour <= 17 && context.time.weekday in [\\"Monday\\", \\"Tuesday\\", \\"Wednesday\\", \\"Thursday\\", \\"Friday\\"] && (context.ip_address.isInRange(\\"10.0.0.0/8\\") || context.vpn_connected == true)\\n};\\n```plaintext ### Step 3: Validate Syntax ```bash\\n# Use Cedar CLI to validate\\ncedar validate \\\\ --policies provisioning/config/cedar-policies/production.cedar \\\\ --schema provisioning/config/cedar-policies/schema.cedar # Expected output: ✓ Policy is valid\\n```plaintext ### Step 4: Test in Development ```bash\\n# Deploy to development environment first\\ncp production.cedar provisioning/config/cedar-policies/development.cedar # Restart orchestrator to load new policies\\nsystemctl restart provisioning-orchestrator # Test with real requests\\nprovisioning server create test-server --check\\n```plaintext ### Step 5: Review & Approve **Review Checklist**: - [ ] Policy syntax valid\\n- [ ] Policy ID unique\\n- [ ] Description clear\\n- [ ] Conditions appropriate for security level\\n- [ ] Tested in development\\n- [ ] Reviewed by security team\\n- [ ] Documented in change log ### Step 6: Deploy to Production ```bash\\n# Backup current policies\\ncp provisioning/config/cedar-policies/production.cedar \\\\ provisioning/config/cedar-policies/production.cedar.backup.$(date +%Y%m%d) # Deploy new policy\\ncp new-production.cedar provisioning/config/cedar-policies/production.cedar # Hot reload (no restart needed)\\nprovisioning cedar reload # Verify loaded\\nprovisioning cedar list\\n```plaintext --- ## Testing Policies ### Unit Testing Create test cases for each policy: ```yaml\\n# tests/cedar/prod-deploy-devops.yaml\\npolicy_id: prod-deploy-devops test_cases: - name: \\"DevOps can deploy with approval and MFA\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: true approval_id: \\"APPROVAL-123\\" time: { hour: 10, weekday: \\"Monday\\" } ip_address: \\"10.0.1.5\\" expected: Allow - name: \\"DevOps cannot deploy without MFA\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: false approval_id: \\"APPROVAL-123\\" time: { hour: 10, weekday: \\"Monday\\" } expected: Deny - name: \\"DevOps cannot deploy outside business hours\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: true approval_id: \\"APPROVAL-123\\" time: { hour: 22, weekday: \\"Monday\\" } expected: Deny\\n```plaintext Run tests: ```bash\\nprovisioning cedar test tests/cedar/\\n```plaintext ### Integration Testing Test with real API calls: ```bash\\n# Setup test user\\nexport TEST_USER=\\"alice\\"\\nexport 
TEST_TOKEN=$(provisioning login --user $TEST_USER --output token) # Test allowed action\\ncurl -H \\"Authorization: Bearer $TEST_TOKEN\\" \\\\ http://localhost:9090/api/v1/servers \\\\ -X POST -d \'{\\"name\\": \\"test-server\\"}\' # Expected: 200 OK # Test denied action (without MFA)\\ncurl -H \\"Authorization: Bearer $TEST_TOKEN\\" \\\\ http://localhost:9090/api/v1/servers/prod-server-01 \\\\ -X DELETE # Expected: 403 Forbidden (MFA required)\\n```plaintext ### Load Testing Verify policy evaluation performance: ```bash\\n# Generate load\\nprovisioning cedar bench \\\\ --policies production.cedar \\\\ --requests 10000 \\\\ --concurrency 100 # Expected: <10ms per evaluation\\n```plaintext --- ## Deployment ### Development → Staging → Production ```bash\\n#!/bin/bash\\n# deploy-policies.sh ENVIRONMENT=$1 # dev, staging, prod # Validate policies\\ncedar validate \\\\ --policies provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ --schema provisioning/config/cedar-policies/schema.cedar if [ $? -ne 0 ]; then echo \\"❌ Policy validation failed\\" exit 1\\nfi # Backup current policies\\nBACKUP_DIR=\\"provisioning/config/cedar-policies/backups/$ENVIRONMENT\\"\\nmkdir -p $BACKUP_DIR\\ncp provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ $BACKUP_DIR/$ENVIRONMENT.cedar.$(date +%Y%m%d-%H%M%S) # Deploy new policies\\nscp provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ $ENVIRONMENT-orchestrator:/etc/provisioning/cedar-policies/production.cedar # Hot reload on remote\\nssh $ENVIRONMENT-orchestrator \\"provisioning cedar reload\\" echo \\"✅ Policies deployed to $ENVIRONMENT\\"\\n```plaintext ### Rollback Procedure ```bash\\n# List backups\\nls -ltr provisioning/config/cedar-policies/backups/production/ # Restore previous version\\ncp provisioning/config/cedar-policies/backups/production/production.cedar.20251008-143000 \\\\ provisioning/config/cedar-policies/production.cedar # Reload\\nprovisioning cedar reload # Verify\\nprovisioning cedar list\\n```plaintext --- ## Monitoring & Auditing ### Monitor Authorization Decisions ```bash\\n# Query denied requests (last 24 hours)\\nprovisioning audit query \\\\ --action authorization_denied \\\\ --from \\"24h\\" \\\\ --out table # Expected output:\\n# ┌─────────┬────────┬──────────┬────────┬────────────────┐\\n# │ Time │ User │ Action │ Resour │ Reason │\\n# ├─────────┼────────┼──────────┼────────┼────────────────┤\\n# │ 10:15am │ bob │ deploy │ prod │ MFA not verif │\\n# │ 11:30am │ alice │ delete │ db-01 │ No approval │\\n# └─────────┴────────┴──────────┴────────┴────────────────┘\\n```plaintext ### Alert on Suspicious Activity ```yaml\\n# alerts/cedar-policies.yaml\\nalerts: - name: \\"High Denial Rate\\" query: \\"authorization_denied\\" threshold: 10 window: \\"5m\\" action: \\"notify:security-team\\" - name: \\"Policy Bypass Attempt\\" query: \\"action:deploy AND result:denied\\" user: \\"critical-users\\" action: \\"page:oncall\\"\\n```plaintext ### Policy Usage Statistics ```bash\\n# Which policies are most used?\\nprovisioning cedar stats --top 10 # Example output:\\n# Policy ID | Uses | Allows | Denies\\n# ----------------------|-------|--------|-------\\n# prod-deploy-devops | 1,234 | 1,100 | 134\\n# admin-full-access | 892 | 892 | 0\\n# viewer-read-only | 5,421 | 5,421 | 0\\n```plaintext --- ## Troubleshooting ### Policy Not Applying **Symptom**: Policy changes not taking effect **Solutions**: 1. 
Verify hot reload: ```bash provisioning cedar reload provisioning cedar list # Should show updated timestamp Check orchestrator logs: journalctl -u provisioning-orchestrator -f | grep cedar Restart orchestrator: systemctl restart provisioning-orchestrator","breadcrumbs":"Cedar Policies Production Guide » Core Concepts","id":"1255","title":"Core Concepts"},"1256":{"body":"Symptom : User denied access when policy should allow Debug : # Enable debug mode\\nexport PROVISIONING_DEBUG=1 # View authorization decision\\nprovisioning audit query \\\\ --user alice \\\\ --action deploy \\\\ --from \\"1h\\" \\\\ --out json | jq \'.authorization\' # Shows which policy evaluated, context used, reason for denial\\n```plaintext ### Policy Conflicts **Symptom**: Multiple policies match, unclear which applies **Resolution**: - Cedar uses **deny-override**: If any `forbid` matches, request denied\\n- Use `@priority` annotations (higher number = higher priority)\\n- Make policies more specific to avoid conflicts ```cedar\\n@priority(100)\\npermit ( principal in Role::\\"Admin\\", action, resource\\n); @priority(50)\\nforbid ( principal, action == Action::\\"delete\\", resource is Database\\n); // Admin can do anything EXCEPT delete databases\\n```plaintext --- ## Best Practices ### 1. Start Restrictive, Loosen Gradually ```cedar\\n// ❌ BAD: Too permissive initially\\npermit (principal, action, resource); // ✅ GOOD: Explicit allow, expand as needed\\npermit ( principal in Role::\\"Admin\\", action in [Action::\\"read\\", Action::\\"list\\"], resource\\n);\\n```plaintext ### 2. Use Annotations ```cedar\\n@id(\\"prod-deploy-mfa\\")\\n@description(\\"Production deployments require MFA verification\\")\\n@owner(\\"platform-team\\")\\n@reviewed(\\"2025-10-08\\")\\n@expires(\\"2026-10-08\\")\\npermit ( principal in Team::\\"platform-admin\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true\\n};\\n```plaintext ### 3. Principle of Least Privilege Give users **minimum permissions** needed: ```cedar\\n// ❌ BAD: Overly broad\\npermit (principal in Team::\\"developers\\", action, resource); // ✅ GOOD: Specific permissions\\npermit ( principal in Team::\\"developers\\", action in [Action::\\"read\\", Action::\\"create\\", Action::\\"update\\"], resource in Environment::\\"development\\"\\n);\\n```plaintext ### 4. Document Context Requirements ```cedar\\n// Context required for this policy:\\n// - mfa_verified: boolean (from JWT claims)\\n// - approval_id: string (from request header)\\n// - ip_address: IpAddr (from connection)\\npermit ( principal in Role::\\"Operator\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.ip_address.isInRange(\\"10.0.0.0/8\\")\\n};\\n```plaintext ### 5. Separate Policies by Concern **File organization**: ```plaintext\\ncedar-policies/\\n├── schema.cedar # Entity/action definitions\\n├── rbac.cedar # Role-based policies\\n├── teams.cedar # Team-based policies\\n├── time-restrictions.cedar # Time-based policies\\n├── ip-restrictions.cedar # Network-based policies\\n├── production.cedar # Production-specific\\n└── development.cedar # Development-specific\\n```plaintext ### 6. 
Version Control ```bash\\n# Git commit each policy change\\ngit add provisioning/config/cedar-policies/production.cedar\\ngit commit -m \\"feat(cedar): Add MFA requirement for prod deployments - Require MFA for all production deployments\\n- Applies to devops and platform-admin teams\\n- Effective 2025-10-08 Policy ID: prod-deploy-mfa\\nReviewed by: security-team\\nTicket: SEC-1234\\" git push\\n```plaintext ### 7. Regular Policy Audits **Quarterly review**: - [ ] Remove unused policies\\n- [ ] Tighten overly permissive policies\\n- [ ] Update for new resources/actions\\n- [ ] Verify team memberships current\\n- [ ] Test break-glass procedures --- ## Quick Reference ### Common Policy Patterns ```cedar\\n# Allow all\\npermit (principal, action, resource); # Deny all\\nforbid (principal, action, resource); # Role-based\\npermit (principal in Role::\\"Admin\\", action, resource); # Team-based\\npermit (principal in Team::\\"platform\\", action, resource); # Resource-based\\npermit (principal, action, resource in Environment::\\"production\\"); # Action-based\\npermit (principal, action in [Action::\\"read\\", Action::\\"list\\"], resource); # Condition-based\\npermit (principal, action, resource) when { context.mfa_verified == true }; # Complex\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.time.hour >= 9 && context.time.hour <= 17\\n};\\n```plaintext ### Useful Commands ```bash\\n# Validate policies\\nprovisioning cedar validate # Reload policies (hot reload)\\nprovisioning cedar reload # List active policies\\nprovisioning cedar list # Test policies\\nprovisioning cedar test tests/ # Query denials\\nprovisioning audit query --action authorization_denied # Policy statistics\\nprovisioning cedar stats\\n```plaintext --- ## Support - **Documentation**: `docs/architecture/CEDAR_AUTHORIZATION_IMPLEMENTATION.md`\\n- **Policy Examples**: `provisioning/config/cedar-policies/`\\n- **Issues**: Report to platform-team\\n- **Emergency**: Use break-glass procedure --- **Version**: 1.0.0\\n**Maintained By**: Platform Team\\n**Last Updated**: 2025-10-08","breadcrumbs":"Cedar Policies Production Guide » Unexpected Denials","id":"1256","title":"Unexpected Denials"},"1257":{"body":"Document Version : 1.0.0 Last Updated : 2025-10-08 Target Audience : Platform Administrators, Security Team Prerequisites : Control Center deployed, admin user created","breadcrumbs":"MFA Admin Setup Guide » MFA Admin Setup Guide - Production Operations Manual","id":"1257","title":"MFA Admin Setup Guide - Production Operations Manual"},"1258":{"body":"Overview MFA Requirements Admin Enrollment Process TOTP Setup (Authenticator Apps) WebAuthn Setup (Hardware Keys) Enforcing MFA via Cedar Policies Backup Codes Management Recovery Procedures Troubleshooting Best Practices Audit and Compliance","breadcrumbs":"MFA Admin Setup Guide » 📋 Table of Contents","id":"1258","title":"📋 Table of Contents"},"1259":{"body":"","breadcrumbs":"MFA Admin Setup Guide » Overview","id":"1259","title":"Overview"},"126":{"body":"# Check SOPS version\\nsops --version # Expected output: 3.10.2 or higher","breadcrumbs":"Prerequisites » SOPS","id":"126","title":"SOPS"},"1260":{"body":"Multi-Factor Authentication (MFA) adds a second layer of security beyond passwords. 
Admins must provide: Something they know : Password Something they have : TOTP code (authenticator app) or WebAuthn device (YubiKey, Touch ID)","breadcrumbs":"MFA Admin Setup Guide » What is MFA?","id":"1260","title":"What is MFA?"},"1261":{"body":"Administrators have elevated privileges including: Server creation/deletion Production deployments Secret management User management Break-glass approval MFA protects against : Password compromise (phishing, leaks, brute force) Unauthorized access to critical systems Compliance violations (SOC2, ISO 27001)","breadcrumbs":"MFA Admin Setup Guide » Why MFA for Admins?","id":"1261","title":"Why MFA for Admins?"},"1262":{"body":"Method Type Examples Recommended For TOTP Software Google Authenticator, Authy, 1Password All admins (primary) WebAuthn/FIDO2 Hardware YubiKey, Touch ID, Windows Hello High-security admins Backup Codes One-time 10 single-use codes Emergency recovery","breadcrumbs":"MFA Admin Setup Guide » MFA Methods Supported","id":"1262","title":"MFA Methods Supported"},"1263":{"body":"","breadcrumbs":"MFA Admin Setup Guide » MFA Requirements","id":"1263","title":"MFA Requirements"},"1264":{"body":"All administrators MUST enable MFA for: Production environment access Server creation/deletion operations Deployment to production clusters Secret access (KMS, dynamic secrets) Break-glass approval User management operations","breadcrumbs":"MFA Admin Setup Guide » Mandatory MFA Enforcement","id":"1264","title":"Mandatory MFA Enforcement"},"1265":{"body":"Development : MFA optional (not recommended) Staging : MFA recommended, not enforced Production : MFA mandatory (enforced by Cedar policies)","breadcrumbs":"MFA Admin Setup Guide » Grace Period","id":"1265","title":"Grace Period"},"1266":{"body":"Week 1-2: Pilot Program ├─ Platform admins enable MFA ├─ Document issues and refine process └─ Create training materials Week 3-4: Full Deployment ├─ All admins enable MFA ├─ Cedar policies enforce MFA for production └─ Monitor compliance Week 5+: Maintenance ├─ Regular MFA device audits ├─ Backup code rotation └─ User support for MFA issues\\n```plaintext --- ## Admin Enrollment Process ### Step 1: Initial Login (Password Only) ```bash\\n# Login with username/password\\nprovisioning login --user admin@example.com --workspace production # Response (partial token, MFA not yet verified):\\n{ \\"status\\": \\"mfa_required\\", \\"partial_token\\": \\"eyJhbGci...\\", # Limited access token \\"message\\": \\"MFA enrollment required for production access\\"\\n}\\n```plaintext **Partial token limitations**: - Cannot access production resources\\n- Can only access MFA enrollment endpoints\\n- Expires in 15 minutes ### Step 2: Choose MFA Method ```bash\\n# Check available MFA methods\\nprovisioning mfa methods # Output:\\nAvailable MFA Methods: • TOTP (Authenticator apps) - Recommended for all users • WebAuthn (Hardware keys) - Recommended for high-security roles • Backup Codes - Emergency recovery only # Check current MFA status\\nprovisioning mfa status # Output:\\nMFA Status: TOTP: Not enrolled WebAuthn: Not enrolled Backup Codes: Not generated MFA Required: Yes (production workspace)\\n```plaintext ### Step 3: Enroll MFA Device Choose one or both methods (TOTP + WebAuthn recommended): - [TOTP Setup](#totp-setup-authenticator-apps)\\n- [WebAuthn Setup](#webauthn-setup-hardware-keys) ### Step 4: Verify and Activate After enrollment, login again with MFA: ```bash\\n# Login (returns partial token)\\nprovisioning login --user admin@example.com --workspace 
production # Verify MFA code (returns full access token)\\nprovisioning mfa verify 123456 # Response:\\n{ \\"status\\": \\"authenticated\\", \\"access_token\\": \\"eyJhbGci...\\", # Full access token (15min) \\"refresh_token\\": \\"eyJhbGci...\\", # Refresh token (7 days) \\"mfa_verified\\": true, \\"expires_in\\": 900\\n}\\n```plaintext --- ## TOTP Setup (Authenticator Apps) ### Supported Authenticator Apps | App | Platform | Notes |\\n|-----|----------|-------|\\n| **Google Authenticator** | iOS, Android | Simple, widely used |\\n| **Authy** | iOS, Android, Desktop | Cloud backup, multi-device |\\n| **1Password** | All platforms | Integrated with password manager |\\n| **Microsoft Authenticator** | iOS, Android | Enterprise integration |\\n| **Bitwarden** | All platforms | Open source | ### Step-by-Step TOTP Enrollment #### 1. Initiate TOTP Enrollment ```bash\\nprovisioning mfa totp enroll\\n```plaintext **Output**: ```plaintext\\n╔════════════════════════════════════════════════════════════╗\\n║ TOTP ENROLLMENT ║\\n╚════════════════════════════════════════════════════════════╝ Scan this QR code with your authenticator app: █████████████████████████████████\\n█████████████████████████████████\\n████ ▄▄▄▄▄ █▀ █▀▀██ ▄▄▄▄▄ ████\\n████ █ █ █▀▄ ▀ ▄█ █ █ ████\\n████ █▄▄▄█ █ ▀▀ ▀▀█ █▄▄▄█ ████\\n████▄▄▄▄▄▄▄█ █▀█ ▀ █▄▄▄▄▄▄████\\n█████████████████████████████████\\n█████████████████████████████████ Manual entry (if QR code doesn\'t work): Secret: JBSWY3DPEHPK3PXP Account: admin@example.com Issuer: Provisioning Platform TOTP Configuration: Algorithm: SHA1 Digits: 6 Period: 30 seconds\\n```plaintext #### 2. Add to Authenticator App **Option A: Scan QR Code (Recommended)** 1. Open authenticator app (Google Authenticator, Authy, etc.)\\n2. Tap \\"+\\" or \\"Add Account\\"\\n3. Select \\"Scan QR Code\\"\\n4. Point camera at QR code displayed in terminal\\n5. Account added automatically **Option B: Manual Entry** 1. Open authenticator app\\n2. Tap \\"+\\" or \\"Add Account\\"\\n3. Select \\"Enter a setup key\\" or \\"Manual entry\\"\\n4. Enter: - **Account name**: - **Key**: `JBSWY3DPEHPK3PXP` (secret shown above) - **Type of key**: Time-based\\n5. Save account #### 3. Verify TOTP Code ```bash\\n# Get current code from authenticator app (6 digits, changes every 30s)\\n# Example code: 123456 provisioning mfa totp verify 123456\\n```plaintext **Success Response**: ```plaintext\\n✓ TOTP verified successfully! Backup Codes (SAVE THESE SECURELY): 1. A3B9-C2D7-E1F4 2. G8H5-J6K3-L9M2 3. N4P7-Q1R8-S5T2 4. U6V3-W9X1-Y7Z4 5. A2B8-C5D1-E9F3 6. G7H4-J2K6-L8M1 7. N3P9-Q5R2-S7T4 8. U1V6-W3X8-Y2Z5 9. A9B4-C7D2-E5F1 10. G3H8-J1K5-L6M9 ⚠ Store backup codes in a secure location (password manager, encrypted file)\\n⚠ Each code can only be used once\\n⚠ These codes allow access if you lose your authenticator device TOTP enrollment complete. MFA is now active for your account.\\n```plaintext #### 4. Save Backup Codes **Critical**: Store backup codes in a secure location: ```bash\\n# Copy backup codes to password manager or encrypted file\\n# NEVER store in plaintext, email, or cloud storage # Example: Store in encrypted file\\nprovisioning mfa backup-codes --save-encrypted ~/secure/mfa-backup-codes.enc # Or display again (requires existing MFA verification)\\nprovisioning mfa backup-codes --show\\n```plaintext #### 5. 
Test TOTP Login ```bash\\n# Logout to test full login flow\\nprovisioning logout # Login with password (returns partial token)\\nprovisioning login --user admin@example.com --workspace production # Get current TOTP code from authenticator app\\n# Verify with TOTP code (returns full access token)\\nprovisioning mfa verify 654321 # ✓ Full access granted\\n```plaintext --- ## WebAuthn Setup (Hardware Keys) ### Supported WebAuthn Devices | Device Type | Examples | Security Level |\\n|-------------|----------|----------------|\\n| **USB Security Keys** | YubiKey 5, SoloKey, Titan Key | Highest |\\n| **NFC Keys** | YubiKey 5 NFC, Google Titan | High (mobile compatible) |\\n| **Biometric** | Touch ID (macOS), Windows Hello, Face ID | High (convenience) |\\n| **Platform Authenticators** | Built-in laptop/phone biometrics | Medium-High | ### Step-by-Step WebAuthn Enrollment #### 1. Check WebAuthn Support ```bash\\n# Verify WebAuthn support on your system\\nprovisioning mfa webauthn check # Output:\\nWebAuthn Support: ✓ Browser: Chrome 120.0 (WebAuthn supported) ✓ Platform: macOS 14.0 (Touch ID available) ✓ USB: YubiKey 5 NFC detected\\n```plaintext #### 2. Initiate WebAuthn Registration ```bash\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Primary\\"\\n```plaintext **Output**: ```plaintext\\n╔════════════════════════════════════════════════════════════╗\\n║ WEBAUTHN DEVICE REGISTRATION ║\\n╚════════════════════════════════════════════════════════════╝ Device Name: YubiKey-Admin-Primary\\nRelying Party: provisioning.example.com ⚠ Please insert your security key and touch it when it blinks Waiting for device interaction...\\n```plaintext #### 3. Complete Device Registration **For USB Security Keys (YubiKey, SoloKey)**: 1. Insert USB key into computer\\n2. Terminal shows \\"Touch your security key\\"\\n3. Touch the gold/silver contact on the key (it will blink)\\n4. Registration completes **For Touch ID (macOS)**: 1. Terminal shows \\"Touch ID prompt will appear\\"\\n2. Touch ID dialog appears on screen\\n3. Place finger on Touch ID sensor\\n4. Registration completes **For Windows Hello**: 1. Terminal shows \\"Windows Hello prompt\\"\\n2. Windows Hello biometric prompt appears\\n3. Complete biometric scan (fingerprint/face)\\n4. Registration completes **Success Response**: ```plaintext\\n✓ WebAuthn device registered successfully! Device Details: Name: YubiKey-Admin-Primary Type: USB Security Key AAGUID: 2fc0579f-8113-47ea-b116-bb5a8db9202a Credential ID: kZj8C3bx... Registered: 2025-10-08T14:32:10Z You can now use this device for authentication.\\n```plaintext #### 4. Register Additional Devices (Optional) **Best Practice**: Register 2+ WebAuthn devices (primary + backup) ```bash\\n# Register backup YubiKey\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Backup\\" # Register Touch ID (for convenience on personal laptop)\\nprovisioning mfa webauthn register --device-name \\"MacBook-TouchID\\"\\n```plaintext #### 5. List Registered Devices ```bash\\nprovisioning mfa webauthn list # Output:\\nRegistered WebAuthn Devices: 1. YubiKey-Admin-Primary (USB Security Key) Registered: 2025-10-08T14:32:10Z Last Used: 2025-10-08T14:32:10Z 2. YubiKey-Admin-Backup (USB Security Key) Registered: 2025-10-08T14:35:22Z Last Used: Never 3. MacBook-TouchID (Platform Authenticator) Registered: 2025-10-08T14:40:15Z Last Used: 2025-10-08T15:20:05Z Total: 3 devices\\n```plaintext #### 6. 
Test WebAuthn Login ```bash\\n# Logout to test\\nprovisioning logout # Login with password (partial token)\\nprovisioning login --user admin@example.com --workspace production # Authenticate with WebAuthn\\nprovisioning mfa webauthn verify # Output:\\n⚠ Insert and touch your security key\\n[Touch YubiKey when it blinks] ✓ WebAuthn verification successful\\n✓ Full access granted\\n```plaintext --- ## Enforcing MFA via Cedar Policies ### Production MFA Enforcement Policy **Location**: `provisioning/config/cedar-policies/production.cedar` ```cedar\\n// Production operations require MFA verification\\npermit ( principal, action in [ Action::\\"server:create\\", Action::\\"server:delete\\", Action::\\"cluster:deploy\\", Action::\\"secret:read\\", Action::\\"user:manage\\" ], resource in Environment::\\"production\\"\\n) when { // MFA MUST be verified context.mfa_verified == true\\n}; // Admin role requires MFA for ALL production actions\\npermit ( principal in Role::\\"Admin\\", action, resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true\\n}; // Break-glass approval requires MFA\\npermit ( principal, action == Action::\\"break_glass:approve\\", resource\\n) when { context.mfa_verified == true && principal.role in [Role::\\"Admin\\", Role::\\"SecurityLead\\"]\\n};\\n```plaintext ### Development/Staging Policies (MFA Recommended, Not Required) **Location**: `provisioning/config/cedar-policies/development.cedar` ```cedar\\n// Development: MFA recommended but not enforced\\npermit ( principal, action, resource in Environment::\\"dev\\"\\n) when { // MFA not required for dev, but logged if missing true\\n}; // Staging: MFA recommended for destructive operations\\npermit ( principal, action in [Action::\\"server:delete\\", Action::\\"cluster:delete\\"], resource in Environment::\\"staging\\"\\n) when { // Allow without MFA but log warning context.mfa_verified == true || context has mfa_warning_acknowledged\\n};\\n```plaintext ### Policy Deployment ```bash\\n# Validate Cedar policies\\nprovisioning cedar validate --policies config/cedar-policies/ # Test policies with sample requests\\nprovisioning cedar test --policies config/cedar-policies/ \\\\ --test-file tests/cedar-test-cases.yaml # Deploy to production (requires MFA + approval)\\nprovisioning cedar deploy production --policies config/cedar-policies/production.cedar # Verify policy is active\\nprovisioning cedar status production\\n```plaintext ### Testing MFA Enforcement ```bash\\n# Test 1: Production access WITHOUT MFA (should fail)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning server create web-01 --plan medium --check # Expected: Authorization denied (MFA not verified) # Test 2: Production access WITH MFA (should succeed)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456\\nprovisioning server create web-01 --plan medium --check # Expected: Server creation initiated\\n```plaintext --- ## Backup Codes Management ### Generating Backup Codes Backup codes are automatically generated during first MFA enrollment: ```bash\\n# View existing backup codes (requires MFA verification)\\nprovisioning mfa backup-codes --show # Regenerate backup codes (invalidates old ones)\\nprovisioning mfa backup-codes --regenerate # Output:\\n⚠ WARNING: Regenerating backup codes will invalidate all existing codes.\\nContinue? (yes/no): yes New Backup Codes: 1. X7Y2-Z9A4-B6C1 2. D3E8-F5G2-H9J4 3. K6L1-M7N3-P8Q2 4. R4S9-T6U1-V3W7 5. X2Y5-Z8A3-B9C4 6. 
D7E1-F4G6-H2J8 7. K5L9-M3N6-P1Q4 8. R8S2-T5U7-V9W3 9. X4Y6-Z1A8-B3C5 10. D9E2-F7G4-H6J1 ✓ Backup codes regenerated successfully\\n⚠ Save these codes in a secure location\\n```plaintext ### Using Backup Codes **When to use backup codes**: - Lost authenticator device (phone stolen, broken)\\n- WebAuthn key not available (traveling, left at office)\\n- Authenticator app not working (time sync issue) **Login with backup code**: ```bash\\n# Login (partial token)\\nprovisioning login --user admin@example.com --workspace production # Use backup code instead of TOTP/WebAuthn\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Output:\\n✓ Backup code verified\\n⚠ Backup code consumed (9 remaining)\\n⚠ Enroll a new MFA device as soon as possible\\n✓ Full access granted (temporary)\\n```plaintext ### Backup Code Storage Best Practices **✅ DO**: - Store in password manager (1Password, Bitwarden, LastPass)\\n- Print and store in physical safe\\n- Encrypt and store in secure cloud storage (with encryption key stored separately)\\n- Share with trusted IT team member (encrypted) **❌ DON\'T**: - Email to yourself\\n- Store in plaintext file on laptop\\n- Save in browser notes/bookmarks\\n- Share via Slack/Teams/unencrypted chat\\n- Screenshot and save to Photos **Example: Encrypted Storage**: ```bash\\n# Encrypt backup codes with Age\\nprovisioning mfa backup-codes --export | \\\\ age -p -o ~/secure/mfa-backup-codes.age # Decrypt when needed\\nage -d ~/secure/mfa-backup-codes.age\\n```plaintext --- ## Recovery Procedures ### Scenario 1: Lost Authenticator Device (TOTP) **Situation**: Phone stolen/broken, authenticator app not accessible **Recovery Steps**: ```bash\\n# Step 1: Use backup code to login\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Step 2: Remove old TOTP enrollment\\nprovisioning mfa totp unenroll # Step 3: Enroll new TOTP device\\nprovisioning mfa totp enroll\\n# [Scan QR code with new phone/authenticator app]\\nprovisioning mfa totp verify 654321 # Step 4: Generate new backup codes\\nprovisioning mfa backup-codes --regenerate\\n```plaintext ### Scenario 2: Lost WebAuthn Key (YubiKey) **Situation**: YubiKey lost, stolen, or damaged **Recovery Steps**: ```bash\\n# Step 1: Login with alternative method (TOTP or backup code)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456 # TOTP from authenticator app # Step 2: List registered WebAuthn devices\\nprovisioning mfa webauthn list # Step 3: Remove lost device\\nprovisioning mfa webauthn remove \\"YubiKey-Admin-Primary\\" # Output:\\n⚠ Remove WebAuthn device \\"YubiKey-Admin-Primary\\"?\\nThis cannot be undone. 
(yes/no): yes ✓ Device removed # Step 4: Register new WebAuthn device\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Replacement\\"\\n```plaintext ### Scenario 3: All MFA Methods Lost **Situation**: Lost phone (TOTP), lost YubiKey, no backup codes **Recovery Steps** (Requires Admin Assistance): ```bash\\n# User contacts Security Team / Platform Admin # Admin performs MFA reset (requires 2+ admin approval)\\nprovisioning admin mfa-reset admin@example.com \\\\ --reason \\"Employee lost all MFA devices (phone + YubiKey)\\" \\\\ --ticket SUPPORT-12345 # Output:\\n⚠ MFA Reset Request Created Reset Request ID: MFA-RESET-20251008-001\\nUser: admin@example.com\\nReason: Employee lost all MFA devices (phone + YubiKey)\\nTicket: SUPPORT-12345 Required Approvals: 2\\nApprovers: 0/2 # Two other admins approve (with their own MFA)\\nprovisioning admin mfa-reset approve MFA-RESET-20251008-001 \\\\ --reason \\"Verified via video call + employee badge\\" # After 2 approvals, MFA is reset\\n✓ MFA reset approved (2/2 approvals)\\n✓ User admin@example.com can now re-enroll MFA devices # User re-enrolls TOTP and WebAuthn\\nprovisioning mfa totp enroll\\nprovisioning mfa webauthn register --device-name \\"YubiKey-New\\"\\n```plaintext ### Scenario 4: Backup Codes Depleted **Situation**: Used 9 out of 10 backup codes **Recovery Steps**: ```bash\\n# Login with last backup code\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify-backup D9E2-F7G4-H6J1 # Output:\\n⚠ WARNING: This is your LAST backup code!\\n✓ Backup code verified\\n⚠ Regenerate backup codes immediately! # Immediately regenerate backup codes\\nprovisioning mfa backup-codes --regenerate # Save new codes securely\\n```plaintext --- ## Troubleshooting ### Issue 1: \\"Invalid TOTP code\\" Error **Symptoms**: ```plaintext\\nprovisioning mfa verify 123456\\n✗ Error: Invalid TOTP code\\n```plaintext **Possible Causes**: 1. **Time sync issue** (most common)\\n2. Wrong secret key entered during enrollment\\n3. 
Code expired (30-second window) **Solutions**: ```bash\\n# Check time sync (device clock must be accurate)\\n# macOS:\\nsudo sntp -sS time.apple.com # Linux:\\nsudo ntpdate pool.ntp.org # Verify TOTP configuration\\nprovisioning mfa totp status # Output:\\nTOTP Configuration: Algorithm: SHA1 Digits: 6 Period: 30 seconds Time Window: ±1 period (90 seconds total) # Check system time vs NTP\\ndate && curl -s http://worldtimeapi.org/api/ip | grep datetime # If time is off by >30 seconds, sync time and retry\\n```plaintext ### Issue 2: WebAuthn Not Detected **Symptoms**: ```plaintext\\nprovisioning mfa webauthn register\\n✗ Error: No WebAuthn authenticator detected\\n```plaintext **Solutions**: ```bash\\n# Check USB connection (for hardware keys)\\n# macOS:\\nsystem_profiler SPUSBDataType | grep -i yubikey # Linux:\\nlsusb | grep -i yubico # Check browser WebAuthn support\\nprovisioning mfa webauthn check # Try different USB port (USB-A vs USB-C) # For Touch ID: Ensure finger is enrolled in System Preferences\\n# For Windows Hello: Ensure biometrics are configured in Settings\\n```plaintext ### Issue 3: \\"MFA Required\\" Despite Verification **Symptoms**: ```plaintext\\nprovisioning server create web-01\\n✗ Error: Authorization denied (MFA verification required)\\n```plaintext **Cause**: Access token expired (15 min) or MFA verification not in token claims **Solution**: ```bash\\n# Check token expiration\\nprovisioning auth status # Output:\\nAuthentication Status: Logged in: Yes User: admin@example.com Access Token: Expired (issued 16 minutes ago) MFA Verified: Yes (but token expired) # Re-authenticate (will prompt for MFA again)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 654321 # Verify MFA claim in token\\nprovisioning auth decode-token # Output (JWT claims):\\n{ \\"sub\\": \\"admin@example.com\\", \\"role\\": \\"Admin\\", \\"mfa_verified\\": true, # ← Must be true \\"mfa_method\\": \\"totp\\", \\"iat\\": 1696766400, \\"exp\\": 1696767300\\n}\\n```plaintext ### Issue 4: QR Code Not Displaying **Symptoms**: QR code appears garbled or doesn\'t display in terminal **Solutions**: ```bash\\n# Use manual entry instead\\nprovisioning mfa totp enroll --manual # Output (no QR code):\\nManual TOTP Setup: Secret: JBSWY3DPEHPK3PXP Account: admin@example.com Issuer: Provisioning Platform Enter this secret manually in your authenticator app. # Or export QR code to image file\\nprovisioning mfa totp enroll --qr-image ~/mfa-qr.png\\nopen ~/mfa-qr.png # View in image viewer\\n```plaintext ### Issue 5: Backup Code Not Working **Symptoms**: ```plaintext\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1\\n✗ Error: Invalid or already used backup code\\n```plaintext **Possible Causes**: 1. Code already used (single-use only)\\n2. Backup codes regenerated (old codes invalidated)\\n3. Typo in code entry **Solutions**: ```bash\\n# Check backup code status (requires alternative login method)\\nprovisioning mfa backup-codes --status # Output:\\nBackup Codes Status: Total Generated: 10 Used: 3 Remaining: 7 Last Used: 2025-10-05T10:15:30Z # Contact admin for MFA reset if all codes used\\n# Or use alternative MFA method (TOTP, WebAuthn)\\n```plaintext --- ## Best Practices ### For Individual Admins #### 1. 
Use Multiple MFA Methods **✅ Recommended Setup**: - **Primary**: TOTP (Google Authenticator, Authy)\\n- **Backup**: WebAuthn (YubiKey or Touch ID)\\n- **Emergency**: Backup codes (stored securely) ```bash\\n# Enroll all three\\nprovisioning mfa totp enroll\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Primary\\"\\nprovisioning mfa backup-codes --save-encrypted ~/secure/codes.enc\\n```plaintext #### 2. Secure Backup Code Storage ```bash\\n# Store in password manager (1Password example)\\nprovisioning mfa backup-codes --show | \\\\ op item create --category \\"Secure Note\\" \\\\ --title \\"Provisioning MFA Backup Codes\\" \\\\ --vault \\"Work\\" # Or encrypted file\\nprovisioning mfa backup-codes --export | \\\\ age -p -o ~/secure/mfa-backup-codes.age\\n```plaintext #### 3. Regular Device Audits ```bash\\n# Monthly: Review registered devices\\nprovisioning mfa devices --all # Remove unused/old devices\\nprovisioning mfa webauthn remove \\"Old-YubiKey\\"\\nprovisioning mfa totp remove \\"Old-Phone\\"\\n```plaintext #### 4. Test Recovery Procedures ```bash\\n# Quarterly: Test backup code login\\nprovisioning logout\\nprovisioning login --user admin@example.com --workspace dev\\nprovisioning mfa verify-backup [test-code] # Verify backup codes are accessible\\ncat ~/secure/mfa-backup-codes.enc | age -d\\n```plaintext ### For Security Teams #### 1. MFA Enrollment Verification ```bash\\n# Generate MFA enrollment report\\nprovisioning admin mfa-report --format csv > mfa-enrollment.csv # Output (CSV):\\n# User,MFA_Enabled,TOTP,WebAuthn,Backup_Codes,Last_MFA_Login,Role\\n# admin@example.com,Yes,Yes,Yes,10,2025-10-08T14:00:00Z,Admin\\n# dev@example.com,No,No,No,0,Never,Developer\\n```plaintext #### 2. Enforce MFA Deadlines ```bash\\n# Set MFA enrollment deadline\\nprovisioning admin mfa-deadline set 2025-11-01 \\\\ --roles Admin,Developer \\\\ --environment production # Send reminder emails\\nprovisioning admin mfa-remind \\\\ --users-without-mfa \\\\ --template \\"MFA enrollment required by Nov 1\\"\\n```plaintext #### 3. Monitor MFA Usage ```bash\\n# Audit: Find production logins without MFA\\nprovisioning audit query \\\\ --action \\"auth:login\\" \\\\ --filter \'mfa_verified == false && environment == \\"production\\"\' \\\\ --since 7d # Alert on repeated MFA failures\\nprovisioning monitoring alert create \\\\ --name \\"MFA Brute Force\\" \\\\ --condition \\"mfa_failures > 5 in 5min\\" \\\\ --action \\"notify security-team\\"\\n```plaintext #### 4. MFA Reset Policy **MFA Reset Requirements**: - User verification (video call + ID check)\\n- Support ticket created (incident tracking)\\n- 2+ admin approvals (different teams)\\n- Time-limited reset window (24 hours)\\n- Mandatory re-enrollment before production access ```bash\\n# MFA reset workflow\\nprovisioning admin mfa-reset create user@example.com \\\\ --reason \\"Lost all devices\\" \\\\ --ticket SUPPORT-12345 \\\\ --expires-in 24h # Requires 2 approvals\\nprovisioning admin mfa-reset approve MFA-RESET-001\\n```plaintext ### For Platform Admins #### 1. Cedar Policy Best Practices ```cedar\\n// Require MFA for high-risk actions\\npermit ( principal, action in [ Action::\\"server:delete\\", Action::\\"cluster:delete\\", Action::\\"secret:delete\\", Action::\\"user:delete\\" ], resource\\n) when { context.mfa_verified == true && context.mfa_age_seconds < 300 // MFA verified within last 5 minutes\\n};\\n```plaintext #### 2. 
MFA Grace Periods (For Rollout) ```bash\\n# Development: No MFA required\\nexport PROVISIONING_MFA_REQUIRED=false # Staging: MFA recommended (warnings only)\\nexport PROVISIONING_MFA_REQUIRED=warn # Production: MFA mandatory (strict enforcement)\\nexport PROVISIONING_MFA_REQUIRED=true\\n```plaintext #### 3. Backup Admin Account **Emergency Admin** (break-glass scenario): - Separate admin account with MFA enrollment\\n- Credentials stored in physical safe\\n- Only used when primary admins locked out\\n- Requires incident report after use ```bash\\n# Create emergency admin\\nprovisioning admin create emergency-admin@example.com \\\\ --role EmergencyAdmin \\\\ --mfa-required true \\\\ --max-concurrent-sessions 1 # Print backup codes and store in safe\\nprovisioning mfa backup-codes --show --user emergency-admin@example.com > emergency-codes.txt\\n# [Print and store in physical safe]\\n```plaintext --- ## Audit and Compliance ### MFA Audit Logging All MFA events are logged to the audit system: ```bash\\n# View MFA enrollment events\\nprovisioning audit query \\\\ --action-type \\"mfa:*\\" \\\\ --since 30d # Output (JSON):\\n[ { \\"timestamp\\": \\"2025-10-08T14:32:10Z\\", \\"action\\": \\"mfa:totp:enroll\\", \\"user\\": \\"admin@example.com\\", \\"result\\": \\"success\\", \\"device_type\\": \\"totp\\", \\"ip_address\\": \\"203.0.113.42\\" }, { \\"timestamp\\": \\"2025-10-08T14:35:22Z\\", \\"action\\": \\"mfa:webauthn:register\\", \\"user\\": \\"admin@example.com\\", \\"result\\": \\"success\\", \\"device_name\\": \\"YubiKey-Admin-Primary\\", \\"ip_address\\": \\"203.0.113.42\\" }\\n]\\n```plaintext ### Compliance Reports #### SOC2 Compliance (Access Control) ```bash\\n# Generate SOC2 access control report\\nprovisioning compliance report soc2 \\\\ --control \\"CC6.1\\" \\\\ --period \\"2025-Q3\\" # Output:\\nSOC2 Trust Service Criteria - CC6.1 (Logical Access) MFA Enforcement: ✓ MFA enabled for 100% of production admins (15/15) ✓ MFA verified for 98.7% of production logins (2,453/2,485) ✓ MFA policies enforced via Cedar authorization ✓ Failed MFA attempts logged and monitored Evidence: - Cedar policy: production.cedar (lines 15-25) - Audit logs: mfa-verification-logs-2025-q3.json - Enrollment report: mfa-enrollment-status.csv\\n```plaintext #### ISO 27001 Compliance (A.9.4.2 - Secure Log-on) ```bash\\n# ISO 27001 A.9.4.2 compliance report\\nprovisioning compliance report iso27001 \\\\ --control \\"A.9.4.2\\" \\\\ --format pdf \\\\ --output iso27001-a942-mfa-report.pdf # Report Sections:\\n# 1. MFA Implementation Details\\n# 2. Enrollment Procedures\\n# 3. Audit Trail\\n# 4. Policy Enforcement\\n# 5. Recovery Procedures\\n```plaintext #### GDPR Compliance (MFA Data Handling) ```bash\\n# GDPR data subject request (MFA data export)\\nprovisioning compliance gdpr export admin@example.com \\\\ --include mfa # Output (JSON):\\n{ \\"user\\": \\"admin@example.com\\", \\"mfa_data\\": { \\"totp_enrolled\\": true, \\"totp_enrollment_date\\": \\"2025-10-08T14:32:10Z\\", \\"webauthn_devices\\": [ { \\"name\\": \\"YubiKey-Admin-Primary\\", \\"registered\\": \\"2025-10-08T14:35:22Z\\", \\"last_used\\": \\"2025-10-08T16:20:05Z\\" } ], \\"backup_codes_remaining\\": 7, \\"mfa_login_history\\": [...] 
# Last 90 days }\\n} # GDPR deletion (MFA data removal after account deletion)\\nprovisioning compliance gdpr delete admin@example.com --include-mfa\\n```plaintext ### MFA Metrics Dashboard ```bash\\n# Generate MFA metrics\\nprovisioning admin mfa-metrics --period 30d # Output:\\nMFA Metrics (Last 30 Days) Enrollment: Total Users: 42 MFA Enabled: 38 (90.5%) TOTP Only: 22 (57.9%) WebAuthn Only: 3 (7.9%) Both TOTP + WebAuthn: 13 (34.2%) No MFA: 4 (9.5%) ⚠ Authentication: Total Logins: 3,847 MFA Verified: 3,802 (98.8%) MFA Failed: 45 (1.2%) Backup Code Used: 7 (0.2%) Devices: TOTP Devices: 35 WebAuthn Devices: 47 Backup Codes Remaining (avg): 8.3 Incidents: MFA Resets: 2 Lost Devices: 3 Lockouts: 1\\n```plaintext --- ## Quick Reference Card ### Daily Admin Operations ```bash\\n# Login with MFA\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456 # Check MFA status\\nprovisioning mfa status # View registered devices\\nprovisioning mfa devices\\n```plaintext ### MFA Management ```bash\\n# TOTP\\nprovisioning mfa totp enroll # Enroll TOTP\\nprovisioning mfa totp verify 123456 # Verify TOTP code\\nprovisioning mfa totp unenroll # Remove TOTP # WebAuthn\\nprovisioning mfa webauthn register --device-name \\"YubiKey\\" # Register key\\nprovisioning mfa webauthn list # List devices\\nprovisioning mfa webauthn remove \\"YubiKey\\" # Remove device # Backup Codes\\nprovisioning mfa backup-codes --show # View codes\\nprovisioning mfa backup-codes --regenerate # Generate new codes\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Use backup code\\n```plaintext ### Emergency Procedures ```bash\\n# Lost device recovery (use backup code)\\nprovisioning login --user admin@example.com\\nprovisioning mfa verify-backup [code]\\nprovisioning mfa totp enroll # Re-enroll new device # MFA reset (admin only)\\nprovisioning admin mfa-reset user@example.com --reason \\"Lost all devices\\" # Check MFA compliance\\nprovisioning admin mfa-report\\n```plaintext --- ## Summary Checklist ### For New Admins - [ ] Complete initial login with password\\n- [ ] Enroll TOTP (Google Authenticator, Authy)\\n- [ ] Verify TOTP code successfully\\n- [ ] Save backup codes in password manager\\n- [ ] Register WebAuthn device (YubiKey or Touch ID)\\n- [ ] Test full login flow with MFA\\n- [ ] Store backup codes in secure location\\n- [ ] Verify production access works with MFA ### For Security Team - [ ] Deploy Cedar MFA enforcement policies\\n- [ ] Verify 100% admin MFA enrollment\\n- [ ] Configure MFA audit logging\\n- [ ] Setup MFA compliance reports (SOC2, ISO 27001)\\n- [ ] Document MFA reset procedures\\n- [ ] Train admins on MFA usage\\n- [ ] Create emergency admin account (break-glass)\\n- [ ] Schedule quarterly MFA audits ### For Platform Team - [ ] Configure MFA settings in `config/mfa.toml`\\n- [ ] Deploy Cedar policies with MFA requirements\\n- [ ] Setup monitoring for MFA failures\\n- [ ] Configure alerts for MFA bypass attempts\\n- [ ] Document MFA architecture in ADR\\n- [ ] Test MFA enforcement in all environments\\n- [ ] Verify audit logs capture MFA events\\n- [ ] Create runbooks for MFA incidents --- ## Support and Resources ### Documentation - **MFA Implementation**: `/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Cedar Policies**: `/docs/operations/CEDAR_POLICIES_PRODUCTION_GUIDE.md`\\n- **Break-Glass**: `/docs/operations/BREAK_GLASS_TRAINING_GUIDE.md`\\n- **Audit Logging**: `/docs/architecture/AUDIT_LOGGING_IMPLEMENTATION.md` ### Configuration Files - **MFA 
Config**: `provisioning/config/mfa.toml`\\n- **Cedar Policies**: `provisioning/config/cedar-policies/production.cedar`\\n- **Control Center**: `provisioning/platform/control-center/config.toml` ### CLI Help ```bash\\nprovisioning mfa help # MFA command help\\nprovisioning mfa totp --help # TOTP-specific help\\nprovisioning mfa webauthn --help # WebAuthn-specific help\\n```plaintext ### Contact - **Security Team**: \\n- **Platform Team**: \\n- **Support Ticket**: --- **Document Status**: ✅ Complete\\n**Review Date**: 2025-11-08\\n**Maintained By**: Security Team, Platform Team","breadcrumbs":"MFA Admin Setup Guide » Timeline for Rollout","id":"1266","title":"Timeline for Rollout"},"1267":{"body":"A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration tools. Source : provisioning/platform/orchestrator/","breadcrumbs":"Orchestrator » Provisioning Orchestrator","id":"1267","title":"Provisioning Orchestrator"},"1268":{"body":"The orchestrator implements a hybrid multi-storage approach: Rust Orchestrator : Handles coordination, queuing, and parallel execution Nushell Scripts : Execute the actual provisioning logic Pluggable Storage : Multiple storage backends with seamless migration REST API : HTTP interface for workflow submission and monitoring","breadcrumbs":"Orchestrator » Architecture","id":"1268","title":"Architecture"},"1269":{"body":"Multi-Storage Backends : Filesystem, SurrealDB Embedded, and SurrealDB Server options Task Queue : Priority-based task scheduling with retry logic Seamless Migration : Move data between storage backends with zero downtime Feature Flags : Compile-time backend selection for minimal dependencies Parallel Execution : Multiple tasks can run concurrently Status Tracking : Real-time task status and progress monitoring Advanced Features : Authentication, audit logging, and metrics (SurrealDB) Nushell Integration : Seamless execution of existing provisioning scripts RESTful API : HTTP endpoints for workflow management Test Environment Service : Automated containerized testing for taskservs, servers, and clusters Multi-Node Support : Test complex topologies including Kubernetes and etcd clusters Docker Integration : Automated container lifecycle management via Docker API","breadcrumbs":"Orchestrator » Key Features","id":"1269","title":"Key Features"},"127":{"body":"# Check Age version\\nage --version # Expected output: 1.2.1 or higher","breadcrumbs":"Prerequisites » Age","id":"127","title":"Age"},"1270":{"body":"","breadcrumbs":"Orchestrator » Quick Start","id":"1270","title":"Quick Start"},"1271":{"body":"Default Build (Filesystem Only) : cd provisioning/platform/orchestrator\\ncargo build --release\\ncargo run -- --port 8080 --data-dir ./data With SurrealDB Support : cargo build --release --features surrealdb # Run with SurrealDB embedded\\ncargo run --features surrealdb -- --storage-type surrealdb-embedded --data-dir ./data # Run with SurrealDB server\\ncargo run --features surrealdb -- --storage-type surrealdb-server \\\\ --surrealdb-url ws://localhost:8000 \\\\ --surrealdb-username admin --surrealdb-password secret","breadcrumbs":"Orchestrator » Build and Run","id":"1271","title":"Build and Run"},"1272":{"body":"curl -X POST http://localhost:8080/workflows/servers/create \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"infra\\": \\"production\\", \\"settings\\": \\"./settings.yaml\\", \\"servers\\": [\\"web-01\\", \\"web-02\\"], \\"check_mode\\": false, 
\\"wait\\": true }\'","breadcrumbs":"Orchestrator » Submit Workflow","id":"1272","title":"Submit Workflow"},"1273":{"body":"","breadcrumbs":"Orchestrator » API Endpoints","id":"1273","title":"API Endpoints"},"1274":{"body":"GET /health - Service health status GET /tasks - List all tasks GET /tasks/{id} - Get specific task status","breadcrumbs":"Orchestrator » Core Endpoints","id":"1274","title":"Core Endpoints"},"1275":{"body":"POST /workflows/servers/create - Submit server creation workflow POST /workflows/taskserv/create - Submit taskserv creation workflow POST /workflows/cluster/create - Submit cluster creation workflow","breadcrumbs":"Orchestrator » Workflow Endpoints","id":"1275","title":"Workflow Endpoints"},"1276":{"body":"POST /test/environments/create - Create test environment GET /test/environments - List all test environments GET /test/environments/{id} - Get environment details POST /test/environments/{id}/run - Run tests in environment DELETE /test/environments/{id} - Cleanup test environment GET /test/environments/{id}/logs - Get environment logs","breadcrumbs":"Orchestrator » Test Environment Endpoints","id":"1276","title":"Test Environment Endpoints"},"1277":{"body":"The orchestrator includes a comprehensive test environment service for automated containerized testing.","breadcrumbs":"Orchestrator » Test Environment Service","id":"1277","title":"Test Environment Service"},"1278":{"body":"1. Single Taskserv Test individual taskserv in isolated container. 2. Server Simulation Test complete server configurations with multiple taskservs. 3. Cluster Topology Test multi-node cluster configurations (Kubernetes, etcd, etc.).","breadcrumbs":"Orchestrator » Test Environment Types","id":"1278","title":"Test Environment Types"},"1279":{"body":"# Quick test\\nprovisioning test quick kubernetes # Single taskserv test\\nprovisioning test env single postgres --auto-start --auto-cleanup # Server simulation\\nprovisioning test env server web-01 [containerd kubernetes cilium] --auto-start # Cluster from template\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes","breadcrumbs":"Orchestrator » Nushell CLI Integration","id":"1279","title":"Nushell CLI Integration"},"128":{"body":"","breadcrumbs":"Prerequisites » Installing Missing Dependencies","id":"128","title":"Installing Missing Dependencies"},"1280":{"body":"Predefined multi-node cluster topologies: kubernetes_3node : 3-node HA Kubernetes cluster kubernetes_single : All-in-one Kubernetes node etcd_cluster : 3-member etcd cluster containerd_test : Standalone containerd testing postgres_redis : Database stack testing","breadcrumbs":"Orchestrator » Topology Templates","id":"1280","title":"Topology Templates"},"1281":{"body":"Feature Filesystem SurrealDB Embedded SurrealDB Server Dependencies None Local database Remote server Auth/RBAC Basic Advanced Advanced Real-time No Yes Yes Scalability Limited Medium High Complexity Low Medium High Best For Development Production Distributed","breadcrumbs":"Orchestrator » Storage Backends","id":"1281","title":"Storage Backends"},"1282":{"body":"User Guide : Test Environment Guide Architecture : Orchestrator Architecture Feature Summary : Orchestrator Features","breadcrumbs":"Orchestrator » Related Documentation","id":"1282","title":"Related Documentation"},"1283":{"body":"","breadcrumbs":"Orchestrator System » Hybrid Orchestrator Architecture (v3.0.0)","id":"1283","title":"Hybrid Orchestrator Architecture (v3.0.0)"},"1284":{"body":"A production-ready hybrid Rust/Nushell 
orchestrator has been implemented to solve deep call stack limitations while preserving all Nushell business logic.","breadcrumbs":"Orchestrator System » 🚀 Orchestrator Implementation Completed (2025-09-25)","id":"1284","title":"🚀 Orchestrator Implementation Completed (2025-09-25)"},"1285":{"body":"Rust Orchestrator : High-performance coordination layer with REST API Nushell Business Logic : All existing scripts preserved and enhanced File-based Persistence : Reliable task queue using lightweight file storage Priority Processing : Intelligent task scheduling with retry logic Deep Call Stack Solution : Eliminates template.nu:71 \\"Type not supported\\" errors","breadcrumbs":"Orchestrator System » Architecture Overview","id":"1285","title":"Architecture Overview"},"1286":{"body":"# Start orchestrator in background\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background --provisioning-path \\"/usr/local/bin/provisioning\\" # Check orchestrator status\\n./scripts/start-orchestrator.nu --check # Stop orchestrator\\n./scripts/start-orchestrator.nu --stop # View logs\\ntail -f ./data/orchestrator.log","breadcrumbs":"Orchestrator System » Orchestrator Management","id":"1286","title":"Orchestrator Management"},"1287":{"body":"The orchestrator provides comprehensive workflow management:","breadcrumbs":"Orchestrator System » Workflow System","id":"1287","title":"Workflow System"},"1288":{"body":"# Submit server creation workflow\\nnu -c \\"use core/nulib/workflows/server_create.nu *; server_create_workflow \'wuji\' \'\' [] --check\\" # Traditional orchestrated server creation\\nprovisioning servers create --orchestrated --check","breadcrumbs":"Orchestrator System » Server Workflows","id":"1288","title":"Server Workflows"},"1289":{"body":"# Create taskserv workflow\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv create \'kubernetes\' \'wuji\' --check\\" # Other taskserv operations\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv delete \'kubernetes\' \'wuji\' --check\\"\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv generate \'kubernetes\' \'wuji\'\\"\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv check-updates\\"","breadcrumbs":"Orchestrator System » Taskserv Workflows","id":"1289","title":"Taskserv Workflows"},"129":{"body":"# Install Homebrew if not already installed\\n/bin/bash -c \\"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\\" # Install Nushell\\nbrew install nushell # Install KCL\\nbrew install kcl # Install Docker Desktop\\nbrew install --cask docker # Install SOPS\\nbrew install sops # Install Age\\nbrew install age # Optional: Install extras\\nbrew install k9s glow bat","breadcrumbs":"Prerequisites » macOS (using Homebrew)","id":"129","title":"macOS (using Homebrew)"},"1290":{"body":"# Create cluster workflow\\nnu -c \\"use core/nulib/workflows/cluster.nu *; cluster create \'buildkit\' \'wuji\' --check\\" # Delete cluster workflow\\nnu -c \\"use core/nulib/workflows/cluster.nu *; cluster delete \'buildkit\' \'wuji\' --check\\"","breadcrumbs":"Orchestrator System » Cluster Workflows","id":"1290","title":"Cluster Workflows"},"1291":{"body":"# List all workflows\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow list\\" # Get workflow statistics\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow stats\\" # Monitor workflow in real-time\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow monitor \\" # Check orchestrator health\\nnu -c \\"use 
core/nulib/workflows/management.nu *; workflow orchestrator\\" # Get specific workflow status\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow status \\"","breadcrumbs":"Orchestrator System » Workflow Management","id":"1291","title":"Workflow Management"},"1292":{"body":"The orchestrator exposes HTTP endpoints for external integration: Health : GET http://localhost:9090/v1/health List Tasks : GET http://localhost:9090/v1/tasks Task Status : GET http://localhost:9090/v1/tasks/{id} Server Workflow : POST http://localhost:9090/v1/workflows/servers/create Taskserv Workflow : POST http://localhost:9090/v1/workflows/taskserv/create Cluster Workflow : POST http://localhost:9090/v1/workflows/cluster/create","breadcrumbs":"Orchestrator System » REST API Endpoints","id":"1292","title":"REST API Endpoints"},"1293":{"body":"A comprehensive Cedar policy engine implementation with advanced security features, compliance checking, and anomaly detection. Source : provisioning/platform/control-center/","breadcrumbs":"Control Center » Control Center - Cedar Policy Engine","id":"1293","title":"Control Center - Cedar Policy Engine"},"1294":{"body":"","breadcrumbs":"Control Center » Key Features","id":"1294","title":"Key Features"},"1295":{"body":"Policy Evaluation : High-performance policy evaluation with context injection Versioning : Complete policy versioning with rollback capabilities Templates : Configuration-driven policy templates with variable substitution Validation : Comprehensive policy validation with syntax and semantic checking","breadcrumbs":"Control Center » Cedar Policy Engine","id":"1295","title":"Cedar Policy Engine"},"1296":{"body":"JWT Authentication : Secure token-based authentication Multi-Factor Authentication : MFA support for sensitive operations Role-Based Access Control : Flexible RBAC with policy integration Session Management : Secure session handling with timeouts","breadcrumbs":"Control Center » Security & Authentication","id":"1296","title":"Security & Authentication"},"1297":{"body":"SOC2 Type II : Complete SOC2 compliance validation HIPAA : Healthcare data protection compliance Audit Trail : Comprehensive audit logging and reporting Impact Analysis : Policy change impact assessment","breadcrumbs":"Control Center » Compliance Framework","id":"1297","title":"Compliance Framework"},"1298":{"body":"Statistical Analysis : Multiple statistical methods (Z-Score, IQR, Isolation Forest) Real-time Detection : Continuous monitoring of policy evaluations Alert Management : Configurable alerting through multiple channels Baseline Learning : Adaptive baseline calculation for improved accuracy","breadcrumbs":"Control Center » Anomaly Detection","id":"1298","title":"Anomaly Detection"},"1299":{"body":"SurrealDB Integration : High-performance graph database backend Policy Storage : Versioned policy storage with metadata Metrics Storage : Policy evaluation metrics and analytics Compliance Records : Complete compliance audit trails","breadcrumbs":"Control Center » Storage & Persistence","id":"1299","title":"Storage & Persistence"},"13":{"body":"This guide will help you install Infrastructure Automation on your machine and get it ready for use.","breadcrumbs":"Installation Guide » Installation Guide","id":"13","title":"Installation Guide"},"130":{"body":"# Update package list\\nsudo apt update # Install prerequisites\\nsudo apt install -y curl git build-essential # Install Nushell (from GitHub releases)\\ncurl -LO 
https://github.com/nushell/nushell/releases/download/0.107.1/nu-0.107.1-x86_64-linux-musl.tar.gz\\ntar xzf nu-0.107.1-x86_64-linux-musl.tar.gz\\nsudo mv nu /usr/local/bin/ # Install KCL\\ncurl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz\\ntar xzf kcl-v0.11.2-linux-amd64.tar.gz\\nsudo mv kcl /usr/local/bin/ # Install Docker\\nsudo apt install -y docker.io\\nsudo systemctl enable --now docker\\nsudo usermod -aG docker $USER # Install SOPS\\ncurl -LO https://github.com/getsops/sops/releases/download/v3.10.2/sops-v3.10.2.linux.amd64\\nchmod +x sops-v3.10.2.linux.amd64\\nsudo mv sops-v3.10.2.linux.amd64 /usr/local/bin/sops # Install Age\\nsudo apt install -y age","breadcrumbs":"Prerequisites » Ubuntu/Debian","id":"130","title":"Ubuntu/Debian"},"1300":{"body":"","breadcrumbs":"Control Center » Quick Start","id":"1300","title":"Quick Start"},"1301":{"body":"cd provisioning/platform/control-center\\ncargo build --release","breadcrumbs":"Control Center » Installation","id":"1301","title":"Installation"},"1302":{"body":"Copy and edit the configuration: cp config.toml.example config.toml Configuration example: [database]\\nurl = \\"surreal://localhost:8000\\"\\nusername = \\"root\\"\\npassword = \\"your-password\\" [auth]\\njwt_secret = \\"your-super-secret-key\\"\\nrequire_mfa = true [compliance.soc2]\\nenabled = true [anomaly]\\nenabled = true\\ndetection_threshold = 2.5","breadcrumbs":"Control Center » Configuration","id":"1302","title":"Configuration"},"1303":{"body":"./target/release/control-center server --port 8080","breadcrumbs":"Control Center » Start Server","id":"1303","title":"Start Server"},"1304":{"body":"curl -X POST http://localhost:8080/policies/evaluate \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"principal\\": {\\"id\\": \\"user123\\", \\"roles\\": [\\"Developer\\"]}, \\"action\\": {\\"id\\": \\"access\\"}, \\"resource\\": {\\"id\\": \\"sensitive-db\\", \\"classification\\": \\"confidential\\"}, \\"context\\": {\\"mfa_enabled\\": true, \\"location\\": \\"US\\"} }\'","breadcrumbs":"Control Center » Test Policy Evaluation","id":"1304","title":"Test Policy Evaluation"},"1305":{"body":"","breadcrumbs":"Control Center » Policy Examples","id":"1305","title":"Policy Examples"},"1306":{"body":"permit( principal, action == Action::\\"access\\", resource\\n) when { resource has classification && resource.classification in [\\"sensitive\\", \\"confidential\\"] && principal has mfa_enabled && principal.mfa_enabled == true\\n};","breadcrumbs":"Control Center » Multi-Factor Authentication Policy","id":"1306","title":"Multi-Factor Authentication Policy"},"1307":{"body":"permit( principal, action in [Action::\\"deploy\\", Action::\\"modify\\", Action::\\"delete\\"], resource\\n) when { resource has environment && resource.environment == \\"production\\" && principal has approval && principal.approval.approved_by in [\\"ProductionAdmin\\", \\"SRE\\"]\\n};","breadcrumbs":"Control Center » Production Approval Policy","id":"1307","title":"Production Approval Policy"},"1308":{"body":"permit( principal, action, resource\\n) when { context has geo && context.geo has country && context.geo.country in [\\"US\\", \\"CA\\", \\"GB\\", \\"DE\\"]\\n};","breadcrumbs":"Control Center » Geographic Restrictions","id":"1308","title":"Geographic Restrictions"},"1309":{"body":"","breadcrumbs":"Control Center » CLI Commands","id":"1309","title":"CLI Commands"},"131":{"body":"# Install Nushell\\nsudo dnf install -y nushell # Install KCL (from 
releases)\\ncurl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz\\ntar xzf kcl-v0.11.2-linux-amd64.tar.gz\\nsudo mv kcl /usr/local/bin/ # Install Docker\\nsudo dnf install -y docker\\nsudo systemctl enable --now docker\\nsudo usermod -aG docker $USER # Install SOPS\\nsudo dnf install -y sops # Install Age\\nsudo dnf install -y age","breadcrumbs":"Prerequisites » Fedora/RHEL","id":"131","title":"Fedora/RHEL"},"1310":{"body":"# Validate policies\\ncontrol-center policy validate policies/ # Test policy with test data\\ncontrol-center policy test policies/mfa.cedar tests/data/mfa_test.json # Analyze policy impact\\ncontrol-center policy impact policies/new_policy.cedar","breadcrumbs":"Control Center » Policy Management","id":"1310","title":"Policy Management"},"1311":{"body":"# Check SOC2 compliance\\ncontrol-center compliance soc2 # Check HIPAA compliance\\ncontrol-center compliance hipaa # Generate compliance report\\ncontrol-center compliance report --format html","breadcrumbs":"Control Center » Compliance Checking","id":"1311","title":"Compliance Checking"},"1312":{"body":"","breadcrumbs":"Control Center » API Endpoints","id":"1312","title":"API Endpoints"},"1313":{"body":"POST /policies/evaluate - Evaluate policy decision GET /policies - List all policies POST /policies - Create new policy PUT /policies/{id} - Update policy DELETE /policies/{id} - Delete policy","breadcrumbs":"Control Center » Policy Evaluation","id":"1313","title":"Policy Evaluation"},"1314":{"body":"GET /policies/{id}/versions - List policy versions GET /policies/{id}/versions/{version} - Get specific version POST /policies/{id}/rollback/{version} - Rollback to version","breadcrumbs":"Control Center » Policy Versions","id":"1314","title":"Policy Versions"},"1315":{"body":"GET /compliance/soc2 - SOC2 compliance check GET /compliance/hipaa - HIPAA compliance check GET /compliance/report - Generate compliance report","breadcrumbs":"Control Center » Compliance","id":"1315","title":"Compliance"},"1316":{"body":"GET /anomalies - List detected anomalies GET /anomalies/{id} - Get anomaly details POST /anomalies/detect - Trigger anomaly detection","breadcrumbs":"Control Center » Anomaly Detection","id":"1316","title":"Anomaly Detection"},"1317":{"body":"","breadcrumbs":"Control Center » Architecture","id":"1317","title":"Architecture"},"1318":{"body":"Policy Engine (src/policies/engine.rs) Cedar policy evaluation Context injection Caching and optimization Storage Layer (src/storage/) SurrealDB integration Policy versioning Metrics storage Compliance Framework (src/compliance/) SOC2 checker HIPAA validator Report generation Anomaly Detection (src/anomaly/) Statistical analysis Real-time monitoring Alert management Authentication (src/auth.rs) JWT token management Password hashing Session handling","breadcrumbs":"Control Center » Core Components","id":"1318","title":"Core Components"},"1319":{"body":"The system follows PAP (Project Architecture Principles) with: No hardcoded values : All behavior controlled via configuration Dynamic loading : Policies and rules loaded from configuration Template-based : Policy generation through templates Environment-aware : Different configs for dev/test/prod","breadcrumbs":"Control Center » Configuration-Driven Design","id":"1319","title":"Configuration-Driven Design"},"132":{"body":"","breadcrumbs":"Prerequisites » Network Requirements","id":"132","title":"Network Requirements"},"1320":{"body":"","breadcrumbs":"Control Center » 
Deployment","id":"1320","title":"Deployment"},"1321":{"body":"FROM rust:1.75 as builder\\nWORKDIR /app\\nCOPY . .\\nRUN cargo build --release FROM debian:bookworm-slim\\nRUN apt-get update && apt-get install -y ca-certificates\\nCOPY --from=builder /app/target/release/control-center /usr/local/bin/\\nEXPOSE 8080\\nCMD [\\"control-center\\", \\"server\\"]","breadcrumbs":"Control Center » Docker","id":"1321","title":"Docker"},"1322":{"body":"apiVersion: apps/v1\\nkind: Deployment\\nmetadata: name: control-center\\nspec: replicas: 3 template: spec: containers: - name: control-center image: control-center:latest ports: - containerPort: 8080 env: - name: DATABASE_URL value: \\"surreal://surrealdb:8000\\"","breadcrumbs":"Control Center » Kubernetes","id":"1322","title":"Kubernetes"},"1323":{"body":"Architecture : Cedar Authorization User Guide : Authentication Layer","breadcrumbs":"Control Center » Related Documentation","id":"1323","title":"Related Documentation"},"1324":{"body":"Interactive Ratatui-based installer for the Provisioning Platform with Nushell fallback for automation. Source : provisioning/platform/installer/ Status : COMPLETE - All 7 UI screens implemented (1,480 lines)","breadcrumbs":"Installer » Provisioning Platform Installer","id":"1324","title":"Provisioning Platform Installer"},"1325":{"body":"Rich Interactive TUI : Beautiful Ratatui interface with real-time feedback Headless Mode : Automation-friendly with Nushell scripts One-Click Deploy : Single command to deploy entire platform Platform Agnostic : Supports Docker, Podman, Kubernetes, OrbStack Live Progress : Real-time deployment progress and logs Health Checks : Automatic service health verification","breadcrumbs":"Installer » Features","id":"1325","title":"Features"},"1326":{"body":"cd provisioning/platform/installer\\ncargo build --release\\ncargo install --path .\\n```plaintext ## Usage ### Interactive TUI (Default) ```bash\\nprovisioning-installer\\n```plaintext The TUI guides you through: 1. Platform detection (Docker, Podman, K8s, OrbStack)\\n2. Deployment mode selection (Solo, Multi-User, CI/CD, Enterprise)\\n3. Service selection (check/uncheck services)\\n4. Configuration (domain, ports, secrets)\\n5. Live deployment with progress tracking\\n6. 
Success screen with access URLs ### Headless Mode (Automation) ```bash\\n# Quick deploy with auto-detection\\nprovisioning-installer --headless --mode solo --yes # Fully specified\\nprovisioning-installer \\\\ --headless \\\\ --platform orbstack \\\\ --mode solo \\\\ --services orchestrator,control-center,coredns \\\\ --domain localhost \\\\ --yes # Use existing config file\\nprovisioning-installer --headless --config my-deployment.toml --yes\\n```plaintext ### Configuration Generation ```bash\\n# Generate config without deploying\\nprovisioning-installer --config-only # Deploy later with generated config\\nprovisioning-installer --headless --config ~/.provisioning/installer-config.toml --yes\\n```plaintext ## Deployment Platforms ### Docker Compose ```bash\\nprovisioning-installer --platform docker --mode solo\\n```plaintext **Requirements**: Docker 20.10+, docker-compose 2.0+ ### OrbStack (macOS) ```bash\\nprovisioning-installer --platform orbstack --mode solo\\n```plaintext **Requirements**: OrbStack installed, 4GB RAM, 2 CPU cores ### Podman (Rootless) ```bash\\nprovisioning-installer --platform podman --mode solo\\n```plaintext **Requirements**: Podman 4.0+, systemd ### Kubernetes ```bash\\nprovisioning-installer --platform kubernetes --mode enterprise\\n```plaintext **Requirements**: kubectl configured, Helm 3.0+ ## Deployment Modes ### Solo Mode (Development) - **Services**: 5 core services\\n- **Resources**: 2 CPU cores, 4GB RAM, 20GB disk\\n- **Use case**: Single developer, local testing ### Multi-User Mode (Team) - **Services**: 7 services\\n- **Resources**: 4 CPU cores, 8GB RAM, 50GB disk\\n- **Use case**: Team collaboration, shared infrastructure ### CI/CD Mode (Automation) - **Services**: 8-10 services\\n- **Resources**: 8 CPU cores, 16GB RAM, 100GB disk\\n- **Use case**: Automated pipelines, webhooks ### Enterprise Mode (Production) - **Services**: 15+ services\\n- **Resources**: 16 CPU cores, 32GB RAM, 500GB disk\\n- **Use case**: Production deployments, full observability ## CLI Options ```plaintext\\nprovisioning-installer [OPTIONS] OPTIONS: --headless Run in headless mode (no TUI) --mode Deployment mode [solo|multi-user|cicd|enterprise] --platform Target platform [docker|podman|kubernetes|orbstack] --services Comma-separated list of services --domain Domain/hostname (default: localhost) --yes, -y Skip confirmation prompts --config-only Generate config without deploying --config Use existing config file -h, --help Print help -V, --version Print version\\n```plaintext ## CI/CD Integration ### GitLab CI ```yaml\\ndeploy_platform: stage: deploy script: - provisioning-installer --headless --mode cicd --platform kubernetes --yes only: - main\\n```plaintext ### GitHub Actions ```yaml\\n- name: Deploy Provisioning Platform run: | provisioning-installer --headless --mode cicd --platform docker --yes\\n```plaintext ## Nushell Scripts (Fallback) If the Rust binary is unavailable: ```bash\\ncd provisioning/platform/installer/scripts\\nnu deploy.nu --mode solo --platform orbstack --yes\\n```plaintext ## Related Documentation - **Deployment Guide**: [Platform Deployment](../guides/from-scratch.md)\\n- **Architecture**: [Platform Overview](../architecture/ARCHITECTURE_OVERVIEW.md)","breadcrumbs":"Installer » Installation","id":"1326","title":"Installation"},"1327":{"body":"","breadcrumbs":"Installer System » Provisioning Platform Installer (v3.5.0)","id":"1327","title":"Provisioning Platform Installer (v3.5.0)"},"1328":{"body":"A comprehensive installer system supporting interactive, 
headless, and unattended deployment modes with automatic configuration management via TOML and MCP integration.","breadcrumbs":"Installer System » 🚀 Flexible Installation and Configuration System","id":"1328","title":"🚀 Flexible Installation and Configuration System"},"1329":{"body":"","breadcrumbs":"Installer System » Installation Modes","id":"1329","title":"Installation Modes"},"133":{"body":"If running platform services, ensure these ports are available: Service Port Protocol Purpose Orchestrator 8080 HTTP Workflow API Control Center 9090 HTTP Policy engine KMS Service 8082 HTTP Key management API Server 8083 HTTP REST API Extension Registry 8084 HTTP Extension discovery OCI Registry 5000 HTTP Artifact storage","breadcrumbs":"Prerequisites » Firewall Ports","id":"133","title":"Firewall Ports"},"1330":{"body":"Beautiful terminal user interface with step-by-step guidance. provisioning-installer Features : 7 interactive screens with progress tracking Real-time validation and error feedback Visual feedback for each configuration step Beautiful formatting with color and styling Nushell fallback for unsupported terminals Screens : Welcome and prerequisites check Deployment mode selection Infrastructure provider selection Configuration details Resource allocation (CPU, memory) Security settings Review and confirm","breadcrumbs":"Installer System » 1. Interactive TUI Mode","id":"1330","title":"1. Interactive TUI Mode"},"1331":{"body":"CLI-only installation without interactive prompts, suitable for scripting. provisioning-installer --headless --mode solo --yes Features : Fully automated CLI options All settings via command-line flags No user interaction required Perfect for CI/CD pipelines Verbose output with progress tracking Common Usage : # Solo deployment\\nprovisioning-installer --headless --mode solo --provider upcloud --yes # Multi-user deployment\\nprovisioning-installer --headless --mode multiuser --cpu 4 --memory 8192 --yes # CI/CD mode\\nprovisioning-installer --headless --mode cicd --config ci-config.toml --yes","breadcrumbs":"Installer System » 2. Headless Mode","id":"1331","title":"2. Headless Mode"},"1332":{"body":"Zero-interaction mode using pre-defined configuration files, ideal for infrastructure automation. provisioning-installer --unattended --config config.toml Features : Load all settings from TOML file Complete automation for GitOps workflows No user interaction or prompts Suitable for production deployments Comprehensive logging and audit trails","breadcrumbs":"Installer System » 3. Unattended Mode","id":"1332","title":"3. Unattended Mode"},"1333":{"body":"Each mode configures resource allocation and features appropriately: Mode CPUs Memory Use Case Solo 2 4GB Single user development MultiUser 4 8GB Team development, testing CICD 8 16GB CI/CD pipelines, testing Enterprise 16 32GB Production deployment","breadcrumbs":"Installer System » Deployment Modes","id":"1333","title":"Deployment Modes"},"1334":{"body":"","breadcrumbs":"Installer System » Configuration System","id":"1334","title":"Configuration System"},"1335":{"body":"Define installation parameters in TOML format for unattended mode: [installation]\\nmode = \\"solo\\" # solo, multiuser, cicd, enterprise\\nprovider = \\"upcloud\\" # upcloud, aws, etc. 
[resources]\\ncpu = 2000 # millicores\\nmemory = 4096 # MB\\ndisk = 50 # GB [security]\\nenable_mfa = true\\nenable_audit = true\\ntls_enabled = true [mcp]\\nenabled = true\\nendpoint = \\"http://localhost:9090\\"","breadcrumbs":"Installer System » TOML Configuration","id":"1335","title":"TOML Configuration"},"1336":{"body":"Settings are loaded in this order (highest priority wins): CLI Arguments - Direct command-line flags Environment Variables - PROVISIONING_* variables Configuration File - TOML file specified via --config MCP Integration - AI-powered intelligent defaults Built-in Defaults - System defaults","breadcrumbs":"Installer System » Configuration Loading Priority","id":"1336","title":"Configuration Loading Priority"},"1337":{"body":"Model Context Protocol integration provides intelligent configuration: 7 AI-Powered Settings Tools : Resource recommendation engine Provider selection helper Security policy suggester Performance optimizer Compliance checker Network configuration advisor Monitoring setup assistant # Use MCP for intelligent config suggestion\\nprovisioning-installer --unattended --mcp-suggest > config.toml","breadcrumbs":"Installer System » MCP Integration","id":"1337","title":"MCP Integration"},"1338":{"body":"","breadcrumbs":"Installer System » Deployment Automation","id":"1338","title":"Deployment Automation"},"1339":{"body":"Complete deployment automation scripts for popular container runtimes: # Docker deployment\\n./provisioning/platform/installer/deploy/docker.nu --config config.toml # Podman deployment\\n./provisioning/platform/installer/deploy/podman.nu --config config.toml # Kubernetes deployment\\n./provisioning/platform/installer/deploy/kubernetes.nu --config config.toml # OrbStack deployment\\n./provisioning/platform/installer/deploy/orbstack.nu --config config.toml","breadcrumbs":"Installer System » Nushell Scripts","id":"1339","title":"Nushell Scripts"},"134":{"body":"The platform requires outbound internet access to: Download dependencies and updates Pull container images Access cloud provider APIs (AWS, UpCloud) Fetch extension packages","breadcrumbs":"Prerequisites » External Connectivity","id":"134","title":"External Connectivity"},"1340":{"body":"Infrastructure components can query MCP and install themselves: # Taskservs auto-install with dependencies\\ntaskserv install-self kubernetes\\ntaskserv install-self prometheus\\ntaskserv install-self cilium","breadcrumbs":"Installer System » Self-Installation","id":"1340","title":"Self-Installation"},"1341":{"body":"# Show interactive installer\\nprovisioning-installer # Show help\\nprovisioning-installer --help # Show available modes\\nprovisioning-installer --list-modes # Show available providers\\nprovisioning-installer --list-providers # List available templates\\nprovisioning-installer --list-templates # Validate configuration file\\nprovisioning-installer --validate --config config.toml # Dry-run (check without installing)\\nprovisioning-installer --config config.toml --check # Full unattended installation\\nprovisioning-installer --unattended --config config.toml # Headless with specific settings\\nprovisioning-installer --headless --mode solo --provider upcloud --cpu 2 --memory 4096 --yes","breadcrumbs":"Installer System » Command Reference","id":"1341","title":"Command Reference"},"1342":{"body":"","breadcrumbs":"Installer System » Integration Examples","id":"1342","title":"Integration Examples"},"1343":{"body":"# Define in Git\\ncat > infrastructure/installer.toml << EOF\\n[installation]\\nmode 
= \\"multiuser\\"\\nprovider = \\"upcloud\\" [resources]\\ncpu = 4\\nmemory = 8192\\nEOF # Deploy via CI/CD\\nprovisioning-installer --unattended --config infrastructure/installer.toml","breadcrumbs":"Installer System » GitOps Workflow","id":"1343","title":"GitOps Workflow"},"1344":{"body":"# Call installer as part of Terraform provisioning\\nresource \\"null_resource\\" \\"provisioning_installer\\" { provisioner \\"local-exec\\" { command = \\"provisioning-installer --unattended --config ${var.config_file}\\" }\\n}","breadcrumbs":"Installer System » Terraform Integration","id":"1344","title":"Terraform Integration"},"1345":{"body":"- name: Run provisioning installer shell: provisioning-installer --unattended --config /tmp/config.toml vars: ansible_python_interpreter: /usr/bin/python3","breadcrumbs":"Installer System » Ansible Integration","id":"1345","title":"Ansible Integration"},"1346":{"body":"Pre-built templates available in provisioning/config/installer-templates/: solo-dev.toml - Single developer setup team-test.toml - Team testing environment cicd-pipeline.toml - CI/CD integration enterprise-prod.toml - Production deployment kubernetes-ha.toml - High-availability Kubernetes multicloud.toml - Multi-provider setup","breadcrumbs":"Installer System » Configuration Templates","id":"1346","title":"Configuration Templates"},"1347":{"body":"User Guide : user/provisioning-installer-guide.md Deployment Guide : operations/installer-deployment-guide.md Configuration Guide : infrastructure/installer-configuration-guide.md","breadcrumbs":"Installer System » Documentation","id":"1347","title":"Documentation"},"1348":{"body":"# Show installer help\\nprovisioning-installer --help # Show detailed documentation\\nprovisioning help installer # Validate your configuration\\nprovisioning-installer --validate --config your-config.toml # Get configuration suggestions from MCP\\nprovisioning-installer --config-suggest","breadcrumbs":"Installer System » Help and Support","id":"1348","title":"Help and Support"},"1349":{"body":"If Ratatui TUI is not available, the installer automatically falls back to: Interactive Nushell prompt system Same functionality, text-based interface Full feature parity with TUI version","breadcrumbs":"Installer System » Nushell Fallback","id":"1349","title":"Nushell Fallback"},"135":{"body":"If you plan to use cloud providers, prepare credentials:","breadcrumbs":"Prerequisites » Cloud Provider Credentials (Optional)","id":"135","title":"Cloud Provider Credentials (Optional)"},"1350":{"body":"A comprehensive REST API server for remote provisioning operations, enabling thin clients and CI/CD pipeline integration. 
Source : provisioning/platform/provisioning-server/","breadcrumbs":"Provisioning Server » Provisioning API Server","id":"1350","title":"Provisioning API Server"},"1351":{"body":"Comprehensive REST API : Complete provisioning operations via HTTP JWT Authentication : Secure token-based authentication RBAC System : Role-based access control (Admin, Operator, Developer, Viewer) Async Operations : Long-running tasks with status tracking Nushell Integration : Direct execution of provisioning CLI commands Audit Logging : Complete operation tracking for compliance Metrics : Prometheus-compatible metrics endpoint CORS Support : Configurable cross-origin resource sharing Health Checks : Built-in health and readiness endpoints","breadcrumbs":"Provisioning Server » Features","id":"1351","title":"Features"},"1352":{"body":"┌─────────────────┐\\n│ REST Client │\\n│ (curl, CI/CD) │\\n└────────┬────────┘ │ HTTPS/JWT ▼\\n┌─────────────────┐\\n│ API Gateway │\\n│ - Routes │\\n│ - Auth │\\n│ - RBAC │\\n└────────┬────────┘ │ ▼\\n┌─────────────────┐\\n│ Async Task Mgr │\\n│ - Queue │\\n│ - Status │\\n└────────┬────────┘ │ ▼\\n┌─────────────────┐\\n│ Nushell Exec │\\n│ - CLI wrapper │\\n│ - Timeout │\\n└─────────────────┘\\n```plaintext ## Installation ```bash\\ncd provisioning/platform/provisioning-server\\ncargo build --release\\n```plaintext ## Configuration Create `config.toml`: ```toml\\n[server]\\nhost = \\"0.0.0.0\\"\\nport = 8083\\ncors_enabled = true [auth]\\njwt_secret = \\"your-secret-key-here\\"\\ntoken_expiry_hours = 24\\nrefresh_token_expiry_hours = 168 [provisioning]\\ncli_path = \\"/usr/local/bin/provisioning\\"\\ntimeout_seconds = 300\\nmax_concurrent_operations = 10 [logging]\\nlevel = \\"info\\"\\njson_format = false\\n```plaintext ## Usage ### Starting the Server ```bash\\n# Using config file\\nprovisioning-server --config config.toml # Custom settings\\nprovisioning-server \\\\ --host 0.0.0.0 \\\\ --port 8083 \\\\ --jwt-secret \\"my-secret\\" \\\\ --cli-path \\"/usr/local/bin/provisioning\\" \\\\ --log-level debug\\n```plaintext ### Authentication #### Login ```bash\\ncurl -X POST http://localhost:8083/v1/auth/login \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"username\\": \\"admin\\", \\"password\\": \\"admin123\\" }\'\\n```plaintext Response: ```json\\n{ \\"token\\": \\"eyJhbGc...\\", \\"refresh_token\\": \\"eyJhbGc...\\", \\"expires_in\\": 86400\\n}\\n```plaintext #### Using Token ```bash\\nexport TOKEN=\\"eyJhbGc...\\" curl -X GET http://localhost:8083/v1/servers \\\\ -H \\"Authorization: Bearer $TOKEN\\"\\n```plaintext ## API Endpoints ### Authentication - `POST /v1/auth/login` - User login\\n- `POST /v1/auth/refresh` - Refresh access token ### Servers - `GET /v1/servers` - List all servers\\n- `POST /v1/servers/create` - Create new server\\n- `DELETE /v1/servers/{id}` - Delete server\\n- `GET /v1/servers/{id}/status` - Get server status ### Taskservs - `GET /v1/taskservs` - List all taskservs\\n- `POST /v1/taskservs/create` - Create taskserv\\n- `DELETE /v1/taskservs/{id}` - Delete taskserv\\n- `GET /v1/taskservs/{id}/status` - Get taskserv status ### Workflows - `POST /v1/workflows/submit` - Submit workflow\\n- `GET /v1/workflows/{id}` - Get workflow details\\n- `GET /v1/workflows/{id}/status` - Get workflow status\\n- `POST /v1/workflows/{id}/cancel` - Cancel workflow ### Operations - `GET /v1/operations` - List all operations\\n- `GET /v1/operations/{id}` - Get operation status\\n- `POST /v1/operations/{id}/cancel` - Cancel operation ### System - `GET /health` - 
Health check (no auth required)\\n- `GET /v1/version` - Version information\\n- `GET /v1/metrics` - Prometheus metrics ## RBAC Roles ### Admin Role Full system access including all operations, workspace management, and system administration. ### Operator Role Infrastructure operations including create/delete servers, taskservs, clusters, and workflow management. ### Developer Role Read access plus SSH to servers, view workflows and operations. ### Viewer Role Read-only access to all resources and status information. ## Security Best Practices 1. **Change Default Credentials**: Update all default usernames/passwords\\n2. **Use Strong JWT Secret**: Generate secure random string (32+ characters)\\n3. **Enable TLS**: Use HTTPS in production\\n4. **Restrict CORS**: Configure specific allowed origins\\n5. **Enable mTLS**: For client certificate authentication\\n6. **Regular Token Rotation**: Implement token refresh strategy\\n7. **Audit Logging**: Enable audit logs for compliance ## CI/CD Integration ### GitHub Actions ```yaml\\n- name: Deploy Infrastructure run: | TOKEN=$(curl -X POST https://api.example.com/v1/auth/login \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"username\\":\\"${{ secrets.API_USER }}\\",\\"password\\":\\"${{ secrets.API_PASS }}\\"}\' \\\\ | jq -r \'.token\') curl -X POST https://api.example.com/v1/servers/create \\\\ -H \\"Authorization: Bearer $TOKEN\\" \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"workspace\\": \\"production\\", \\"provider\\": \\"upcloud\\", \\"plan\\": \\"2xCPU-4GB\\"}\'\\n```plaintext ## Related Documentation - **API Reference**: [REST API Documentation](../api/rest-api.md)\\n- **Architecture**: [API Gateway Integration](../architecture/integration-patterns.md)","breadcrumbs":"Provisioning Server » Architecture","id":"1352","title":"Architecture"},"1353":{"body":"This comprehensive guide covers creating, managing, and maintaining infrastructure using Infrastructure Automation.","breadcrumbs":"Infrastructure Management » Infrastructure Management Guide","id":"1353","title":"Infrastructure Management Guide"},"1354":{"body":"Infrastructure lifecycle management Server provisioning and management Task service installation and configuration Cluster deployment and orchestration Scaling and optimization strategies Monitoring and maintenance procedures Cost management and optimization","breadcrumbs":"Infrastructure Management » What You\'ll Learn","id":"1354","title":"What You\'ll Learn"},"1355":{"body":"","breadcrumbs":"Infrastructure Management » Infrastructure Concepts","id":"1355","title":"Infrastructure Concepts"},"1356":{"body":"Component Description Examples Servers Virtual machines or containers Web servers, databases, workers Task Services Software installed on servers Kubernetes, Docker, databases Clusters Groups of related services Web clusters, database clusters Networks Connectivity between resources VPCs, subnets, load balancers Storage Persistent data storage Block storage, object storage","breadcrumbs":"Infrastructure Management » Infrastructure Components","id":"1356","title":"Infrastructure Components"},"1357":{"body":"Plan → Create → Deploy → Monitor → Scale → Update → Retire\\n```plaintext Each phase has specific commands and considerations. 
## Server Management ### Understanding Server Configuration Servers are defined in KCL configuration files: ```kcl\\n# Example server configuration\\nimport models.server servers: [ server.Server { name = \\"web-01\\" provider = \\"aws\\" # aws, upcloud, local plan = \\"t3.medium\\" # Instance type/plan os = \\"ubuntu-22.04\\" # Operating system zone = \\"us-west-2a\\" # Availability zone # Network configuration vpc = \\"main\\" subnet = \\"web\\" security_groups = [\\"web\\", \\"ssh\\"] # Storage configuration storage = { root_size = \\"50GB\\" additional = [ {name = \\"data\\", size = \\"100GB\\", type = \\"gp3\\"} ] } # Task services to install taskservs = [ \\"containerd\\", \\"kubernetes\\", \\"monitoring\\" ] # Tags for organization tags = { environment = \\"production\\" team = \\"platform\\" cost_center = \\"engineering\\" } }\\n]\\n```plaintext ### Server Lifecycle Commands #### Creating Servers ```bash\\n# Plan server creation (dry run)\\nprovisioning server create --infra my-infra --check # Create servers\\nprovisioning server create --infra my-infra # Create with specific parameters\\nprovisioning server create --infra my-infra --wait --yes # Create single server type\\nprovisioning server create web --infra my-infra\\n```plaintext #### Managing Existing Servers ```bash\\n# List all servers\\nprovisioning server list --infra my-infra # Show detailed server information\\nprovisioning show servers --infra my-infra # Show specific server\\nprovisioning show servers web-01 --infra my-infra # Get server status\\nprovisioning server status web-01 --infra my-infra\\n```plaintext #### Server Operations ```bash\\n# Start/stop servers\\nprovisioning server start web-01 --infra my-infra\\nprovisioning server stop web-01 --infra my-infra # Restart servers\\nprovisioning server restart web-01 --infra my-infra # Resize server\\nprovisioning server resize web-01 --plan t3.large --infra my-infra # Update server configuration\\nprovisioning server update web-01 --infra my-infra\\n```plaintext #### SSH Access ```bash\\n# SSH to server\\nprovisioning server ssh web-01 --infra my-infra # SSH with specific user\\nprovisioning server ssh web-01 --user admin --infra my-infra # Execute command on server\\nprovisioning server exec web-01 \\"systemctl status kubernetes\\" --infra my-infra # Copy files to/from server\\nprovisioning server copy local-file.txt web-01:/tmp/ --infra my-infra\\nprovisioning server copy web-01:/var/log/app.log ./logs/ --infra my-infra\\n```plaintext #### Server Deletion ```bash\\n# Plan server deletion (dry run)\\nprovisioning server delete --infra my-infra --check # Delete specific server\\nprovisioning server delete web-01 --infra my-infra # Delete with confirmation\\nprovisioning server delete web-01 --infra my-infra --yes # Delete but keep storage\\nprovisioning server delete web-01 --infra my-infra --keepstorage\\n```plaintext ## Task Service Management ### Understanding Task Services Task services are software components installed on servers: - **Container Runtimes**: containerd, cri-o, docker\\n- **Orchestration**: kubernetes, nomad\\n- **Networking**: cilium, calico, haproxy\\n- **Storage**: rook-ceph, longhorn, nfs\\n- **Databases**: postgresql, mysql, mongodb\\n- **Monitoring**: prometheus, grafana, alertmanager ### Task Service Configuration ```kcl\\n# Task service configuration example\\ntaskservs: { kubernetes: { version = \\"1.28\\" network_plugin = \\"cilium\\" ingress_controller = \\"nginx\\" storage_class = \\"gp3\\" # Cluster configuration cluster = { name = 
\\"production\\" pod_cidr = \\"10.244.0.0/16\\" service_cidr = \\"10.96.0.0/12\\" } # Node configuration nodes = { control_plane = [\\"master-01\\", \\"master-02\\", \\"master-03\\"] workers = [\\"worker-01\\", \\"worker-02\\", \\"worker-03\\"] } } postgresql: { version = \\"15\\" port = 5432 max_connections = 200 shared_buffers = \\"256MB\\" # High availability replication = { enabled = true replicas = 2 sync_mode = \\"synchronous\\" } # Backup configuration backup = { enabled = true schedule = \\"0 2 * * *\\" # Daily at 2 AM retention = \\"30d\\" } }\\n}\\n```plaintext ### Task Service Commands #### Installing Services ```bash\\n# Install single service\\nprovisioning taskserv create kubernetes --infra my-infra # Install multiple services\\nprovisioning taskserv create containerd kubernetes cilium --infra my-infra # Install with specific version\\nprovisioning taskserv create kubernetes --version 1.28 --infra my-infra # Install on specific servers\\nprovisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra\\n```plaintext #### Managing Services ```bash\\n# List available services\\nprovisioning taskserv list # List installed services\\nprovisioning taskserv list --infra my-infra --installed # Show service details\\nprovisioning taskserv show kubernetes --infra my-infra # Check service status\\nprovisioning taskserv status kubernetes --infra my-infra # Check service health\\nprovisioning taskserv health kubernetes --infra my-infra\\n```plaintext #### Service Operations ```bash\\n# Start/stop services\\nprovisioning taskserv start kubernetes --infra my-infra\\nprovisioning taskserv stop kubernetes --infra my-infra # Restart services\\nprovisioning taskserv restart kubernetes --infra my-infra # Update services\\nprovisioning taskserv update kubernetes --infra my-infra # Configure services\\nprovisioning taskserv configure kubernetes --config cluster.yaml --infra my-infra\\n```plaintext #### Service Removal ```bash\\n# Remove service\\nprovisioning taskserv delete kubernetes --infra my-infra # Remove with data cleanup\\nprovisioning taskserv delete postgresql --cleanup-data --infra my-infra # Remove from specific servers\\nprovisioning taskserv delete kubernetes --servers worker-03 --infra my-infra\\n```plaintext ### Version Management ```bash\\n# Check for updates\\nprovisioning taskserv check-updates --infra my-infra # Check specific service updates\\nprovisioning taskserv check-updates kubernetes --infra my-infra # Show available versions\\nprovisioning taskserv versions kubernetes # Upgrade to latest version\\nprovisioning taskserv upgrade kubernetes --infra my-infra # Upgrade to specific version\\nprovisioning taskserv upgrade kubernetes --version 1.29 --infra my-infra\\n```plaintext ## Cluster Management ### Understanding Clusters Clusters are collections of services that work together to provide functionality: ```kcl\\n# Cluster configuration example\\nclusters: { web_cluster: { name = \\"web-application\\" description = \\"Web application cluster\\" # Services in the cluster services = [ { name = \\"nginx\\" replicas = 3 image = \\"nginx:1.24\\" ports = [80, 443] } { name = \\"app\\" replicas = 5 image = \\"myapp:latest\\" ports = [8080] } ] # Load balancer configuration load_balancer = { type = \\"application\\" health_check = \\"/health\\" ssl_cert = \\"wildcard.example.com\\" } # Auto-scaling auto_scaling = { min_replicas = 2 max_replicas = 10 target_cpu = 70 target_memory = 80 } }\\n}\\n```plaintext ### Cluster Commands #### Creating Clusters ```bash\\n# 
Create cluster\\nprovisioning cluster create web-cluster --infra my-infra # Create with specific configuration\\nprovisioning cluster create web-cluster --config cluster.yaml --infra my-infra # Create and deploy\\nprovisioning cluster create web-cluster --deploy --infra my-infra\\n```plaintext #### Managing Clusters ```bash\\n# List available clusters\\nprovisioning cluster list # List deployed clusters\\nprovisioning cluster list --infra my-infra --deployed # Show cluster details\\nprovisioning cluster show web-cluster --infra my-infra # Get cluster status\\nprovisioning cluster status web-cluster --infra my-infra\\n```plaintext #### Cluster Operations ```bash\\n# Deploy cluster\\nprovisioning cluster deploy web-cluster --infra my-infra # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra # Update cluster\\nprovisioning cluster update web-cluster --infra my-infra # Rolling update\\nprovisioning cluster update web-cluster --rolling --infra my-infra\\n```plaintext #### Cluster Deletion ```bash\\n# Delete cluster\\nprovisioning cluster delete web-cluster --infra my-infra # Delete with data cleanup\\nprovisioning cluster delete web-cluster --cleanup --infra my-infra\\n```plaintext ## Network Management ### Network Configuration ```kcl\\n# Network configuration\\nnetwork: { vpc = { cidr = \\"10.0.0.0/16\\" enable_dns = true enable_dhcp = true } subnets = [ { name = \\"web\\" cidr = \\"10.0.1.0/24\\" zone = \\"us-west-2a\\" public = true } { name = \\"app\\" cidr = \\"10.0.2.0/24\\" zone = \\"us-west-2b\\" public = false } { name = \\"data\\" cidr = \\"10.0.3.0/24\\" zone = \\"us-west-2c\\" public = false } ] security_groups = [ { name = \\"web\\" rules = [ {protocol = \\"tcp\\", port = 80, source = \\"0.0.0.0/0\\"} {protocol = \\"tcp\\", port = 443, source = \\"0.0.0.0/0\\"} ] } { name = \\"app\\" rules = [ {protocol = \\"tcp\\", port = 8080, source = \\"10.0.1.0/24\\"} ] } ] load_balancers = [ { name = \\"web-lb\\" type = \\"application\\" scheme = \\"internet-facing\\" subnets = [\\"web\\"] targets = [\\"web-01\\", \\"web-02\\"] } ]\\n}\\n```plaintext ### Network Commands ```bash\\n# Show network configuration\\nprovisioning network show --infra my-infra # Create network resources\\nprovisioning network create --infra my-infra # Update network configuration\\nprovisioning network update --infra my-infra # Test network connectivity\\nprovisioning network test --infra my-infra\\n```plaintext ## Storage Management ### Storage Configuration ```kcl\\n# Storage configuration\\nstorage: { # Block storage volumes = [ { name = \\"app-data\\" size = \\"100GB\\" type = \\"gp3\\" encrypted = true } ] # Object storage buckets = [ { name = \\"app-assets\\" region = \\"us-west-2\\" versioning = true encryption = \\"AES256\\" } ] # Backup configuration backup = { schedule = \\"0 1 * * *\\" # Daily at 1 AM retention = { daily = 7 weekly = 4 monthly = 12 } }\\n}\\n```plaintext ### Storage Commands ```bash\\n# Create storage resources\\nprovisioning storage create --infra my-infra # List storage\\nprovisioning storage list --infra my-infra # Backup data\\nprovisioning storage backup --infra my-infra # Restore from backup\\nprovisioning storage restore --backup latest --infra my-infra\\n```plaintext ## Monitoring and Observability ### Monitoring Setup ```bash\\n# Install monitoring stack\\nprovisioning taskserv create prometheus --infra my-infra\\nprovisioning taskserv create grafana --infra my-infra\\nprovisioning taskserv create alertmanager --infra my-infra # Configure 
monitoring\\nprovisioning taskserv configure prometheus --config monitoring.yaml --infra my-infra\\n```plaintext ### Health Checks ```bash\\n# Check overall infrastructure health\\nprovisioning health check --infra my-infra # Check specific components\\nprovisioning health check servers --infra my-infra\\nprovisioning health check taskservs --infra my-infra\\nprovisioning health check clusters --infra my-infra # Continuous monitoring\\nprovisioning health monitor --infra my-infra --watch\\n```plaintext ### Metrics and Alerting ```bash\\n# Get infrastructure metrics\\nprovisioning metrics get --infra my-infra # Set up alerts\\nprovisioning alerts create --config alerts.yaml --infra my-infra # List active alerts\\nprovisioning alerts list --infra my-infra\\n```plaintext ## Cost Management ### Cost Monitoring ```bash\\n# Show current costs\\nprovisioning cost show --infra my-infra # Cost breakdown by component\\nprovisioning cost breakdown --infra my-infra # Cost trends\\nprovisioning cost trends --period 30d --infra my-infra # Set cost alerts\\nprovisioning cost alert --threshold 1000 --infra my-infra\\n```plaintext ### Cost Optimization ```bash\\n# Analyze cost optimization opportunities\\nprovisioning cost optimize --infra my-infra # Show unused resources\\nprovisioning cost unused --infra my-infra # Right-size recommendations\\nprovisioning cost recommendations --infra my-infra\\n```plaintext ## Scaling Strategies ### Manual Scaling ```bash\\n# Scale servers\\nprovisioning server scale --count 5 --infra my-infra # Scale specific service\\nprovisioning taskserv scale kubernetes --nodes 3 --infra my-infra # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra\\n```plaintext ### Auto-scaling Configuration ```kcl\\n# Auto-scaling configuration\\nauto_scaling: { servers = { min_count = 2 max_count = 10 # Scaling metrics cpu_threshold = 70 memory_threshold = 80 # Scaling behavior scale_up_cooldown = \\"5m\\" scale_down_cooldown = \\"10m\\" } clusters = { web_cluster = { min_replicas = 3 max_replicas = 20 metrics = [ {type = \\"cpu\\", target = 70} {type = \\"memory\\", target = 80} {type = \\"requests\\", target = 1000} ] } }\\n}\\n```plaintext ## Disaster Recovery ### Backup Strategies ```bash\\n# Full infrastructure backup\\nprovisioning backup create --type full --infra my-infra # Incremental backup\\nprovisioning backup create --type incremental --infra my-infra # Schedule automated backups\\nprovisioning backup schedule --daily --time \\"02:00\\" --infra my-infra\\n```plaintext ### Recovery Procedures ```bash\\n# List available backups\\nprovisioning backup list --infra my-infra # Restore infrastructure\\nprovisioning restore --backup latest --infra my-infra # Partial restore\\nprovisioning restore --backup latest --components servers --infra my-infra # Test restore (dry run)\\nprovisioning restore --backup latest --test --infra my-infra\\n```plaintext ## Advanced Infrastructure Patterns ### Multi-Region Deployment ```kcl\\n# Multi-region configuration\\nregions: { primary = { name = \\"us-west-2\\" servers = [\\"web-01\\", \\"web-02\\", \\"db-01\\"] availability_zones = [\\"us-west-2a\\", \\"us-west-2b\\"] } secondary = { name = \\"us-east-1\\" servers = [\\"web-03\\", \\"web-04\\", \\"db-02\\"] availability_zones = [\\"us-east-1a\\", \\"us-east-1b\\"] } # Cross-region replication replication = { database = { primary = \\"us-west-2\\" replicas = [\\"us-east-1\\"] sync_mode = \\"async\\" } storage = { sync_schedule = \\"*/15 * * * *\\" # Every 15 minutes } 
}\\n}\\n```plaintext ### Blue-Green Deployment ```bash\\n# Create green environment\\nprovisioning generate infra --from production --name production-green # Deploy to green\\nprovisioning server create --infra production-green\\nprovisioning taskserv create --infra production-green\\nprovisioning cluster deploy --infra production-green # Switch traffic to green\\nprovisioning network switch --from production --to production-green # Decommission blue\\nprovisioning server delete --infra production --yes\\n```plaintext ### Canary Deployment ```bash\\n# Create canary environment\\nprovisioning cluster create web-cluster-canary --replicas 1 --infra my-infra # Route small percentage of traffic\\nprovisioning network route --target web-cluster-canary --weight 10 --infra my-infra # Monitor canary metrics\\nprovisioning metrics monitor web-cluster-canary --infra my-infra # Promote or rollback\\nprovisioning cluster promote web-cluster-canary --infra my-infra\\n# or\\nprovisioning cluster rollback web-cluster-canary --infra my-infra\\n```plaintext ## Troubleshooting Infrastructure ### Common Issues #### Server Creation Failures ```bash\\n# Check provider status\\nprovisioning provider status aws # Validate server configuration\\nprovisioning server validate web-01 --infra my-infra # Check quota limits\\nprovisioning provider quota --infra my-infra # Debug server creation\\nprovisioning --debug server create web-01 --infra my-infra\\n```plaintext #### Service Installation Failures ```bash\\n# Check service prerequisites\\nprovisioning taskserv check kubernetes --infra my-infra # Validate service configuration\\nprovisioning taskserv validate kubernetes --infra my-infra # Check service logs\\nprovisioning taskserv logs kubernetes --infra my-infra # Debug service installation\\nprovisioning --debug taskserv create kubernetes --infra my-infra\\n```plaintext #### Network Connectivity Issues ```bash\\n# Test network connectivity\\nprovisioning network test --infra my-infra # Check security groups\\nprovisioning network security-groups --infra my-infra # Trace network path\\nprovisioning network trace --from web-01 --to db-01 --infra my-infra\\n```plaintext ### Performance Optimization ```bash\\n# Analyze performance bottlenecks\\nprovisioning performance analyze --infra my-infra # Get performance recommendations\\nprovisioning performance recommendations --infra my-infra # Monitor resource utilization\\nprovisioning performance monitor --infra my-infra --duration 1h\\n```plaintext ## Testing Infrastructure The provisioning system includes a comprehensive **Test Environment Service** for automated testing of infrastructure components before deployment. ### Why Test Infrastructure? Testing infrastructure before production deployment helps: - **Validate taskserv configurations** before installing on production servers\\n- **Test integration** between multiple taskservs\\n- **Verify cluster topologies** (Kubernetes, etcd, etc.) before deployment\\n- **Catch configuration errors** early in the development cycle\\n- **Ensure compatibility** between components ### Test Environment Types #### 1. 
Single Taskserv Testing Test individual taskservs in isolated containers: ```bash\\n# Quick test (create, run, cleanup automatically)\\nprovisioning test quick kubernetes # Single taskserv with custom resources\\nprovisioning test env single postgres \\\\ --cpu 2000 \\\\ --memory 4096 \\\\ --auto-start \\\\ --auto-cleanup # Test with specific infrastructure context\\nprovisioning test env single redis --infra my-infra\\n```plaintext #### 2. Server Simulation Test complete server configurations with multiple taskservs: ```bash\\n# Simulate web server with multiple taskservs\\nprovisioning test env server web-01 [containerd kubernetes cilium] \\\\ --auto-start # Simulate database server\\nprovisioning test env server db-01 [postgres redis] \\\\ --infra prod-stack \\\\ --auto-start\\n```plaintext #### 3. Multi-Node Cluster Testing Test complex cluster topologies before production deployment: ```bash\\n# Test 3-node Kubernetes cluster\\nprovisioning test topology load kubernetes_3node | \\\\ test env cluster kubernetes --auto-start # Test etcd cluster\\nprovisioning test topology load etcd_cluster | \\\\ test env cluster etcd --auto-start # Test single-node Kubernetes\\nprovisioning test topology load kubernetes_single | \\\\ test env cluster kubernetes --auto-start\\n```plaintext ### Managing Test Environments ```bash\\n# List all test environments\\nprovisioning test env list # Check environment status\\nprovisioning test env status # View environment logs\\nprovisioning test env logs # Cleanup environment when done\\nprovisioning test env cleanup \\n```plaintext ### Available Topology Templates Pre-configured multi-node cluster templates: | Template | Description | Use Case |\\n|----------|-------------|----------|\\n| `kubernetes_3node` | 3-node HA K8s cluster | Production-like K8s testing |\\n| `kubernetes_single` | All-in-one K8s node | Development K8s testing |\\n| `etcd_cluster` | 3-member etcd cluster | Distributed consensus testing |\\n| `containerd_test` | Standalone containerd | Container runtime testing |\\n| `postgres_redis` | Database stack | Database integration testing | ### Test Environment Workflow Typical testing workflow: ```bash\\n# 1. Test new taskserv before deploying\\nprovisioning test quick kubernetes # 2. If successful, test server configuration\\nprovisioning test env server k8s-node [containerd kubernetes cilium] \\\\ --auto-start # 3. Test complete cluster topology\\nprovisioning test topology load kubernetes_3node | \\\\ test env cluster kubernetes --auto-start # 4. Deploy to production\\nprovisioning server create --infra production\\nprovisioning taskserv create kubernetes --infra production\\n```plaintext ### CI/CD Integration Integrate infrastructure testing into CI/CD pipelines: ```yaml\\n# GitLab CI example\\ntest-infrastructure: stage: test script: # Start orchestrator - ./scripts/start-orchestrator.nu --background # Test critical taskservs - provisioning test quick kubernetes - provisioning test quick postgres - provisioning test quick redis # Test cluster topology - provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start artifacts: when: on_failure paths: - test-logs/\\n```plaintext ### Prerequisites Test environments require: 1. 
**Docker Running**: Test environments use Docker containers ```bash docker ps # Should work without errors Orchestrator Running : The orchestrator manages test containers cd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background","breadcrumbs":"Infrastructure Management » Infrastructure Lifecycle","id":"1357","title":"Infrastructure Lifecycle"},"1358":{"body":"Custom Topology Testing Create custom topology configurations: # custom-topology.toml\\n[my_cluster]\\nname = \\"Custom Test Cluster\\"\\ncluster_type = \\"custom\\" [[my_cluster.nodes]]\\nname = \\"node-01\\"\\nrole = \\"primary\\"\\ntaskservs = [\\"postgres\\", \\"redis\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 2000\\nmemory_mb = 4096 [[my_cluster.nodes]]\\nname = \\"node-02\\"\\nrole = \\"replica\\"\\ntaskservs = [\\"postgres\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 1000\\nmemory_mb = 2048\\n```plaintext Load and test custom topology: ```bash\\nprovisioning test env cluster custom-app custom-topology.toml --auto-start\\n```plaintext #### Integration Testing Test taskserv dependencies: ```bash\\n# Test Kubernetes dependencies in order\\nprovisioning test quick containerd\\nprovisioning test quick etcd\\nprovisioning test quick kubernetes\\nprovisioning test quick cilium # Test complete stack\\nprovisioning test env server k8s-stack \\\\ [containerd etcd kubernetes cilium] \\\\ --auto-start\\n```plaintext ### Documentation For complete test environment documentation: - **Test Environment Guide**: `docs/user/test-environment-guide.md`\\n- **Detailed Usage**: `docs/user/test-environment-usage.md`\\n- **Orchestrator README**: `provisioning/platform/orchestrator/README.md` ## Best Practices ### 1. Infrastructure Design - **Principle of Least Privilege**: Grant minimal necessary access\\n- **Defense in Depth**: Multiple layers of security\\n- **High Availability**: Design for failure resilience\\n- **Scalability**: Plan for growth from the start ### 2. Operational Excellence ```bash\\n# Always validate before applying changes\\nprovisioning validate config --infra my-infra # Use check mode for dry runs\\nprovisioning server create --check --infra my-infra # Monitor continuously\\nprovisioning health monitor --infra my-infra # Regular backups\\nprovisioning backup schedule --daily --infra my-infra\\n```plaintext ### 3. Security ```bash\\n# Regular security updates\\nprovisioning taskserv update --security-only --infra my-infra # Encrypt sensitive data\\nprovisioning sops settings.k --infra my-infra # Audit access\\nprovisioning audit logs --infra my-infra\\n```plaintext ### 4. Cost Optimization ```bash\\n# Regular cost reviews\\nprovisioning cost analyze --infra my-infra # Right-size resources\\nprovisioning cost optimize --apply --infra my-infra # Use reserved instances for predictable workloads\\nprovisioning server reserve --infra my-infra\\n```plaintext ## Next Steps Now that you understand infrastructure management: 1. **Learn about extensions**: [Extension Development Guide](extension-development.md)\\n2. **Master configuration**: [Configuration Guide](configuration.md)\\n3. **Explore advanced examples**: [Examples and Tutorials](examples/)\\n4. **Set up monitoring and alerting**\\n5. **Implement automated scaling**\\n6. 
**Plan disaster recovery procedures** You now have the knowledge to build and manage robust, scalable cloud infrastructure!","breadcrumbs":"Infrastructure Management » Advanced Testing","id":"1358","title":"Advanced Testing"},"1359":{"body":"","breadcrumbs":"Infrastructure from Code Guide » Infrastructure-from-Code (IaC) Guide","id":"1359","title":"Infrastructure-from-Code (IaC) Guide"},"136":{"body":"AWS Access Key ID AWS Secret Access Key Configured via ~/.aws/credentials or environment variables","breadcrumbs":"Prerequisites » AWS","id":"136","title":"AWS"},"1360":{"body":"The Infrastructure-from-Code system automatically detects technologies in your project and infers infrastructure requirements based on organization-specific rules. It consists of three main commands: detect : Scan a project and identify technologies complete : Analyze gaps and recommend infrastructure components ifc : Full-pipeline orchestration (workflow)","breadcrumbs":"Infrastructure from Code Guide » Overview","id":"1360","title":"Overview"},"1361":{"body":"","breadcrumbs":"Infrastructure from Code Guide » Quick Start","id":"1361","title":"Quick Start"},"1362":{"body":"Scan a project directory for detected technologies: provisioning detect /path/to/project --out json\\n```plaintext **Output Example:** ```json\\n{ \\"detections\\": [ {\\"technology\\": \\"nodejs\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"postgres\\", \\"confidence\\": 0.92} ], \\"overall_confidence\\": 0.93\\n}\\n```plaintext ### 2. Analyze Infrastructure Gaps Get a completeness assessment and recommendations: ```bash\\nprovisioning complete /path/to/project --out json\\n```plaintext **Output Example:** ```json\\n{ \\"completeness\\": 1.0, \\"changes_needed\\": 2, \\"is_safe\\": true, \\"change_summary\\": \\"+ Adding: postgres-backup, pg-monitoring\\"\\n}\\n```plaintext ### 3. Run Full Workflow Orchestrate detection → completion → assessment pipeline: ```bash\\nprovisioning ifc /path/to/project --org default\\n```plaintext **Output:** ```plaintext\\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\\n🔄 Infrastructure-from-Code Workflow\\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ STEP 1: Technology Detection\\n────────────────────────────\\n✓ Detected 2 technologies STEP 2: Infrastructure Completion\\n─────────────────────────────────\\n✓ Completeness: 1% ✅ Workflow Complete\\n```plaintext ## Command Reference ### detect Scan and detect technologies in a project. **Usage:** ```bash\\nprovisioning detect [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to analyze (default: current directory) **Options:** - `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)\\n- `-C, --high-confidence-only`: Only show detections with confidence > 0.8\\n- `--pretty`: Pretty-print JSON/YAML output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Detect with default text output\\nprovisioning detect /path/to/project # Get JSON output for parsing\\nprovisioning detect /path/to/project --out json | jq \'.detections\' # Show only high-confidence detections\\nprovisioning detect /path/to/project --high-confidence-only # Pretty-printed YAML output\\nprovisioning detect /path/to/project --out yaml --pretty\\n```plaintext ### complete Analyze infrastructure completeness and recommend changes. 
**Usage:** ```bash\\nprovisioning complete [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to analyze (default: current directory) **Options:** - `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)\\n- `-c, --check`: Check mode (report only, no changes)\\n- `--pretty`: Pretty-print JSON/YAML output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Analyze completeness\\nprovisioning complete /path/to/project # Get detailed JSON report\\nprovisioning complete /path/to/project --out json # Check mode (dry-run, no changes)\\nprovisioning complete /path/to/project --check\\n```plaintext ### ifc (workflow) Run the full Infrastructure-from-Code pipeline. **Usage:** ```bash\\nprovisioning ifc [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to process (default: current directory) **Options:** - `--org TEXT`: Organization name for rule loading (default: `default`)\\n- `-o, --out TEXT`: Output format - `text`, `json` (default: `text`)\\n- `--apply`: Apply recommendations (future feature)\\n- `-v, --verbose`: Verbose output with timing\\n- `--pretty`: Pretty-print output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Run workflow with default rules\\nprovisioning ifc /path/to/project # Run with organization-specific rules\\nprovisioning ifc /path/to/project --org acme-corp # Verbose output with timing\\nprovisioning ifc /path/to/project --verbose # JSON output for automation\\nprovisioning ifc /path/to/project --out json\\n```plaintext ## Organization-Specific Inference Rules Customize how infrastructure is inferred for your organization. ### Understanding Inference Rules An inference rule tells the system: \\"If we detect technology X, we should recommend taskservice Y.\\" **Rule Structure:** ```yaml\\nversion: \\"1.0.0\\"\\norganization: \\"your-org\\"\\nrules: - name: \\"rule-name\\" technology: [\\"detected-tech\\"] infers: \\"required-taskserv\\" confidence: 0.85 reason: \\"Why this taskserv is needed\\" required: true\\n```plaintext ### Creating Custom Rules Create an organization-specific rules file: ```bash\\n# ACME Corporation rules\\ncat > $PROVISIONING/config/inference-rules/acme-corp.yaml << \'EOF\'\\nversion: \\"1.0.0\\"\\norganization: \\"acme-corp\\"\\ndescription: \\"ACME Corporation infrastructure standards\\" rules: - name: \\"nodejs-to-redis\\" technology: [\\"nodejs\\", \\"express\\"] infers: \\"redis\\" confidence: 0.85 reason: \\"Node.js applications need caching\\" required: false - name: \\"postgres-to-backup\\" technology: [\\"postgres\\"] infers: \\"postgres-backup\\" confidence: 0.95 reason: \\"All databases require backup strategy\\" required: true - name: \\"all-services-monitoring\\" technology: [\\"nodejs\\", \\"python\\", \\"postgres\\"] infers: \\"monitoring\\" confidence: 0.90 reason: \\"ACME requires monitoring on production services\\" required: true\\nEOF\\n```plaintext Then use them: ```bash\\nprovisioning ifc /path/to/project --org acme-corp\\n```plaintext ### Default Rules If no organization rules are found, the system uses sensible defaults: - Node.js + Express → Redis (caching)\\n- Node.js → Nginx (reverse proxy)\\n- Database → Backup (data protection)\\n- Docker → Kubernetes (orchestration)\\n- Python → Gunicorn (WSGI server)\\n- PostgreSQL → Monitoring (production safety) ## Output Formats ### Text Output (Default) Human-readable format with visual indicators: ```plaintext\\nSTEP 1: Technology Detection\\n────────────────────────────\\n✓ Detected 2 
technologies STEP 2: Infrastructure Completion\\n─────────────────────────────────\\n✓ Completeness: 1%\\n```plaintext ### JSON Output Structured format for automation and parsing: ```bash\\nprovisioning detect /path/to/project --out json | jq \'.detections[0]\'\\n```plaintext Output: ```json\\n{ \\"technology\\": \\"nodejs\\", \\"confidence\\": 0.8333333134651184, \\"evidence_count\\": 1\\n}\\n```plaintext ### YAML Output Alternative structured format: ```bash\\nprovisioning detect /path/to/project --out yaml\\n```plaintext ## Practical Examples ### Example 1: Node.js + PostgreSQL Project ```bash\\n# Step 1: Detect\\n$ provisioning detect my-app\\n✓ Detected: nodejs, express, postgres, docker # Step 2: Complete\\n$ provisioning complete my-app\\n✓ Changes needed: 3 - redis (caching) - nginx (reverse proxy) - pg-backup (database backup) # Step 3: Full workflow\\n$ provisioning ifc my-app --org acme-corp\\n```plaintext ### Example 2: Python Django Project ```bash\\n$ provisioning detect django-app --out json\\n{ \\"detections\\": [ {\\"technology\\": \\"python\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"django\\", \\"confidence\\": 0.92} ]\\n} # Inferred requirements (with gunicorn, monitoring, backup)\\n```plaintext ### Example 3: Microservices Architecture ```bash\\n$ provisioning ifc microservices/ --org mycompany --verbose\\n🔍 Processing microservices/ - service-a: nodejs + postgres - service-b: python + redis - service-c: go + mongodb ✓ Detected common patterns\\n✓ Applied 12 inference rules\\n✓ Generated deployment plan\\n```plaintext ## Integration with Automation ### CI/CD Pipeline Example ```bash\\n#!/bin/bash\\n# Check infrastructure completeness in CI/CD PROJECT_PATH=${1:-.}\\nCOMPLETENESS=$(provisioning complete $PROJECT_PATH --out json | jq \'.completeness\') if (( $(echo \\"$COMPLETENESS < 0.9\\" | bc -l) )); then echo \\"❌ Infrastructure completeness too low: $COMPLETENESS\\" exit 1\\nfi echo \\"✅ Infrastructure is complete: $COMPLETENESS\\"\\n```plaintext ### Configuration as Code Integration ```bash\\n# Generate JSON for infrastructure config\\nprovisioning detect /path/to/project --out json > infra-report.json # Use in your config processing\\ncat infra-report.json | jq \'.detections[]\' | while read -r tech; do echo \\"Processing technology: $tech\\"\\ndone\\n```plaintext ## Troubleshooting ### \\"Detector binary not found\\" **Solution:** Ensure the provisioning project is properly built: ```bash\\ncd $PROVISIONING/platform\\ncargo build --release --bin provisioning-detector\\n```plaintext ### No technologies detected **Check:** 1. Project path is correct: `provisioning detect /actual/path`\\n2. Project contains recognizable technologies (package.json, Dockerfile, requirements.txt, etc.)\\n3. Use `--debug` flag for more details: `provisioning detect /path --debug` ### Organization rules not being applied **Check:** 1. Rules file exists: `$PROVISIONING/config/inference-rules/{org}.yaml`\\n2. Organization name is correct: `provisioning ifc /path --org myorg`\\n3. 
Verify rules structure with: `cat $PROVISIONING/config/inference-rules/myorg.yaml` ## Advanced Usage ### Custom Rule Template Generate a template for a new organization: ```bash\\n# Template will be created with proper structure\\nprovisioning rules create --org neworg\\n```plaintext ### Validate Rule Files ```bash\\n# Check for syntax errors\\nprovisioning rules validate /path/to/rules.yaml\\n```plaintext ### Export Rules for Integration Export as Rust code for embedding: ```bash\\nprovisioning rules export myorg --format rust > rules.rs\\n```plaintext ## Best Practices 1. **Organize by Organization**: Keep separate rules for different organizations\\n2. **High Confidence First**: Start with rules you\'re confident about (confidence > 0.8)\\n3. **Document Reasons**: Always fill in the `reason` field for maintainability\\n4. **Test Locally**: Run on sample projects before applying organization-wide\\n5. **Version Control**: Commit inference rules to version control\\n6. **Review Changes**: Always inspect recommendations with `--check` first ## Related Commands ```bash\\n# View available taskservs that can be inferred\\nprovisioning taskserv list # Create inferred infrastructure\\nprovisioning taskserv create {inferred-name} # View current configuration\\nprovisioning env | grep PROVISIONING\\n```plaintext ## Support and Documentation - **Full CLI Help**: `provisioning help`\\n- **Specific Command Help**: `provisioning help detect`\\n- **Configuration Guide**: See `CONFIG_ENCRYPTION_GUIDE.md`\\n- **Task Services**: See `SERVICE_MANAGEMENT_GUIDE.md` --- ## Quick Reference ### 3-Step Workflow ```bash\\n# 1. Detect technologies\\nprovisioning detect /path/to/project # 2. Analyze infrastructure gaps\\nprovisioning complete /path/to/project # 3. Run full workflow (detect + complete)\\nprovisioning ifc /path/to/project --org myorg\\n```plaintext ### Common Commands | Task | Command |\\n|------|---------|\\n| **Detect technologies** | `provisioning detect /path` |\\n| **Get JSON output** | `provisioning detect /path --out json` |\\n| **Check completeness** | `provisioning complete /path` |\\n| **Dry-run (check mode)** | `provisioning complete /path --check` |\\n| **Full workflow** | `provisioning ifc /path --org myorg` |\\n| **Verbose output** | `provisioning ifc /path --verbose` |\\n| **Debug mode** | `provisioning detect /path --debug` | ### Output Formats ```bash\\n# Text (human-readable)\\nprovisioning detect /path --out text # JSON (for automation)\\nprovisioning detect /path --out json | jq \'.detections\' # YAML (for configuration)\\nprovisioning detect /path --out yaml\\n```plaintext ### Organization Rules #### Use Organization Rules ```bash\\nprovisioning ifc /path --org acme-corp\\n```plaintext #### Create Rules File ```bash\\nmkdir -p $PROVISIONING/config/inference-rules\\ncat > $PROVISIONING/config/inference-rules/myorg.yaml << \'EOF\'\\nversion: \\"1.0.0\\"\\norganization: \\"myorg\\"\\nrules: - name: \\"nodejs-to-redis\\" technology: [\\"nodejs\\"] infers: \\"redis\\" confidence: 0.85 reason: \\"Caching layer\\" required: false\\nEOF\\n```plaintext ### Example: Node.js + PostgreSQL ```bash\\n$ provisioning detect myapp\\n✓ Detected: nodejs, postgres $ provisioning complete myapp\\n✓ Changes: +redis, +nginx, +pg-backup $ provisioning ifc myapp --org default\\n✓ Detection: 2 technologies\\n✓ Completion: recommended changes\\n✅ Workflow complete\\n```plaintext ### CI/CD Integration ```bash\\n#!/bin/bash\\n# Check infrastructure is complete before deploy\\nCOMPLETENESS=$(provisioning 
complete . --out json | jq \'.completeness\') if (( $(echo \\"$COMPLETENESS < 0.9\\" | bc -l) )); then echo \\"Infrastructure incomplete: $COMPLETENESS\\" exit 1\\nfi\\n```plaintext ### JSON Output Examples #### Detect Output ```json\\n{ \\"detections\\": [ {\\"technology\\": \\"nodejs\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"postgres\\", \\"confidence\\": 0.92} ], \\"overall_confidence\\": 0.93\\n}\\n```plaintext #### Complete Output ```json\\n{ \\"completeness\\": 1.0, \\"changes_needed\\": 2, \\"is_safe\\": true, \\"change_summary\\": \\"+ redis, + monitoring\\"\\n}\\n```plaintext ### Flag Reference | Flag | Short | Purpose |\\n|------|-------|---------|\\n| `--out TEXT` | `-o` | Output format: text, json, yaml |\\n| `--debug` | `-x` | Enable debug output |\\n| `--pretty` | | Pretty-print JSON/YAML |\\n| `--check` | `-c` | Dry-run (detect/complete) |\\n| `--org TEXT` | | Organization name (ifc) |\\n| `--verbose` | `-v` | Verbose output (ifc) |\\n| `--apply` | | Apply changes (ifc, future) | ### Troubleshooting | Issue | Solution |\\n|-------|----------|\\n| \\"Detector binary not found\\" | `cd $PROVISIONING/platform && cargo build --release` |\\n| No technologies detected | Check file types (.py, .js, go.mod, package.json, etc.) |\\n| Organization rules not found | Verify file exists: `$PROVISIONING/config/inference-rules/{org}.yaml` |\\n| Invalid path error | Use absolute path: `provisioning detect /full/path` | ### Environment Variables | Variable | Purpose |\\n|----------|---------|\\n| `$PROVISIONING` | Path to provisioning root |\\n| `$PROVISIONING_ORG` | Default organization (optional) | ### Default Inference Rules - Node.js + Express → Redis (caching)\\n- Node.js → Nginx (reverse proxy)\\n- Database → Backup (data protection)\\n- Docker → Kubernetes (orchestration)\\n- Python → Gunicorn (WSGI)\\n- PostgreSQL → Monitoring (production) ### Useful Aliases ```bash\\n# Add to shell config\\nalias detect=\'provisioning detect\'\\nalias complete=\'provisioning complete\'\\nalias ifc=\'provisioning ifc\' # Usage\\ndetect /my/project\\ncomplete /my/project\\nifc /my/project --org myorg\\n```plaintext ### Tips & Tricks **Parse JSON in bash:** ```bash\\nprovisioning detect . --out json | \\\\ jq \'.detections[] | .technology\' | \\\\ sort | uniq\\n```plaintext **Watch for changes:** ```bash\\nwatch -n 5 \'provisioning complete . --out json | jq \\".completeness\\"\'\\n```plaintext **Generate reports:** ```bash\\nprovisioning detect . --out yaml > detection-report.yaml\\nprovisioning complete . --out yaml > completion-report.yaml\\n```plaintext **Validate all organizations:** ```bash\\nfor org in $PROVISIONING/config/inference-rules/*.yaml; do org_name=$(basename \\"$org\\" .yaml) echo \\"Testing $org_name...\\" provisioning ifc . --org \\"$org_name\\" --check\\ndone\\n```plaintext ### Related Guides - Full guide: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md`\\n- Inference rules: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md#organization-specific-inference-rules`\\n- Service management: `docs/user/SERVICE_MANAGEMENT_QUICKREF.md`\\n- Configuration: `docs/user/CONFIG_ENCRYPTION_QUICKREF.md`","breadcrumbs":"Infrastructure from Code Guide » 1. Detect Technologies in Your Project","id":"1362","title":"1. 
Detect Technologies in Your Project"},"1363":{"body":"","breadcrumbs":"Batch Workflow System » Batch Workflow System (v3.1.0 - TOKEN-OPTIMIZED ARCHITECTURE)","id":"1363","title":"Batch Workflow System (v3.1.0 - TOKEN-OPTIMIZED ARCHITECTURE)"},"1364":{"body":"A comprehensive batch workflow system has been implemented using 10 token-optimized agents achieving 85-90% token efficiency over monolithic approaches. The system enables provider-agnostic batch operations with mixed provider support (UpCloud + AWS + local).","breadcrumbs":"Batch Workflow System » 🚀 Batch Workflow System Completed (2025-09-25)","id":"1364","title":"🚀 Batch Workflow System Completed (2025-09-25)"},"1365":{"body":"Provider-Agnostic Design : Single workflows supporting multiple cloud providers KCL Schema Integration : Type-safe workflow definitions with comprehensive validation Dependency Resolution : Topological sorting with soft/hard dependency support State Management : Checkpoint-based recovery with rollback capabilities Real-time Monitoring : Live workflow progress tracking and health monitoring Token Optimization : 85-90% efficiency using parallel specialized agents","breadcrumbs":"Batch Workflow System » Key Achievements","id":"1365","title":"Key Achievements"},"1366":{"body":"# Submit batch workflow from KCL definition\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.k\\" # Monitor batch workflow progress\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch monitor \\" # List batch workflows with filtering\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch list --status Running\\" # Get detailed batch status\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch status \\" # Initiate rollback for failed workflow\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch rollback \\" # Show batch workflow statistics\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch stats\\"","breadcrumbs":"Batch Workflow System » Batch Workflow Commands","id":"1366","title":"Batch Workflow Commands"},"1367":{"body":"Batch workflows are defined using KCL schemas in kcl/workflows.k: # Example batch workflow with mixed providers\\nbatch_workflow: BatchWorkflow = { name = \\"multi_cloud_deployment\\" version = \\"1.0.0\\" storage_backend = \\"surrealdb\\" # or \\"filesystem\\" parallel_limit = 5 rollback_enabled = True operations = [ { id = \\"upcloud_servers\\" type = \\"server_batch\\" provider = \\"upcloud\\" dependencies = [] server_configs = [ {name = \\"web-01\\", plan = \\"1xCPU-2GB\\", zone = \\"de-fra1\\"}, {name = \\"web-02\\", plan = \\"1xCPU-2GB\\", zone = \\"us-nyc1\\"} ] }, { id = \\"aws_taskservs\\" type = \\"taskserv_batch\\" provider = \\"aws\\" dependencies = [\\"upcloud_servers\\"] taskservs = [\\"kubernetes\\", \\"cilium\\", \\"containerd\\"] } ]\\n}","breadcrumbs":"Batch Workflow System » KCL Workflow Schema","id":"1367","title":"KCL Workflow Schema"},"1368":{"body":"Extended orchestrator API for batch workflow management: Submit Batch : POST http://localhost:9090/v1/workflows/batch/submit Batch Status : GET http://localhost:9090/v1/workflows/batch/{id} List Batches : GET http://localhost:9090/v1/workflows/batch Monitor Progress : GET http://localhost:9090/v1/workflows/batch/{id}/progress Initiate Rollback : POST http://localhost:9090/v1/workflows/batch/{id}/rollback Batch Statistics : GET http://localhost:9090/v1/workflows/batch/stats","breadcrumbs":"Batch Workflow System » REST API Endpoints (Batch Operations)","id":"1368","title":"REST API Endpoints (Batch 
Operations)"},"1369":{"body":"Provider Agnostic : Mix UpCloud, AWS, and local providers in single workflows Type Safety : KCL schema validation prevents runtime errors Dependency Management : Automatic resolution with failure handling State Recovery : Checkpoint-based recovery from any failure point Real-time Monitoring : Live progress tracking with detailed status","breadcrumbs":"Batch Workflow System » System Benefits","id":"1369","title":"System Benefits"},"137":{"body":"UpCloud username UpCloud password Configured via environment variables or config files","breadcrumbs":"Prerequisites » UpCloud","id":"137","title":"UpCloud"},"1370":{"body":"","breadcrumbs":"CLI Architecture » Modular CLI Architecture (v3.2.0 - MAJOR REFACTORING)","id":"1370","title":"Modular CLI Architecture (v3.2.0 - MAJOR REFACTORING)"},"1371":{"body":"A comprehensive CLI refactoring transforming the monolithic 1,329-line script into a modular, maintainable architecture with domain-driven design.","breadcrumbs":"CLI Architecture » 🚀 CLI Refactoring Completed (2025-09-30)","id":"1371","title":"🚀 CLI Refactoring Completed (2025-09-30)"},"1372":{"body":"Main File Reduction : 1,329 lines → 211 lines (84% reduction) Domain Handlers : 7 focused modules (infrastructure, orchestration, development, workspace, configuration, utilities, generation) Code Duplication : 50+ instances eliminated through centralized flag handling Command Registry : 80+ shortcuts for improved user experience Bi-directional Help : provisioning help ws = provisioning ws help Test Coverage : Comprehensive test suite with 6 test groups","breadcrumbs":"CLI Architecture » Architecture Improvements","id":"1372","title":"Architecture Improvements"},"1373":{"body":"","breadcrumbs":"CLI Architecture » Command Shortcuts Reference","id":"1373","title":"Command Shortcuts Reference"},"1374":{"body":"[Full docs: provisioning help infra] s → server (create, delete, list, ssh, price) t, task → taskserv (create, delete, list, generate, check-updates) cl → cluster (create, delete, list) i, infras → infra (list, validate)","breadcrumbs":"CLI Architecture » Infrastructure","id":"1374","title":"Infrastructure"},"1375":{"body":"[Full docs: provisioning help orch] wf, flow → workflow (list, status, monitor, stats, cleanup) bat → batch (submit, list, status, monitor, rollback, cancel, stats) orch → orchestrator (start, stop, status, health, logs)","breadcrumbs":"CLI Architecture » Orchestration","id":"1375","title":"Orchestration"},"1376":{"body":"[Full docs: provisioning help dev] mod → module (discover, load, list, unload, sync-kcl) lyr → layer (explain, show, test, stats) version (check, show, updates, apply, taskserv) pack (core, provider, list, clean)","breadcrumbs":"CLI Architecture » Development","id":"1376","title":"Development"},"1377":{"body":"[Full docs: provisioning help ws] ws → workspace (init, create, validate, info, list, migrate) tpl, tmpl → template (list, types, show, apply, validate)","breadcrumbs":"CLI Architecture » Workspace","id":"1377","title":"Workspace"},"1378":{"body":"[Full docs: provisioning help config] e → env (show environment variables) val → validate (validate configuration) st, config → setup (setup wizard) show (show configuration details) init (initialize infrastructure) allenv (show all config and environment)","breadcrumbs":"CLI Architecture » Configuration","id":"1378","title":"Configuration"},"1379":{"body":"l, ls, list → list (list resources) ssh (SSH operations) sops (edit encrypted files) cache (cache management) providers 
(provider operations) nu (start Nushell session with provisioning library) qr (QR code generation) nuinfo (Nushell information) plugin, plugins (plugin management)","breadcrumbs":"CLI Architecture » Utilities","id":"1379","title":"Utilities"},"138":{"body":"Once all prerequisites are met, proceed to: → Installation","breadcrumbs":"Prerequisites » Next Steps","id":"138","title":"Next Steps"},"1380":{"body":"[Full docs: provisioning generate help] g, gen → generate (server, taskserv, cluster, infra, new)","breadcrumbs":"CLI Architecture » Generation","id":"1380","title":"Generation"},"1381":{"body":"c → create (create resources) d → delete (delete resources) u → update (update resources) price, cost, costs → price (show pricing) cst, csts → create-server-task (create server with taskservs)","breadcrumbs":"CLI Architecture » Special Commands","id":"1381","title":"Special Commands"},"1382":{"body":"The help system works in both directions: # All these work identically:\\nprovisioning help workspace\\nprovisioning workspace help\\nprovisioning ws help\\nprovisioning help ws # Same for all categories:\\nprovisioning help infra = provisioning infra help\\nprovisioning help orch = provisioning orch help\\nprovisioning help dev = provisioning dev help\\nprovisioning help ws = provisioning ws help\\nprovisioning help plat = provisioning plat help\\nprovisioning help concept = provisioning concept help\\n```plaintext ## CLI Internal Architecture **File Structure:** ```plaintext\\nprovisioning/core/nulib/\\n├── provisioning (211 lines) - Main entry point\\n├── main_provisioning/\\n│ ├── flags.nu (139 lines) - Centralized flag handling\\n│ ├── dispatcher.nu (264 lines) - Command routing\\n│ ├── help_system.nu - Categorized help\\n│ └── commands/ - Domain-focused handlers\\n│ ├── infrastructure.nu (117 lines)\\n│ ├── orchestration.nu (64 lines)\\n│ ├── development.nu (72 lines)\\n│ ├── workspace.nu (56 lines)\\n│ ├── generation.nu (78 lines)\\n│ ├── utilities.nu (157 lines)\\n│ └── configuration.nu (316 lines)\\n```plaintext **For Developers:** - **Adding commands**: Update appropriate domain handler in `commands/`\\n- **Adding shortcuts**: Update command registry in `dispatcher.nu`\\n- **Flag changes**: Modify centralized functions in `flags.nu`\\n- **Testing**: Run `nu tests/test_provisioning_refactor.nu` See [ADR-006: CLI Refactoring](../architecture/adr/adr-006-provisioning-cli-refactoring.md) for complete refactoring details.","breadcrumbs":"CLI Architecture » Bi-directional Help System","id":"1382","title":"Bi-directional Help System"},"1383":{"body":"","breadcrumbs":"Configuration System » Configuration System (v2.0.0)","id":"1383","title":"Configuration System (v2.0.0)"},"1384":{"body":"The system has been completely migrated from ENV-based to config-driven architecture. 
65+ files migrated across entire codebase 200+ ENV variables replaced with 476 config accessors 16 token-efficient agents used for systematic migration 92% token efficiency achieved vs monolithic approach","breadcrumbs":"Configuration System » ⚠️ Migration Completed (2025-09-23)","id":"1384","title":"⚠️ Migration Completed (2025-09-23)"},"1385":{"body":"Primary Config : config.defaults.toml (system defaults) User Config : config.user.toml (user preferences) Environment Configs : config.{dev,test,prod}.toml.example Hierarchical Loading : defaults → user → project → infra → env → runtime Interpolation : {{paths.base}}, {{env.HOME}}, {{now.date}}, {{git.branch}}","breadcrumbs":"Configuration System » Configuration Files","id":"1385","title":"Configuration Files"},"1386":{"body":"provisioning validate config - Validate configuration provisioning env - Show environment variables provisioning allenv - Show all config and environment PROVISIONING_ENV=prod provisioning - Use specific environment","breadcrumbs":"Configuration System » Essential Commands","id":"1386","title":"Essential Commands"},"1387":{"body":"See ADR-010: Configuration Format Strategy for complete rationale and design patterns.","breadcrumbs":"Configuration System » Configuration Architecture","id":"1387","title":"Configuration Architecture"},"1388":{"body":"When loading configuration, precedence is (highest to lowest): Runtime Arguments - CLI flags and direct user input Environment Variables - PROVISIONING_* overrides User Configuration - ~/.config/provisioning/user_config.yaml Infrastructure Configuration - Nickel schemas, extensions, provider configs System Defaults - provisioning/config/config.defaults.toml","breadcrumbs":"Configuration System » Configuration Loading Hierarchy (Priority)","id":"1388","title":"Configuration Loading Hierarchy (Priority)"},"1389":{"body":"For new configuration : Infrastructure/schemas → Use Nickel (type-safe, schema-validated) Application settings → Use TOML (hierarchical, supports interpolation) Kubernetes/CI-CD → Use YAML (standard, ecosystem-compatible) For existing workspace configs : KCL still supported but gradually migrating to Nickel Config loader supports both formats during transition","breadcrumbs":"Configuration System » File Type Guidelines","id":"1389","title":"File Type Guidelines"},"139":{"body":"This guide walks you through installing the Provisioning Platform on your system.","breadcrumbs":"Installation Steps » Installation","id":"139","title":"Installation"},"1390":{"body":"This guide shows you how to set up a new infrastructure workspace and extend the provisioning system with custom configurations.","breadcrumbs":"Workspace Setup » Workspace Setup Guide","id":"1390","title":"Workspace Setup Guide"},"1391":{"body":"","breadcrumbs":"Workspace Setup » Quick Start","id":"1391","title":"Quick Start"},"1392":{"body":"# Navigate to the workspace directory\\ncd workspace/infra # Create your infrastructure directory\\nmkdir my-infra\\ncd my-infra # Create the basic structure\\nmkdir -p task-servs clusters defs data tmp\\n```plaintext ### 2. 
Set Up KCL Module Dependencies Create `kcl.mod`: ```toml\\n[package]\\nname = \\"my-infra\\"\\nedition = \\"v0.11.2\\"\\nversion = \\"0.0.1\\" [dependencies]\\nprovisioning = { path = \\"../../../provisioning/kcl\\", version = \\"0.0.1\\" }\\ntaskservs = { path = \\"../../../provisioning/extensions/taskservs\\", version = \\"0.0.1\\" }\\ncluster = { path = \\"../../../provisioning/extensions/cluster\\", version = \\"0.0.1\\" }\\nupcloud_prov = { path = \\"../../../provisioning/extensions/providers/upcloud/kcl\\", version = \\"0.0.1\\" }\\n```plaintext ### 3. Create Main Settings Create `settings.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra\\" main_title = \\"My Infrastructure Project\\" # Directories settings_path = \\"./settings.yaml\\" defaults_provs_dirpath = \\"./defs\\" prov_data_dirpath = \\"./data\\" created_taskservs_dirpath = \\"./tmp/NOW_deployment\\" # Cluster configuration cluster_admin_host = \\"my-infra-cp-0\\" cluster_admin_user = \\"root\\" servers_wait_started = 40 # Runtime settings runset = { wait = True output_format = \\"yaml\\" output_path = \\"./tmp/NOW\\" inventory_file = \\"./inventory.yaml\\" use_time = True }\\n} _settings\\n```plaintext ### 4. Test Your Setup ```bash\\n# Test the configuration\\nkcl run settings.k # Test with the provisioning system\\ncd ../../../\\nprovisioning -c -i my-infra show settings\\n```plaintext ## Adding Taskservers ### Example: Redis Create `task-servs/redis.k`: ```kcl\\nimport taskservs.redis.kcl.redis as redis_schema _taskserv = redis_schema.Redis { version = \\"7.2.3\\" port = 6379 maxmemory = \\"512mb\\" maxmemory_policy = \\"allkeys-lru\\" persistence = True bind_address = \\"0.0.0.0\\"\\n} _taskserv\\n```plaintext Test it: ```bash\\nkcl run task-servs/redis.k\\n```plaintext ### Example: Kubernetes Create `task-servs/kubernetes.k`: ```kcl\\nimport taskservs.kubernetes.kcl.kubernetes as k8s_schema _taskserv = k8s_schema.Kubernetes { version = \\"1.29.1\\" major_version = \\"1.29\\" cri = \\"crio\\" runtime_default = \\"crun\\" cni = \\"cilium\\" bind_port = 6443\\n} _taskserv\\n```plaintext ### Example: Cilium Create `task-servs/cilium.k`: ```kcl\\nimport taskservs.cilium.kcl.cilium as cilium_schema _taskserv = cilium_schema.Cilium { version = \\"v1.16.5\\"\\n} _taskserv\\n```plaintext ## Using the Provisioning System ### Create Servers ```bash\\n# Check configuration first\\nprovisioning -c -i my-infra server create # Actually create servers\\nprovisioning -i my-infra server create\\n```plaintext ### Install Taskservs ```bash\\n# Install Kubernetes\\nprovisioning -c -i my-infra taskserv create kubernetes # Install Cilium\\nprovisioning -c -i my-infra taskserv create cilium # Install Redis\\nprovisioning -c -i my-infra taskserv create redis\\n```plaintext ### Manage Clusters ```bash\\n# Create cluster\\nprovisioning -c -i my-infra cluster create # List cluster components\\nprovisioning -i my-infra cluster list\\n```plaintext ## Directory Structure Your workspace should look like this: ```plaintext\\nworkspace/infra/my-infra/\\n├── kcl.mod # Module dependencies\\n├── settings.k # Main infrastructure settings\\n├── task-servs/ # Taskserver configurations\\n│ ├── kubernetes.k\\n│ ├── cilium.k\\n│ ├── redis.k\\n│ └── {custom-service}.k\\n├── clusters/ # Cluster definitions\\n│ └── main.k\\n├── defs/ # Provider defaults\\n│ ├── upcloud_defaults.k\\n│ └── {provider}_defaults.k\\n├── data/ # Provider runtime data\\n│ ├── upcloud_settings.k\\n│ └── {provider}_settings.k\\n├── tmp/ # 
Temporary files\\n│ ├── NOW_deployment/\\n│ └── NOW_clusters/\\n├── inventory.yaml # Generated inventory\\n└── settings.yaml # Generated settings\\n```plaintext ## Advanced Configuration ### Custom Provider Defaults Create `defs/upcloud_defaults.k`: ```kcl\\nimport upcloud_prov.upcloud as upcloud_schema _defaults = upcloud_schema.UpcloudDefaults { zone = \\"de-fra1\\" plan = \\"1xCPU-2GB\\" storage_size = 25 storage_tier = \\"maxiops\\"\\n} _defaults\\n```plaintext ### Cluster Definitions Create `clusters/main.k`: ```kcl\\nimport cluster.main as cluster_schema _cluster = cluster_schema.MainCluster { name = \\"my-infra-cluster\\" control_plane_count = 1 worker_count = 2 services = [ \\"kubernetes\\", \\"cilium\\", \\"redis\\" ]\\n} _cluster\\n```plaintext ## Environment-Specific Configurations ### Development Environment Create `settings-dev.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra-dev\\" main_title = \\"My Infrastructure (Development)\\" # Development-specific settings servers_wait_started = 20 # Faster for dev runset = { wait = False # Don\'t wait in dev output_format = \\"json\\" }\\n} _settings\\n```plaintext ### Production Environment Create `settings-prod.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra-prod\\" main_title = \\"My Infrastructure (Production)\\" # Production-specific settings servers_wait_started = 60 # More conservative runset = { wait = True output_format = \\"yaml\\" use_time = True } # Production security secrets = { provider = \\"sops\\" }\\n} _settings\\n```plaintext ## Troubleshooting ### Common Issues #### KCL Module Not Found ```plaintext\\nError: pkgpath provisioning not found\\n```plaintext **Solution**: Ensure the provisioning module is in the expected location: ```bash\\nls ../../../provisioning/extensions/kcl/provisioning/0.0.1/\\n```plaintext If missing, copy the files: ```bash\\nmkdir -p ../../../provisioning/extensions/kcl/provisioning/0.0.1\\ncp -r ../../../provisioning/kcl/* ../../../provisioning/extensions/kcl/provisioning/0.0.1/\\n```plaintext #### Import Path Errors ```plaintext\\nError: attribute \'Redis\' not found in module\\n```plaintext **Solution**: Check the import path: ```kcl\\n# Wrong\\nimport taskservs.redis.default.kcl.redis as redis_schema # Correct\\nimport taskservs.redis.kcl.redis as redis_schema\\n```plaintext #### Boolean Value Errors ```plaintext\\nError: name \'true\' is not defined\\n```plaintext **Solution**: Use capitalized booleans in KCL: ```kcl\\n# Wrong\\nenabled = true # Correct\\nenabled = True\\n```plaintext ### Debugging Commands ```bash\\n# Check KCL syntax\\nkcl run settings.k # Validate configuration\\nprovisioning -c -i my-infra validate config # Show current settings\\nprovisioning -i my-infra show settings # List available taskservs\\nprovisioning -i my-infra taskserv list # Check infrastructure status\\nprovisioning -i my-infra show servers\\n```plaintext ## Next Steps 1. **Customize your settings**: Modify `settings.k` for your specific needs\\n2. **Add taskservs**: Create configurations for the services you need\\n3. **Test thoroughly**: Use `--check` mode before actual deployment\\n4. **Create clusters**: Define complete deployment configurations\\n5. **Set up CI/CD**: Integrate with your deployment pipeline\\n6. 
**Monitor**: Set up logging and monitoring for your infrastructure For more advanced topics, see: - [KCL Module Guide](../development/KCL_MODULE_GUIDE.md)\\n- [Creating Custom Taskservers](../development/CUSTOM_TASKSERVERS.md)\\n- [Provider Configuration](../user/PROVIDER_SETUP.md)","breadcrumbs":"Workspace Setup » 1. Create a New Infrastructure Workspace","id":"1392","title":"1. Create a New Infrastructure Workspace"},"1393":{"body":"Version : 1.0.0 Date : 2025-10-06 Status : ✅ Production Ready","breadcrumbs":"Workspace Switching Guide » Workspace Switching Guide","id":"1393","title":"Workspace Switching Guide"},"1394":{"body":"The provisioning system now includes a centralized workspace management system that allows you to easily switch between multiple workspaces without manually editing configuration files.","breadcrumbs":"Workspace Switching Guide » Overview","id":"1394","title":"Overview"},"1395":{"body":"","breadcrumbs":"Workspace Switching Guide » Quick Start","id":"1395","title":"Quick Start"},"1396":{"body":"provisioning workspace list\\n```plaintext Output: ```plaintext\\nRegistered Workspaces: ● librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T12:29:43Z production Path: /opt/workspaces/production Last used: 2025-10-05T10:15:30Z\\n```plaintext The green ● indicates the currently active workspace. ### Check Active Workspace ```bash\\nprovisioning workspace active\\n```plaintext Output: ```plaintext\\nActive Workspace: Name: librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T12:29:43Z\\n```plaintext ### Switch to Another Workspace ```bash\\n# Option 1: Using activate\\nprovisioning workspace activate production # Option 2: Using switch (alias)\\nprovisioning workspace switch production\\n```plaintext Output: ```plaintext\\n✓ Workspace \'production\' activated Current workspace: production\\nPath: /opt/workspaces/production ℹ All provisioning commands will now use this workspace\\n```plaintext ### Register a New Workspace ```bash\\n# Register without activating\\nprovisioning workspace register my-project ~/workspaces/my-project # Register and activate immediately\\nprovisioning workspace register my-project ~/workspaces/my-project --activate\\n```plaintext ### Remove Workspace from Registry ```bash\\n# With confirmation prompt\\nprovisioning workspace remove old-workspace # Skip confirmation\\nprovisioning workspace remove old-workspace --force\\n```plaintext **Note**: This only removes the workspace from the registry. The workspace files are NOT deleted. ## Architecture ### Central User Configuration All workspace information is stored in a central user configuration file: **Location**: `~/Library/Application Support/provisioning/user_config.yaml` **Structure**: ```yaml\\n# Active workspace (current workspace in use)\\nactive_workspace: \\"librecloud\\" # Known workspaces (automatically managed)\\nworkspaces: - name: \\"librecloud\\" path: \\"/Users/Akasha/project-provisioning/workspace_librecloud\\" last_used: \\"2025-10-06T12:29:43Z\\" - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-10-05T10:15:30Z\\" # User preferences (global settings)\\npreferences: editor: \\"vim\\" output_format: \\"yaml\\" confirm_delete: true confirm_deploy: true default_log_level: \\"info\\" preferred_provider: \\"upcloud\\" # Metadata\\nmetadata: created: \\"2025-10-06T12:29:43Z\\" last_updated: \\"2025-10-06T13:46:16Z\\" version: \\"1.0.0\\"\\n```plaintext ### How It Works 1. 
**Workspace Registration**: When you register a workspace, it\'s added to the `workspaces` list in `user_config.yaml` 2. **Activation**: When you activate a workspace: - `active_workspace` is updated to the workspace name - The workspace\'s `last_used` timestamp is updated - All provisioning commands now use this workspace\'s configuration 3. **Configuration Loading**: The config loader reads `active_workspace` from `user_config.yaml` and loads: - `workspace_path/config/provisioning.yaml` - `workspace_path/config/providers/*.toml` - `workspace_path/config/platform/*.toml` - `workspace_path/config/kms.toml` ## Advanced Features ### User Preferences You can set global user preferences that apply across all workspaces: ```bash\\n# Get a preference value\\nprovisioning workspace get-preference editor # Set a preference value\\nprovisioning workspace set-preference editor \\"code\\" # View all preferences\\nprovisioning workspace preferences\\n```plaintext **Available Preferences**: - `editor`: Default editor for config files (vim, code, nano, etc.)\\n- `output_format`: Default output format (yaml, json, toml)\\n- `confirm_delete`: Require confirmation for deletions (true/false)\\n- `confirm_deploy`: Require confirmation for deployments (true/false)\\n- `default_log_level`: Default log level (debug, info, warn, error)\\n- `preferred_provider`: Preferred cloud provider (aws, upcloud, local) ### Output Formats List workspaces in different formats: ```bash\\n# Table format (default)\\nprovisioning workspace list # JSON format\\nprovisioning workspace list --format json # YAML format\\nprovisioning workspace list --format yaml\\n```plaintext ### Quiet Mode Activate workspace without output messages: ```bash\\nprovisioning workspace activate production --quiet\\n```plaintext ## Workspace Requirements For a workspace to be activated, it must have: 1. **Directory exists**: The workspace directory must exist on the filesystem 2. **Config directory**: Must have a `config/` directory workspace_name/ └── config/ ├── provisioning.yaml # Required ├── providers/ # Optional ├── platform/ # Optional └── kms.toml # Optional 3. **Main config file**: Must have `config/provisioning.yaml` If these requirements are not met, the activation will fail with helpful error messages: ```plaintext\\n✗ Workspace \'my-project\' not found in registry\\n💡 Available workspaces: [list of workspaces]\\n💡 Register it first with: provisioning workspace register my-project \\n```plaintext ```plaintext\\n✗ Workspace is not migrated to new config system\\n💡 Missing: /path/to/workspace/config\\n💡 Run migration: provisioning workspace migrate my-project\\n```plaintext ## Migration from Old System If you have workspaces using the old context system (`ws_{name}.yaml` files), they still work but you should register them in the new system: ```bash\\n# Register existing workspace\\nprovisioning workspace register old-workspace ~/workspaces/old-workspace # Activate it\\nprovisioning workspace activate old-workspace\\n```plaintext The old `ws_{name}.yaml` files are still supported for backward compatibility, but the new centralized system is recommended. ## Best Practices ### 1. **One Active Workspace at a Time** Only one workspace can be active at a time. All provisioning commands use the active workspace\'s configuration. ### 2. 
**Use Descriptive Names** Use clear, descriptive names for your workspaces: ```bash\\n# ✅ Good\\nprovisioning workspace register production-us-east ~/workspaces/prod-us-east\\nprovisioning workspace register dev-local ~/workspaces/dev # ❌ Avoid\\nprovisioning workspace register ws1 ~/workspaces/workspace1\\nprovisioning workspace register temp ~/workspaces/t\\n```plaintext ### 3. **Keep Workspaces Organized** Store all workspaces in a consistent location: ```bash\\n~/workspaces/\\n├── production/\\n├── staging/\\n├── development/\\n└── testing/\\n```plaintext ### 4. **Regular Cleanup** Remove workspaces you no longer use: ```bash\\n# List workspaces to see which ones are unused\\nprovisioning workspace list # Remove old workspace\\nprovisioning workspace remove old-workspace\\n```plaintext ### 5. **Backup User Config** Periodically backup your user configuration: ```bash\\ncp \\"~/Library/Application Support/provisioning/user_config.yaml\\" \\\\ \\"~/Library/Application Support/provisioning/user_config.yaml.backup\\"\\n```plaintext ## Troubleshooting ### Workspace Not Found **Problem**: `✗ Workspace \'name\' not found in registry` **Solution**: Register the workspace first: ```bash\\nprovisioning workspace register name /path/to/workspace\\n```plaintext ### Missing Configuration **Problem**: `✗ Missing workspace configuration` **Solution**: Ensure the workspace has a `config/provisioning.yaml` file. Run migration if needed: ```bash\\nprovisioning workspace migrate name\\n```plaintext ### Directory Not Found **Problem**: `✗ Workspace directory not found: /path/to/workspace` **Solution**: 1. Check if the workspace was moved or deleted\\n2. Update the path or remove from registry: ```bash\\nprovisioning workspace remove name\\nprovisioning workspace register name /new/path\\n```plaintext ### Corrupted User Config **Problem**: `Error: Failed to parse user config` **Solution**: The system automatically creates a backup and regenerates the config. Check: ```bash\\nls -la \\"~/Library/Application Support/provisioning/user_config.yaml\\"*\\n```plaintext Restore from backup if needed: ```bash\\ncp \\"~/Library/Application Support/provisioning/user_config.yaml.backup.TIMESTAMP\\" \\\\ \\"~/Library/Application Support/provisioning/user_config.yaml\\"\\n```plaintext ## CLI Commands Reference | Command | Alias | Description |\\n|---------|-------|-------------|\\n| `provisioning workspace activate ` | - | Activate a workspace |\\n| `provisioning workspace switch ` | - | Alias for activate |\\n| `provisioning workspace list` | - | List all registered workspaces |\\n| `provisioning workspace active` | - | Show currently active workspace |\\n| `provisioning workspace register ` | - | Register a new workspace |\\n| `provisioning workspace remove ` | - | Remove workspace from registry |\\n| `provisioning workspace preferences` | - | Show user preferences |\\n| `provisioning workspace set-preference ` | - | Set a preference |\\n| `provisioning workspace get-preference ` | - | Get a preference value | ## Integration with Config System The workspace switching system is fully integrated with the new target-based configuration system: ### Configuration Hierarchy (Priority: Low → High) ```plaintext\\n1. Workspace config workspace/{name}/config/provisioning.yaml\\n2. Provider configs workspace/{name}/config/providers/*.toml\\n3. Platform configs workspace/{name}/config/platform/*.toml\\n4. User context ~/Library/Application Support/provisioning/ws_{name}.yaml (legacy)\\n5. 
User config ~/Library/Application Support/provisioning/user_config.yaml (new)\\n6. Environment variables PROVISIONING_*\\n```plaintext ### Example Workflow ```bash\\n# 1. Create and activate development workspace\\nprovisioning workspace register dev ~/workspaces/dev --activate # 2. Work on development\\nprovisioning server create web-dev-01\\nprovisioning taskserv create kubernetes # 3. Switch to production\\nprovisioning workspace switch production # 4. Deploy to production\\nprovisioning server create web-prod-01\\nprovisioning taskserv create kubernetes # 5. Switch back to development\\nprovisioning workspace switch dev # All commands now use dev workspace config\\n```plaintext ## KCL Workspace Configuration Starting with v3.6.0, workspaces use **KCL (Kusion Configuration Language)** for type-safe, schema-validated configurations instead of YAML. ### What Changed **Before (YAML)**: ```yaml\\nworkspace: name: myworkspace version: 1.0.0\\npaths: base: /path/to/workspace\\n```plaintext **Now (KCL - Type-Safe)**: ```kcl\\nimport provisioning.workspace_config as ws workspace_config = ws.WorkspaceConfig { workspace: { name: \\"myworkspace\\" version: \\"1.0.0\\" # Validated: must be semantic (X.Y.Z) } paths: { base: \\"/path/to/workspace\\" # ... all paths with type checking }\\n}\\n```plaintext ### Benefits of KCL Configuration - ✅ **Type Safety**: Catch configuration errors at load time, not runtime\\n- ✅ **Schema Validation**: Required fields, value constraints, format checking\\n- ✅ **Immutability**: Enforced immutable defaults prevent accidental changes\\n- ✅ **Self-Documenting**: Schema descriptions provide instant documentation\\n- ✅ **IDE Support**: KCL editor extensions with auto-completion ### Viewing Workspace Configuration ```bash\\n# View your KCL workspace configuration\\nprovisioning workspace config show # View in different formats\\nprovisioning workspace config show --format=yaml # YAML output\\nprovisioning workspace config show --format=json # JSON output\\nprovisioning workspace config show --format=kcl # Raw KCL file # Validate configuration\\nprovisioning workspace config validate\\n# Output: ✅ Validation complete - all configs are valid # Show configuration hierarchy\\nprovisioning workspace config hierarchy\\n```plaintext ### Migrating Existing Workspaces If you have workspaces with YAML configs (`provisioning.yaml`), you can migrate them to KCL: ```bash\\n# Migrate single workspace\\nprovisioning workspace migrate-config myworkspace # Migrate all workspaces\\nprovisioning workspace migrate-config --all # Preview changes without applying\\nprovisioning workspace migrate-config myworkspace --check # Create backup before migration\\nprovisioning workspace migrate-config myworkspace --backup # Force overwrite existing KCL files\\nprovisioning workspace migrate-config myworkspace --force\\n```plaintext **How it works**: 1. Reads existing `provisioning.yaml`\\n2. Converts to KCL using workspace configuration schema\\n3. Validates converted KCL against schema\\n4. Backs up original YAML (optional)\\n5. 
Saves new `provisioning.k` file ### Backward Compatibility ✅ **Full backward compatibility maintained**: - Existing YAML configs (`provisioning.yaml`) continue to work\\n- Config loader checks for KCL files first, falls back to YAML\\n- No breaking changes - migrate at your own pace\\n- Both formats can coexist during transition ## See Also - **Configuration Guide**: `docs/architecture/adr/ADR-010-configuration-format-strategy.md`\\n- **Migration Complete**: [Migration Guide](../guides/from-scratch.md)\\n- **From-Scratch Guide**: [From-Scratch Guide](../guides/from-scratch.md)\\n- **KCL Patterns**: KCL Module System --- **Maintained By**: Infrastructure Team\\n**Version**: 1.1.0 (Updated for KCL)\\n**Status**: ✅ Production Ready\\n**Last Updated**: 2025-12-03","breadcrumbs":"Workspace Switching Guide » List Available Workspaces","id":"1396","title":"List Available Workspaces"},"1397":{"body":"","breadcrumbs":"Workspace Switching System » Workspace Switching System (v2.0.5)","id":"1397","title":"Workspace Switching System (v2.0.5)"},"1398":{"body":"A centralized workspace management system has been implemented, allowing seamless switching between multiple workspaces without manually editing configuration files. This builds upon the target-based configuration system.","breadcrumbs":"Workspace Switching System » 🚀 Workspace Switching Completed (2025-10-02)","id":"1398","title":"🚀 Workspace Switching Completed (2025-10-02)"},"1399":{"body":"Centralized Configuration : Single user_config.yaml file stores all workspace information Simple CLI Commands : Switch workspaces with a single command Active Workspace Tracking : Automatic tracking of currently active workspace Workspace Registry : Maintain list of all known workspaces User Preferences : Global user settings that apply across all workspaces Automatic Updates : Last-used timestamps and metadata automatically managed Validation : Ensures workspaces have required configuration before activation","breadcrumbs":"Workspace Switching System » Key Features","id":"1399","title":"Key Features"},"14":{"body":"System requirements and prerequisites Different installation methods How to verify your installation Setting up your environment Troubleshooting common installation issues","breadcrumbs":"Installation Guide » What You\'ll Learn","id":"14","title":"What You\'ll Learn"},"140":{"body":"The installation process involves: Cloning the repository Installing Nushell plugins Setting up configuration Initializing your first workspace Estimated time: 15-20 minutes","breadcrumbs":"Installation Steps » Overview","id":"140","title":"Overview"},"1400":{"body":"# List all registered workspaces\\nprovisioning workspace list # Show currently active workspace\\nprovisioning workspace active # Switch to another workspace\\nprovisioning workspace activate \\nprovisioning workspace switch # alias # Register a new workspace\\nprovisioning workspace register [--activate] # Remove workspace from registry (does not delete files)\\nprovisioning workspace remove [--force] # View user preferences\\nprovisioning workspace preferences # Set user preference\\nprovisioning workspace set-preference # Get user preference\\nprovisioning workspace get-preference \\n```plaintext ## Central User Configuration **Location**: `~/Library/Application Support/provisioning/user_config.yaml` **Structure**: ```yaml\\n# Active workspace (current workspace in use)\\nactive_workspace: \\"librecloud\\" # Known workspaces (automatically managed)\\nworkspaces: - name: \\"librecloud\\" path: 
\\"/Users/Akasha/project-provisioning/workspace_librecloud\\" last_used: \\"2025-10-06T12:29:43Z\\" - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-10-05T10:15:30Z\\" # User preferences (global settings)\\npreferences: editor: \\"vim\\" output_format: \\"yaml\\" confirm_delete: true confirm_deploy: true default_log_level: \\"info\\" preferred_provider: \\"upcloud\\" # Metadata\\nmetadata: created: \\"2025-10-06T12:29:43Z\\" last_updated: \\"2025-10-06T13:46:16Z\\" version: \\"1.0.0\\"\\n```plaintext ## Usage Example ```bash\\n# Start with workspace librecloud active\\n$ provisioning workspace active\\nActive Workspace: Name: librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T13:46:16Z # List all workspaces (● indicates active)\\n$ provisioning workspace list Registered Workspaces: ● librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T13:46:16Z production Path: /opt/workspaces/production Last used: 2025-10-05T10:15:30Z # Switch to production\\n$ provisioning workspace switch production\\n✓ Workspace \'production\' activated Current workspace: production\\nPath: /opt/workspaces/production ℹ All provisioning commands will now use this workspace # All subsequent commands use production workspace\\n$ provisioning server list\\n$ provisioning taskserv create kubernetes\\n```plaintext ## Integration with Config System The workspace switching system integrates seamlessly with the configuration system: 1. **Active Workspace Detection**: Config loader reads `active_workspace` from `user_config.yaml`\\n2. **Workspace Validation**: Ensures workspace has required `config/provisioning.yaml`\\n3. **Configuration Loading**: Loads workspace-specific configs automatically\\n4. **Automatic Timestamps**: Updates `last_used` on workspace activation **Configuration Hierarchy** (Priority: Low → High): ```plaintext\\n1. Workspace config workspace/{name}/config/provisioning.yaml\\n2. Provider configs workspace/{name}/config/providers/*.toml\\n3. Platform configs workspace/{name}/config/platform/*.toml\\n4. User config ~/Library/Application Support/provisioning/user_config.yaml\\n5. Environment variables PROVISIONING_*\\n```plaintext ## Benefits - ✅ **No Manual Config Editing**: Switch workspaces with single command\\n- ✅ **Multiple Workspaces**: Manage dev, staging, production simultaneously\\n- ✅ **User Preferences**: Global settings across all workspaces\\n- ✅ **Automatic Tracking**: Last-used timestamps, active workspace markers\\n- ✅ **Safe Operations**: Validation before activation, confirmation prompts\\n- ✅ **Backward Compatible**: Old `ws_{name}.yaml` files still supported For more detailed information, see [Workspace Switching Guide](../infrastructure/workspace-switching-guide.md).","breadcrumbs":"Workspace Switching System » Workspace Management Commands","id":"1400","title":"Workspace Management Commands"},"1401":{"body":"Complete command-line reference for Infrastructure Automation. 
This guide covers all commands, options, and usage patterns.","breadcrumbs":"CLI Reference » CLI Reference","id":"1401","title":"CLI Reference"},"1402":{"body":"Complete command syntax and options All available commands and subcommands Usage examples and patterns Scripting and automation Integration with other tools Advanced command combinations","breadcrumbs":"CLI Reference » What You\'ll Learn","id":"1402","title":"What You\'ll Learn"},"1403":{"body":"All provisioning commands follow this structure: provisioning [global-options] [subcommand] [command-options] [arguments]","breadcrumbs":"CLI Reference » Command Structure","id":"1403","title":"Command Structure"},"1404":{"body":"These options can be used with any command: Option Short Description Example --infra -i Specify infrastructure --infra production --environment Environment override --environment prod --check -c Dry run mode --check --debug -x Enable debug output --debug --yes -y Auto-confirm actions --yes --wait -w Wait for completion --wait --out Output format --out json --help -h Show help --help","breadcrumbs":"CLI Reference » Global Options","id":"1404","title":"Global Options"},"1405":{"body":"Format Description Use Case text Human-readable text Terminal viewing json JSON format Scripting, APIs yaml YAML format Configuration files toml TOML format Settings files table Tabular format Reports, lists","breadcrumbs":"CLI Reference » Output Formats","id":"1405","title":"Output Formats"},"1406":{"body":"","breadcrumbs":"CLI Reference » Core Commands","id":"1406","title":"Core Commands"},"1407":{"body":"Display help information for the system or specific commands. # General help\\nprovisioning help # Command-specific help\\nprovisioning help server\\nprovisioning help taskserv\\nprovisioning help cluster # Show all available commands\\nprovisioning help --all # Show help for subcommand\\nprovisioning server help create Options: --all - Show all available commands --detailed - Show detailed help with examples","breadcrumbs":"CLI Reference » help - Show Help Information","id":"1407","title":"help - Show Help Information"},"1408":{"body":"Display version information for the system and dependencies. # Basic version\\nprovisioning version\\nprovisioning --version\\nprovisioning -V # Detailed version with dependencies\\nprovisioning version --verbose # Show version info with title\\nprovisioning --info\\nprovisioning -I Options: --verbose - Show detailed version information --dependencies - Include dependency versions","breadcrumbs":"CLI Reference » version - Show Version Information","id":"1408","title":"version - Show Version Information"},"1409":{"body":"Display current environment configuration and settings. 
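For example, the environment and version commands can be combined into a quick diagnostic snapshot. This is a sketch only; the output file name is arbitrary:

```bash
#!/bin/bash
# Sketch: capture a diagnostic snapshot of the current provisioning setup.
SNAPSHOT="provisioning-env-$(date +%Y%m%d-%H%M%S).txt"

{
  echo "== Version =="
  provisioning version --verbose
  echo
  echo "== Environment and configuration =="
  provisioning allenv
} > "$SNAPSHOT"

echo "Snapshot written to $SNAPSHOT"
```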
# Show environment variables\\nprovisioning env # Show all environment and configuration\\nprovisioning allenv # Show specific environment\\nprovisioning env --environment prod # Export environment\\nprovisioning env --export Output includes: Configuration file locations Environment variables Provider settings Path configurations","breadcrumbs":"CLI Reference » env - Environment Information","id":"1409","title":"env - Environment Information"},"141":{"body":"# Clone the repository\\ngit clone https://github.com/provisioning/provisioning-platform.git\\ncd provisioning-platform # Checkout the latest stable release (optional)\\ngit checkout tags/v3.5.0","breadcrumbs":"Installation Steps » Step 1: Clone the Repository","id":"141","title":"Step 1: Clone the Repository"},"1410":{"body":"","breadcrumbs":"CLI Reference » Server Management Commands","id":"1410","title":"Server Management Commands"},"1411":{"body":"Create new server instances based on configuration. # Create all servers in infrastructure\\nprovisioning server create --infra my-infra # Dry run (check mode)\\nprovisioning server create --infra my-infra --check # Create with confirmation\\nprovisioning server create --infra my-infra --yes # Create and wait for completion\\nprovisioning server create --infra my-infra --wait # Create specific server\\nprovisioning server create web-01 --infra my-infra # Create with custom settings\\nprovisioning server create --infra my-infra --settings custom.k Options: --check, -c - Dry run mode (show what would be created) --yes, -y - Auto-confirm creation --wait, -w - Wait for servers to be fully ready --settings, -s - Custom settings file --template, -t - Use specific template","breadcrumbs":"CLI Reference » server create - Create Servers","id":"1411","title":"server create - Create Servers"},"1412":{"body":"Remove server instances and associated resources. # Delete all servers\\nprovisioning server delete --infra my-infra # Delete with confirmation\\nprovisioning server delete --infra my-infra --yes # Delete but keep storage\\nprovisioning server delete --infra my-infra --keepstorage # Delete specific server\\nprovisioning server delete web-01 --infra my-infra # Dry run deletion\\nprovisioning server delete --infra my-infra --check Options: --yes, -y - Auto-confirm deletion --keepstorage - Preserve storage volumes --force - Force deletion even if servers are running","breadcrumbs":"CLI Reference » server delete - Delete Servers","id":"1412","title":"server delete - Delete Servers"},"1413":{"body":"Display information about servers. # List all servers\\nprovisioning server list --infra my-infra # List with detailed information\\nprovisioning server list --infra my-infra --detailed # List in specific format\\nprovisioning server list --infra my-infra --out json # List servers across all infrastructures\\nprovisioning server list --all # Filter by status\\nprovisioning server list --infra my-infra --status running Options: --detailed - Show detailed server information --status - Filter by server status --all - Show servers from all infrastructures","breadcrumbs":"CLI Reference » server list - List Servers","id":"1413","title":"server list - List Servers"},"1414":{"body":"Connect to servers via SSH. 
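The list and ssh commands compose naturally in scripts. A small sketch follows, assuming the JSON output of `server list` is an array of servers each exposing a `name` field (that structure is not shown above):

```bash
#!/bin/bash
# Sketch: run a single command on every running server in an infrastructure.
# Assumes each entry in `server list --out json` has a `.name` field.
INFRA="my-infra"

provisioning server list --infra "$INFRA" --status running --out json \
  | jq -r '.[].name' \
  | while read -r SERVER; do
      echo "== $SERVER =="
      provisioning server ssh "$SERVER" --command "uptime" --infra "$INFRA"
    done
```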
# SSH to server\\nprovisioning server ssh web-01 --infra my-infra # SSH with specific user\\nprovisioning server ssh web-01 --user admin --infra my-infra # SSH with custom key\\nprovisioning server ssh web-01 --key ~/.ssh/custom_key --infra my-infra # Execute single command\\nprovisioning server ssh web-01 --command \\"systemctl status nginx\\" --infra my-infra Options: --user - SSH username (default from configuration) --key - SSH private key file --command - Execute command and exit --port - SSH port (default: 22)","breadcrumbs":"CLI Reference » server ssh - SSH Access","id":"1414","title":"server ssh - SSH Access"},"1415":{"body":"Display pricing information for servers. # Show costs for all servers\\nprovisioning server price --infra my-infra # Show detailed cost breakdown\\nprovisioning server price --infra my-infra --detailed # Show monthly estimates\\nprovisioning server price --infra my-infra --monthly # Cost comparison between providers\\nprovisioning server price --infra my-infra --compare Options: --detailed - Detailed cost breakdown --monthly - Monthly cost estimates --compare - Compare costs across providers","breadcrumbs":"CLI Reference » server price - Cost Information","id":"1415","title":"server price - Cost Information"},"1416":{"body":"","breadcrumbs":"CLI Reference » Task Service Commands","id":"1416","title":"Task Service Commands"},"1417":{"body":"Install and configure task services on servers. # Install service on all eligible servers\\nprovisioning taskserv create kubernetes --infra my-infra # Install with check mode\\nprovisioning taskserv create kubernetes --infra my-infra --check # Install specific version\\nprovisioning taskserv create kubernetes --version 1.28 --infra my-infra # Install on specific servers\\nprovisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra # Install with custom configuration\\nprovisioning taskserv create kubernetes --config k8s-config.yaml --infra my-infra Options: --version - Specific version to install --config - Custom configuration file --servers - Target specific servers --force - Force installation even if conflicts exist","breadcrumbs":"CLI Reference » taskserv create - Install Services","id":"1417","title":"taskserv create - Install Services"},"1418":{"body":"Remove task services from servers. # Remove service\\nprovisioning taskserv delete kubernetes --infra my-infra # Remove with data cleanup\\nprovisioning taskserv delete postgresql --cleanup-data --infra my-infra # Remove from specific servers\\nprovisioning taskserv delete nginx --servers web-01,web-02 --infra my-infra # Dry run removal\\nprovisioning taskserv delete kubernetes --infra my-infra --check Options: --cleanup-data - Remove associated data --servers - Target specific servers --force - Force removal","breadcrumbs":"CLI Reference » taskserv delete - Remove Services","id":"1418","title":"taskserv delete - Remove Services"},"1419":{"body":"Display available and installed task services. 
# List all available services\\nprovisioning taskserv list # List installed services\\nprovisioning taskserv list --infra my-infra --installed # List by category\\nprovisioning taskserv list --category database # List with versions\\nprovisioning taskserv list --versions # Search services\\nprovisioning taskserv list --search kubernetes Options: --installed - Show only installed services --category - Filter by service category --versions - Include version information --search - Search by name or description","breadcrumbs":"CLI Reference » taskserv list - List Services","id":"1419","title":"taskserv list - List Services"},"142":{"body":"The platform uses several Nushell plugins for enhanced functionality.","breadcrumbs":"Installation Steps » Step 2: Install Nushell Plugins","id":"142","title":"Step 2: Install Nushell Plugins"},"1420":{"body":"Generate configuration files for task services. # Generate configuration\\nprovisioning taskserv generate kubernetes --infra my-infra # Generate with custom template\\nprovisioning taskserv generate kubernetes --template custom --infra my-infra # Generate for specific servers\\nprovisioning taskserv generate nginx --servers web-01,web-02 --infra my-infra # Generate and save to file\\nprovisioning taskserv generate postgresql --output db-config.yaml --infra my-infra Options: --template - Use specific template --output - Save to specific file --servers - Target specific servers","breadcrumbs":"CLI Reference » taskserv generate - Generate Configurations","id":"1420","title":"taskserv generate - Generate Configurations"},"1421":{"body":"Check for and manage service version updates. # Check updates for all services\\nprovisioning taskserv check-updates --infra my-infra # Check specific service\\nprovisioning taskserv check-updates kubernetes --infra my-infra # Show available versions\\nprovisioning taskserv versions kubernetes # Update to latest version\\nprovisioning taskserv update kubernetes --infra my-infra # Update to specific version\\nprovisioning taskserv update kubernetes --version 1.29 --infra my-infra Options: --version - Target specific version --security-only - Only security updates --dry-run - Show what would be updated","breadcrumbs":"CLI Reference » taskserv check-updates - Version Management","id":"1421","title":"taskserv check-updates - Version Management"},"1422":{"body":"","breadcrumbs":"CLI Reference » Cluster Management Commands","id":"1422","title":"Cluster Management Commands"},"1423":{"body":"Deploy and configure application clusters. # Create cluster\\nprovisioning cluster create web-cluster --infra my-infra # Create with check mode\\nprovisioning cluster create web-cluster --infra my-infra --check # Create with custom configuration\\nprovisioning cluster create web-cluster --config cluster.yaml --infra my-infra # Create and scale immediately\\nprovisioning cluster create web-cluster --replicas 5 --infra my-infra Options: --config - Custom cluster configuration --replicas - Initial replica count --namespace - Kubernetes namespace","breadcrumbs":"CLI Reference » cluster create - Deploy Clusters","id":"1423","title":"cluster create - Deploy Clusters"},"1424":{"body":"Remove application clusters and associated resources. 
# Delete cluster\\nprovisioning cluster delete web-cluster --infra my-infra # Delete with data cleanup\\nprovisioning cluster delete web-cluster --cleanup --infra my-infra # Force delete\\nprovisioning cluster delete web-cluster --force --infra my-infra Options: --cleanup - Remove associated data --force - Force deletion --keep-volumes - Preserve persistent volumes","breadcrumbs":"CLI Reference » cluster delete - Remove Clusters","id":"1424","title":"cluster delete - Remove Clusters"},"1425":{"body":"Display information about deployed clusters. # List all clusters\\nprovisioning cluster list --infra my-infra # List with status\\nprovisioning cluster list --infra my-infra --status # List across all infrastructures\\nprovisioning cluster list --all # Filter by namespace\\nprovisioning cluster list --namespace production --infra my-infra Options: --status - Include status information --all - Show clusters from all infrastructures --namespace - Filter by namespace","breadcrumbs":"CLI Reference » cluster list - List Clusters","id":"1425","title":"cluster list - List Clusters"},"1426":{"body":"Adjust cluster size and resources. # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra # Auto-scale configuration\\nprovisioning cluster scale web-cluster --auto-scale --min 3 --max 20 --infra my-infra # Scale specific component\\nprovisioning cluster scale web-cluster --component api --replicas 5 --infra my-infra Options: --replicas - Target replica count --auto-scale - Enable auto-scaling --min, --max - Auto-scaling limits --component - Scale specific component","breadcrumbs":"CLI Reference » cluster scale - Scale Clusters","id":"1426","title":"cluster scale - Scale Clusters"},"1427":{"body":"","breadcrumbs":"CLI Reference » Infrastructure Commands","id":"1427","title":"Infrastructure Commands"},"1428":{"body":"Generate infrastructure and configuration files. # Generate new infrastructure\\nprovisioning generate infra --new my-infrastructure # Generate from template\\nprovisioning generate infra --template web-app --name my-app # Generate server configurations\\nprovisioning generate server --infra my-infra # Generate task service configurations\\nprovisioning generate taskserv --infra my-infra # Generate cluster configurations\\nprovisioning generate cluster --infra my-infra Subcommands: infra - Infrastructure configurations server - Server configurations taskserv - Task service configurations cluster - Cluster configurations Options: --new - Create new infrastructure --template - Use specific template --name - Name for generated resources --output - Output directory","breadcrumbs":"CLI Reference » generate - Generate Configurations","id":"1428","title":"generate - Generate Configurations"},"1429":{"body":"Show detailed information about infrastructure components. 
# Show settings\\nprovisioning show settings --infra my-infra # Show servers\\nprovisioning show servers --infra my-infra # Show specific server\\nprovisioning show servers web-01 --infra my-infra # Show task services\\nprovisioning show taskservs --infra my-infra # Show costs\\nprovisioning show costs --infra my-infra # Show in different format\\nprovisioning show servers --infra my-infra --out json Subcommands: settings - Configuration settings servers - Server information taskservs - Task service information costs - Cost information data - Raw infrastructure data","breadcrumbs":"CLI Reference » show - Display Information","id":"1429","title":"show - Display Information"},"143":{"body":"# Install from crates.io\\ncargo install nu_plugin_tera # Register with Nushell\\nnu -c \\"plugin add ~/.cargo/bin/nu_plugin_tera; plugin use tera\\"","breadcrumbs":"Installation Steps » Install nu_plugin_tera (Template Rendering)","id":"143","title":"Install nu_plugin_tera (Template Rendering)"},"1430":{"body":"List various types of resources. # List providers\\nprovisioning list providers # List task services\\nprovisioning list taskservs # List clusters\\nprovisioning list clusters # List infrastructures\\nprovisioning list infras # List with selection interface\\nprovisioning list servers --select Subcommands: providers - Available providers taskservs - Available task services clusters - Available clusters infras - Available infrastructures servers - Server instances","breadcrumbs":"CLI Reference » list - List Resources","id":"1430","title":"list - List Resources"},"1431":{"body":"Validate configuration files and infrastructure definitions. # Validate configuration\\nprovisioning validate config --infra my-infra # Validate with detailed output\\nprovisioning validate config --detailed --infra my-infra # Validate specific file\\nprovisioning validate config settings.k --infra my-infra # Quick validation\\nprovisioning validate quick --infra my-infra # Validate interpolation\\nprovisioning validate interpolation --infra my-infra Subcommands: config - Configuration validation quick - Quick infrastructure validation interpolation - Interpolation pattern validation Options: --detailed - Show detailed validation results --strict - Strict validation mode --rules - Show validation rules","breadcrumbs":"CLI Reference » validate - Validate Configuration","id":"1431","title":"validate - Validate Configuration"},"1432":{"body":"","breadcrumbs":"CLI Reference » Configuration Commands","id":"1432","title":"Configuration Commands"},"1433":{"body":"Initialize user and project configurations. # Initialize user configuration\\nprovisioning init config # Initialize with specific template\\nprovisioning init config dev # Initialize project configuration\\nprovisioning init project # Force overwrite existing\\nprovisioning init config --force Subcommands: config - User configuration project - Project configuration Options: --template - Configuration template --force - Overwrite existing files","breadcrumbs":"CLI Reference » init - Initialize Configuration","id":"1433","title":"init - Initialize Configuration"},"1434":{"body":"Manage configuration templates. 
# List available templates\\nprovisioning template list # Show template content\\nprovisioning template show dev # Validate templates\\nprovisioning template validate # Create custom template\\nprovisioning template create my-template --from dev Subcommands: list - List available templates show - Display template content validate - Validate templates create - Create custom template","breadcrumbs":"CLI Reference » template - Template Management","id":"1434","title":"template - Template Management"},"1435":{"body":"","breadcrumbs":"CLI Reference » Advanced Commands","id":"1435","title":"Advanced Commands"},"1436":{"body":"Start interactive Nushell session with provisioning library loaded. # Start interactive shell\\nprovisioning nu # Execute specific command\\nprovisioning nu -c \\"use lib_provisioning *; show_env\\" # Start with custom script\\nprovisioning nu --script my-script.nu Options: -c - Execute command and exit --script - Run specific script --load - Load additional modules","breadcrumbs":"CLI Reference » nu - Interactive Shell","id":"1436","title":"nu - Interactive Shell"},"1437":{"body":"Edit encrypted configuration files using SOPS. # Edit encrypted file\\nprovisioning sops settings.k --infra my-infra # Encrypt new file\\nprovisioning sops --encrypt new-secrets.k --infra my-infra # Decrypt for viewing\\nprovisioning sops --decrypt secrets.k --infra my-infra # Rotate keys\\nprovisioning sops --rotate-keys secrets.k --infra my-infra Options: --encrypt - Encrypt file --decrypt - Decrypt file --rotate-keys - Rotate encryption keys","breadcrumbs":"CLI Reference » sops - Secret Management","id":"1437","title":"sops - Secret Management"},"1438":{"body":"Manage infrastructure contexts and environments. # Show current context\\nprovisioning context # List available contexts\\nprovisioning context list # Switch context\\nprovisioning context switch production # Create new context\\nprovisioning context create staging --from development # Delete context\\nprovisioning context delete old-context Subcommands: list - List contexts switch - Switch active context create - Create new context delete - Delete context","breadcrumbs":"CLI Reference » context - Context Management","id":"1438","title":"context - Context Management"},"1439":{"body":"","breadcrumbs":"CLI Reference » Workflow Commands","id":"1439","title":"Workflow Commands"},"144":{"body":"# Install from custom repository\\ncargo install --git https://repo.jesusperez.pro/jesus/nushell-plugins nu_plugin_kcl # Register with Nushell\\nnu -c \\"plugin add ~/.cargo/bin/nu_plugin_kcl; plugin use kcl\\"","breadcrumbs":"Installation Steps » Install nu_plugin_kcl (Optional, KCL Integration)","id":"144","title":"Install nu_plugin_kcl (Optional, KCL Integration)"},"1440":{"body":"Manage complex workflows and batch operations. # Submit batch workflow\\nprovisioning workflows batch submit my-workflow.k # Monitor workflow progress\\nprovisioning workflows batch monitor workflow-123 # List workflows\\nprovisioning workflows batch list --status running # Get workflow status\\nprovisioning workflows batch status workflow-123 # Rollback failed workflow\\nprovisioning workflows batch rollback workflow-123 Options: --status - Filter by workflow status --follow - Follow workflow progress --timeout - Set timeout for operations","breadcrumbs":"CLI Reference » workflows - Batch Operations","id":"1440","title":"workflows - Batch Operations"},"1441":{"body":"Control the hybrid orchestrator system. 
# Start orchestrator\\nprovisioning orchestrator start # Check orchestrator status\\nprovisioning orchestrator status # Stop orchestrator\\nprovisioning orchestrator stop # Show orchestrator logs\\nprovisioning orchestrator logs # Health check\\nprovisioning orchestrator health","breadcrumbs":"CLI Reference » orchestrator - Orchestrator Management","id":"1441","title":"orchestrator - Orchestrator Management"},"1442":{"body":"","breadcrumbs":"CLI Reference » Scripting and Automation","id":"1442","title":"Scripting and Automation"},"1443":{"body":"Provisioning uses standard exit codes: 0 - Success 1 - General error 2 - Invalid command or arguments 3 - Configuration error 4 - Permission denied 5 - Resource not found","breadcrumbs":"CLI Reference » Exit Codes","id":"1443","title":"Exit Codes"},"1444":{"body":"Control behavior through environment variables: # Enable debug mode\\nexport PROVISIONING_DEBUG=true # Set environment\\nexport PROVISIONING_ENV=production # Set output format\\nexport PROVISIONING_OUTPUT_FORMAT=json # Disable interactive prompts\\nexport PROVISIONING_NONINTERACTIVE=true","breadcrumbs":"CLI Reference » Environment Variables","id":"1444","title":"Environment Variables"},"1445":{"body":"#!/bin/bash\\n# Example batch script # Set environment\\nexport PROVISIONING_ENV=production\\nexport PROVISIONING_NONINTERACTIVE=true # Validate first\\nif ! provisioning validate config --infra production; then echo \\"Configuration validation failed\\" exit 1\\nfi # Create infrastructure\\nprovisioning server create --infra production --yes --wait # Install services\\nprovisioning taskserv create kubernetes --infra production --yes\\nprovisioning taskserv create postgresql --infra production --yes # Deploy clusters\\nprovisioning cluster create web-app --infra production --yes echo \\"Deployment completed successfully\\"","breadcrumbs":"CLI Reference » Batch Operations","id":"1445","title":"Batch Operations"},"1446":{"body":"# Get server list as JSON\\nservers=$(provisioning server list --infra my-infra --out json) # Process with jq\\necho \\"$servers\\" | jq \'.[] | select(.status == \\"running\\") | .name\' # Use in scripts\\nfor server in $(echo \\"$servers\\" | jq -r \'.[] | select(.status == \\"running\\") | .name\'); do echo \\"Processing server: $server\\" provisioning server ssh \\"$server\\" --command \\"uptime\\" --infra my-infra\\ndone","breadcrumbs":"CLI Reference » JSON Output Processing","id":"1446","title":"JSON Output Processing"},"1447":{"body":"","breadcrumbs":"CLI Reference » Command Chaining and Pipelines","id":"1447","title":"Command Chaining and Pipelines"},"1448":{"body":"# Chain commands with && (stop on failure)\\nprovisioning validate config --infra my-infra && \\\\\\nprovisioning server create --infra my-infra --check && \\\\\\nprovisioning server create --infra my-infra --yes # Chain with || (continue on failure)\\nprovisioning taskserv create kubernetes --infra my-infra || \\\\\\necho \\"Kubernetes installation failed, continuing with other services\\"","breadcrumbs":"CLI Reference » Sequential Operations","id":"1448","title":"Sequential Operations"},"1449":{"body":"# Full deployment workflow\\ndeploy_infrastructure() { local infra_name=$1 echo \\"Deploying infrastructure: $infra_name\\" # Validate provisioning validate config --infra \\"$infra_name\\" || return 1 # Create servers provisioning server create --infra \\"$infra_name\\" --yes --wait || return 1 # Install base services for service in containerd kubernetes; do provisioning taskserv create 
\\"$service\\" --infra \\"$infra_name\\" --yes || return 1 done # Deploy applications provisioning cluster create web-app --infra \\"$infra_name\\" --yes || return 1 echo \\"Deployment completed: $infra_name\\"\\n} # Use the function\\ndeploy_infrastructure \\"production\\"","breadcrumbs":"CLI Reference » Complex Workflows","id":"1449","title":"Complex Workflows"},"145":{"body":"# Start Nushell\\nnu # List installed plugins\\nplugin list # Expected output should include:\\n# - tera\\n# - kcl (if installed)","breadcrumbs":"Installation Steps » Verify Plugin Installation","id":"145","title":"Verify Plugin Installation"},"1450":{"body":"","breadcrumbs":"CLI Reference » Integration with Other Tools","id":"1450","title":"Integration with Other Tools"},"1451":{"body":"# GitLab CI example\\ndeploy: script: - provisioning validate config --infra production - provisioning server create --infra production --check - provisioning server create --infra production --yes --wait - provisioning taskserv create kubernetes --infra production --yes only: - main","breadcrumbs":"CLI Reference » CI/CD Integration","id":"1451","title":"CI/CD Integration"},"1452":{"body":"# Health check script\\n#!/bin/bash # Check infrastructure health\\nif provisioning health check --infra production --out json | jq -e \'.healthy\'; then echo \\"Infrastructure healthy\\" exit 0\\nelse echo \\"Infrastructure unhealthy\\" # Send alert curl -X POST https://alerts.company.com/webhook \\\\ -d \'{\\"message\\": \\"Infrastructure health check failed\\"}\' exit 1\\nfi","breadcrumbs":"CLI Reference » Monitoring Integration","id":"1452","title":"Monitoring Integration"},"1453":{"body":"# Backup script\\n#!/bin/bash DATE=$(date +%Y%m%d_%H%M%S)\\nBACKUP_DIR=\\"/backups/provisioning/$DATE\\" # Create backup directory\\nmkdir -p \\"$BACKUP_DIR\\" # Export configurations\\nprovisioning config export --format yaml > \\"$BACKUP_DIR/config.yaml\\" # Backup infrastructure definitions\\nfor infra in $(provisioning list infras --out json | jq -r \'.[]\'); do provisioning show settings --infra \\"$infra\\" --out yaml > \\"$BACKUP_DIR/$infra.yaml\\"\\ndone echo \\"Backup completed: $BACKUP_DIR\\" This CLI reference provides comprehensive coverage of all provisioning commands. Use it as your primary reference for command syntax, options, and integration patterns.","breadcrumbs":"CLI Reference » Backup Automation","id":"1453","title":"Backup Automation"},"1454":{"body":"Version : 2.0.0 Date : 2025-10-06 Status : Implemented","breadcrumbs":"Workspace Config Architecture » Workspace Configuration Architecture","id":"1454","title":"Workspace Configuration Architecture"},"1455":{"body":"The provisioning system now uses a workspace-based configuration architecture where each workspace has its own complete configuration structure. This replaces the old ENV-based and template-only system.","breadcrumbs":"Workspace Config Architecture » Overview","id":"1455","title":"Overview"},"1456":{"body":"config.defaults.toml is ONLY a template, NEVER loaded at runtime This file exists solely as a reference template for generating workspace configurations. 
The system does NOT load it during operation.","breadcrumbs":"Workspace Config Architecture » Critical Design Principle","id":"1456","title":"Critical Design Principle"},"1457":{"body":"Configuration is loaded in the following order (lowest to highest priority): Workspace Config (Base): {workspace}/config/provisioning.yaml Provider Configs : {workspace}/config/providers/*.toml Platform Configs : {workspace}/config/platform/*.toml User Context : ~/Library/Application Support/provisioning/ws_{name}.yaml Environment Variables : PROVISIONING_* (highest priority)","breadcrumbs":"Workspace Config Architecture » Configuration Hierarchy","id":"1457","title":"Configuration Hierarchy"},"1458":{"body":"When a workspace is initialized, the following structure is created: {workspace}/\\n├── config/\\n│ ├── provisioning.yaml # Main workspace config (generated from template)\\n│ ├── providers/ # Provider-specific configs\\n│ │ ├── aws.toml\\n│ │ ├── local.toml\\n│ │ └── upcloud.toml\\n│ ├── platform/ # Platform service configs\\n│ │ ├── orchestrator.toml\\n│ │ └── mcp.toml\\n│ └── kms.toml # KMS configuration\\n├── infra/ # Infrastructure definitions\\n├── .cache/ # Cache directory\\n├── .runtime/ # Runtime data\\n│ ├── taskservs/\\n│ └── clusters/\\n├── .providers/ # Provider state\\n├── .kms/ # Key management\\n│ └── keys/\\n├── generated/ # Generated files\\n└── .gitignore # Workspace gitignore\\n```plaintext ## Template System Templates are located at: `/Users/Akasha/project-provisioning/provisioning/config/templates/` ### Available Templates 1. **workspace-provisioning.yaml.template** - Main workspace configuration\\n2. **provider-aws.toml.template** - AWS provider configuration\\n3. **provider-local.toml.template** - Local provider configuration\\n4. **provider-upcloud.toml.template** - UpCloud provider configuration\\n5. **kms.toml.template** - KMS configuration\\n6. **user-context.yaml.template** - User context configuration ### Template Variables Templates support the following interpolation variables: - `{{workspace.name}}` - Workspace name\\n- `{{workspace.path}}` - Absolute path to workspace\\n- `{{now.iso}}` - Current timestamp in ISO format\\n- `{{env.HOME}}` - User\'s home directory\\n- `{{env.*}}` - Environment variables (safe list only)\\n- `{{paths.base}}` - Base path (after config load) ## Workspace Initialization ### Command ```bash\\n# Using the workspace init function\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/workspace/init.nu *; workspace-init \'my-workspace\' \'/path/to/workspace\' --providers [\'aws\' \'local\'] --activate\\"\\n```plaintext ### Process 1. **Create Directory Structure**: All necessary directories\\n2. **Generate Config from Template**: Creates `config/provisioning.yaml`\\n3. **Generate Provider Configs**: For each specified provider\\n4. **Generate KMS Config**: Security configuration\\n5. **Create User Context** (if --activate): User-specific overrides\\n6. **Create .gitignore**: Ignore runtime/cache files ## User Context User context files are stored per workspace: **Location**: `~/Library/Application Support/provisioning/ws_{workspace_name}.yaml` ### Purpose - Store user-specific overrides (debug settings, output preferences)\\n- Mark active workspace\\n- Override workspace paths if needed ### Example ```yaml\\nworkspace: name: \\"my-workspace\\" path: \\"/path/to/my-workspace\\" active: true debug: enabled: true log_level: \\"debug\\" output: format: \\"json\\" providers: default: \\"aws\\"\\n```plaintext ## Configuration Loading Process ### 1. 
Determine Active Workspace ```nushell\\n# Check user config directory for active workspace\\nlet user_config_dir = ~/Library/Application Support/provisioning/\\nlet active_workspace = (find workspace with active: true in ws_*.yaml files)\\n```plaintext ### 2. Load Workspace Config ```nushell\\n# Load main workspace config\\nlet workspace_config = {workspace.path}/config/provisioning.yaml\\n```plaintext ### 3. Load Provider Configs ```nushell\\n# Merge all provider configs\\nfor provider in {workspace.path}/config/providers/*.toml { merge provider config\\n}\\n```plaintext ### 4. Load Platform Configs ```nushell\\n# Merge all platform configs\\nfor platform in {workspace.path}/config/platform/*.toml { merge platform config\\n}\\n```plaintext ### 5. Apply User Context ```nushell\\n# Apply user-specific overrides\\nlet user_context = ~/Library/Application Support/provisioning/ws_{name}.yaml\\nmerge user_context (highest config priority)\\n```plaintext ### 6. Apply Environment Variables ```nushell\\n# Final overrides from environment\\nPROVISIONING_DEBUG=true\\nPROVISIONING_LOG_LEVEL=debug\\nPROVISIONING_PROVIDER=aws\\n# etc.\\n```plaintext ## Migration from Old System ### Before (ENV-based) ```bash\\nexport PROVISIONING=/usr/local/provisioning\\nexport PROVISIONING_INFRA_PATH=/path/to/infra\\nexport PROVISIONING_DEBUG=true\\n# ... many ENV variables\\n```plaintext ### After (Workspace-based) ```bash\\n# Initialize workspace\\nworkspace-init \\"production\\" \\"/workspaces/prod\\" --providers [\\"aws\\"] --activate # All config is now in workspace\\n# No ENV variables needed (except for overrides)\\n```plaintext ### Breaking Changes 1. **`config.defaults.toml` NOT loaded** - Only used as template\\n2. **Workspace required** - Must have active workspace or be in workspace directory\\n3. **New config locations** - User config in `~/Library/Application Support/provisioning/`\\n4. **YAML main config** - `provisioning.yaml` instead of TOML ## Workspace Management Commands ### Initialize Workspace ```nushell\\nuse provisioning/core/nulib/lib_provisioning/workspace/init.nu *\\nworkspace-init \\"my-workspace\\" \\"/path/to/workspace\\" --providers [\\"aws\\" \\"local\\"] --activate\\n```plaintext ### List Workspaces ```nushell\\nworkspace-list\\n```plaintext ### Activate Workspace ```nushell\\nworkspace-activate \\"my-workspace\\"\\n```plaintext ### Get Active Workspace ```nushell\\nworkspace-get-active\\n```plaintext ## Implementation Files ### Core Files 1. **Template Directory**: `/Users/Akasha/project-provisioning/provisioning/config/templates/`\\n2. **Workspace Init**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/workspace/init.nu`\\n3. **Config Loader**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/config/loader.nu` ### Key Changes in Config Loader #### Removed - `get-defaults-config-path()` - No longer loads config.defaults.toml\\n- Old hierarchy with user/project/infra TOML files #### Added - `get-active-workspace()` - Finds active workspace from user config\\n- Support for YAML config files\\n- Provider and platform config merging\\n- User context loading ## Configuration Schema ### Main Workspace Config (provisioning.yaml) ```yaml\\nworkspace: name: string version: string created: timestamp paths: base: string infra: string cache: string runtime: string # ... all paths core: version: string name: string debug: enabled: bool log_level: string # ... debug settings providers: active: [string] default: string # ... 
all other sections\\n```plaintext ### Provider Config (providers/*.toml) ```toml\\n[provider]\\nname = \\"aws\\"\\nenabled = true\\nworkspace = \\"workspace-name\\" [provider.auth]\\nprofile = \\"default\\"\\nregion = \\"us-east-1\\" [provider.paths]\\nbase = \\"{workspace}/.providers/aws\\"\\ncache = \\"{workspace}/.providers/aws/cache\\"\\n```plaintext ### User Context (ws_{name}.yaml) ```yaml\\nworkspace: name: string path: string active: bool debug: enabled: bool log_level: string output: format: string\\n```plaintext ## Benefits 1. **No Template Loading**: config.defaults.toml is template-only\\n2. **Workspace Isolation**: Each workspace is self-contained\\n3. **Explicit Configuration**: No hidden defaults from ENV\\n4. **Clear Hierarchy**: Predictable override behavior\\n5. **Multi-Workspace Support**: Easy switching between workspaces\\n6. **User Overrides**: Per-workspace user preferences\\n7. **Version Control**: Workspace configs can be committed (except secrets) ## Security Considerations ### Generated .gitignore The workspace .gitignore excludes: - `.cache/` - Cache files\\n- `.runtime/` - Runtime data\\n- `.providers/` - Provider state\\n- `.kms/keys/` - Secret keys\\n- `generated/` - Generated files\\n- `*.log` - Log files ### Secret Management - KMS keys stored in `.kms/keys/` (gitignored)\\n- SOPS config references keys, doesn\'t store them\\n- Provider credentials in user-specific locations (not workspace) ## Troubleshooting ### No Active Workspace Error ```plaintext\\nError: No active workspace found. Please initialize or activate a workspace.\\n```plaintext **Solution**: Initialize or activate a workspace: ```bash\\nworkspace-init \\"my-workspace\\" \\"/path/to/workspace\\" --activate\\n```plaintext ### Config File Not Found ```plaintext\\nError: Required configuration file not found: {workspace}/config/provisioning.yaml\\n```plaintext **Solution**: The workspace config is corrupted or deleted. Re-initialize: ```bash\\nworkspace-init \\"workspace-name\\" \\"/existing/path\\" --providers [\\"aws\\"]\\n```plaintext ### Provider Not Configured **Solution**: Add provider config to workspace: ```bash\\n# Generate provider config manually\\ngenerate-provider-config \\"/workspace/path\\" \\"workspace-name\\" \\"aws\\"\\n```plaintext ## Future Enhancements 1. **Workspace Templates**: Pre-configured workspace templates (dev, prod, test)\\n2. **Workspace Import/Export**: Share workspace configurations\\n3. **Remote Workspace**: Load workspace from remote Git repository\\n4. **Workspace Validation**: Comprehensive workspace health checks\\n5. 
**Config Migration Tool**: Automated migration from old ENV-based system ## Summary - **config.defaults.toml is ONLY a template** - Never loaded at runtime\\n- **Workspaces are self-contained** - Complete config structure generated from templates\\n- **New hierarchy**: Workspace → Provider → Platform → User Context → ENV\\n- **User context for overrides** - Stored in ~/Library/Application Support/provisioning/\\n- **Clear, explicit configuration** - No hidden defaults ## Related Documentation - Template files: `provisioning/config/templates/`\\n- Workspace init: `provisioning/core/nulib/lib_provisioning/workspace/init.nu`\\n- Config loader: `provisioning/core/nulib/lib_provisioning/config/loader.nu`\\n- User guide: `docs/user/workspace-management.md`","breadcrumbs":"Workspace Config Architecture » Workspace Structure","id":"1458","title":"Workspace Structure"},"1459":{"body":"This guide covers generating and managing temporary credentials (dynamic secrets) instead of using static secrets. See the Quick Reference section below for fast lookup.","breadcrumbs":"Dynamic Secrets Guide » Dynamic Secrets Guide","id":"1459","title":"Dynamic Secrets Guide"},"146":{"body":"Make the provisioning command available globally: # Option 1: Symlink to /usr/local/bin (recommended)\\nsudo ln -s \\"$(pwd)/provisioning/core/cli/provisioning\\" /usr/local/bin/provisioning # Option 2: Add to PATH in your shell profile\\necho \'export PATH=\\"$PATH:\'\\"$(pwd)\\"\'/provisioning/core/cli\\"\' >> ~/.bashrc # or ~/.zshrc\\nsource ~/.bashrc # or ~/.zshrc # Verify installation\\nprovisioning --version","breadcrumbs":"Installation Steps » Step 3: Add CLI to PATH","id":"146","title":"Step 3: Add CLI to PATH"},"1460":{"body":"Quick Start : Generate temporary credentials instead of using static secrets","breadcrumbs":"Dynamic Secrets Guide » Quick Reference","id":"1460","title":"Quick Reference"},"1461":{"body":"Generate AWS Credentials (1 hour) secrets generate aws --role deploy --workspace prod --purpose \\"deployment\\" Generate SSH Key (2 hours) secrets generate ssh --ttl 2 --workspace dev --purpose \\"server access\\" Generate UpCloud Subaccount (2 hours) secrets generate upcloud --workspace staging --purpose \\"testing\\" List Active Secrets secrets list Revoke Secret secrets revoke --reason \\"no longer needed\\" View Statistics secrets stats","breadcrumbs":"Dynamic Secrets Guide » Quick Commands","id":"1461","title":"Quick Commands"},"1462":{"body":"Type TTL Range Renewable Use Case AWS STS 15min - 12h ✅ Yes Cloud resource provisioning SSH Keys 10min - 24h ❌ No Temporary server access UpCloud 30min - 8h ❌ No UpCloud API operations Vault 5min - 24h ✅ Yes Any Vault-backed secret","breadcrumbs":"Dynamic Secrets Guide » Secret Types","id":"1462","title":"Secret Types"},"1463":{"body":"Base URL : http://localhost:9090/api/v1/secrets # Generate secret\\nPOST /generate # Get secret\\nGET /{id} # Revoke secret\\nPOST /{id}/revoke # Renew secret\\nPOST /{id}/renew # List secrets\\nGET /list # List expiring\\nGET /expiring # Statistics\\nGET /stats","breadcrumbs":"Dynamic Secrets Guide » REST API Endpoints","id":"1463","title":"REST API Endpoints"},"1464":{"body":"# Generate\\nlet creds = secrets generate aws ` --role deploy ` --region us-west-2 ` --workspace prod ` --purpose \\"Deploy servers\\" # Export to environment\\nexport-env { AWS_ACCESS_KEY_ID: ($creds.credentials.access_key_id) AWS_SECRET_ACCESS_KEY: ($creds.credentials.secret_access_key) AWS_SESSION_TOKEN: ($creds.credentials.session_token)\\n} # Use 
credentials\\nprovisioning server create # Cleanup\\nsecrets revoke ($creds.id) --reason \\"done\\"","breadcrumbs":"Dynamic Secrets Guide » AWS STS Example","id":"1464","title":"AWS STS Example"},"1465":{"body":"# Generate\\nlet key = secrets generate ssh ` --ttl 4 ` --workspace dev ` --purpose \\"Debug issue\\" # Save key\\n$key.credentials.private_key | save ~/.ssh/temp_key\\nchmod 600 ~/.ssh/temp_key # Use key\\nssh -i ~/.ssh/temp_key user@server # Cleanup\\nrm ~/.ssh/temp_key\\nsecrets revoke ($key.id) --reason \\"fixed\\"","breadcrumbs":"Dynamic Secrets Guide » SSH Key Example","id":"1465","title":"SSH Key Example"},"1466":{"body":"File : provisioning/platform/orchestrator/config.defaults.toml [secrets]\\ndefault_ttl_hours = 1\\nmax_ttl_hours = 12\\nauto_revoke_on_expiry = true\\nwarning_threshold_minutes = 5 aws_account_id = \\"123456789012\\"\\naws_default_region = \\"us-east-1\\" upcloud_username = \\"${UPCLOUD_USER}\\"\\nupcloud_password = \\"${UPCLOUD_PASS}\\"","breadcrumbs":"Dynamic Secrets Guide » Configuration","id":"1466","title":"Configuration"},"1467":{"body":"","breadcrumbs":"Dynamic Secrets Guide » Troubleshooting","id":"1467","title":"Troubleshooting"},"1468":{"body":"→ Check service initialization","breadcrumbs":"Dynamic Secrets Guide » \\"Provider not found\\"","id":"1468","title":"\\"Provider not found\\""},"1469":{"body":"→ Reduce TTL or configure higher max","breadcrumbs":"Dynamic Secrets Guide » \\"TTL exceeds maximum\\"","id":"1469","title":"\\"TTL exceeds maximum\\""},"147":{"body":"Generate keys for encrypting sensitive configuration: # Create Age key directory\\nmkdir -p ~/.config/provisioning/age # Generate private key\\nage-keygen -o ~/.config/provisioning/age/private_key.txt # Extract public key\\nage-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt # Secure the keys\\nchmod 600 ~/.config/provisioning/age/private_key.txt\\nchmod 644 ~/.config/provisioning/age/public_key.txt","breadcrumbs":"Installation Steps » Step 4: Generate Age Encryption Keys","id":"147","title":"Step 4: Generate Age Encryption Keys"},"1470":{"body":"→ Generate new secret instead","breadcrumbs":"Dynamic Secrets Guide » \\"Secret not renewable\\"","id":"1470","title":"\\"Secret not renewable\\""},"1471":{"body":"→ Check provider requirements (e.g., AWS needs \'role\')","breadcrumbs":"Dynamic Secrets Guide » \\"Missing required parameter\\"","id":"1471","title":"\\"Missing required parameter\\""},"1472":{"body":"✅ No static credentials stored ✅ Automatic expiration (1-12 hours) ✅ Auto-revocation on expiry ✅ Full audit trail ✅ Memory-only storage ✅ TLS in transit","breadcrumbs":"Dynamic Secrets Guide » Security Features","id":"1472","title":"Security Features"},"1473":{"body":"Orchestrator logs : provisioning/platform/orchestrator/data/orchestrator.log Debug secrets : secrets list | where is_expired == true","breadcrumbs":"Dynamic Secrets Guide » Support","id":"1473","title":"Support"},"1474":{"body":"Version : 1.0.0 | Date : 2025-10-06","breadcrumbs":"Mode System Guide » Mode System Quick Reference","id":"1474","title":"Mode System Quick Reference"},"1475":{"body":"# Check current mode\\nprovisioning mode current # List all available modes\\nprovisioning mode list # Switch to a different mode\\nprovisioning mode switch # Validate mode configuration\\nprovisioning mode validate\\n```plaintext --- ## Available Modes | Mode | Use Case | Auth | Orchestrator | OCI Registry |\\n|------|----------|------|--------------|--------------|\\n| **solo** | 
Local development | None | Local binary | Local Zot (optional) |\\n| **multi-user** | Team collaboration | Token (JWT) | Remote | Remote Harbor |\\n| **cicd** | CI/CD pipelines | Token (CI injected) | Remote | Remote Harbor |\\n| **enterprise** | Production | mTLS | Kubernetes HA | Harbor HA + DR | --- ## Mode Comparison ### Solo Mode - ✅ **Best for**: Individual developers\\n- 🔐 **Authentication**: None\\n- 🚀 **Services**: Local orchestrator only\\n- 📦 **Extensions**: Local filesystem\\n- 🔒 **Workspace Locking**: Disabled\\n- 💾 **Resource Limits**: Unlimited ### Multi-User Mode - ✅ **Best for**: Development teams (5-20 developers)\\n- 🔐 **Authentication**: Token (JWT, 24h expiry)\\n- 🚀 **Services**: Remote orchestrator, control-center, DNS, git\\n- 📦 **Extensions**: OCI registry (Harbor)\\n- 🔒 **Workspace Locking**: Enabled (Gitea provider)\\n- 💾 **Resource Limits**: 10 servers, 32 cores, 128GB per user ### CI/CD Mode - ✅ **Best for**: Automated pipelines\\n- 🔐 **Authentication**: Token (1h expiry, CI/CD injected)\\n- 🚀 **Services**: Remote orchestrator, DNS, git\\n- 📦 **Extensions**: OCI registry (always pull latest)\\n- 🔒 **Workspace Locking**: Disabled (stateless)\\n- 💾 **Resource Limits**: 5 servers, 16 cores, 64GB per pipeline ### Enterprise Mode - ✅ **Best for**: Large enterprises with strict compliance\\n- 🔐 **Authentication**: mTLS (TLS 1.3)\\n- 🚀 **Services**: All services on Kubernetes (HA)\\n- 📦 **Extensions**: OCI registry (signature verification)\\n- 🔒 **Workspace Locking**: Required (etcd provider)\\n- 💾 **Resource Limits**: 20 servers, 64 cores, 256GB per user --- ## Common Operations ### Initialize Mode System ```bash\\nprovisioning mode init\\n```plaintext ### Check Current Mode ```bash\\nprovisioning mode current # Output:\\n# mode: solo\\n# configured: true\\n# config_file: ~/.provisioning/config/active-mode.yaml\\n```plaintext ### List All Modes ```bash\\nprovisioning mode list # Output:\\n# ┌───────────────┬───────────────────────────────────┬─────────┐\\n# │ mode │ description │ current │\\n# ├───────────────┼───────────────────────────────────┼─────────┤\\n# │ solo │ Single developer local development │ ● │\\n# │ multi-user │ Team collaboration │ │\\n# │ cicd │ CI/CD pipeline execution │ │\\n# │ enterprise │ Production enterprise deployment │ │\\n# └───────────────┴───────────────────────────────────┴─────────┘\\n```plaintext ### Switch Mode ```bash\\n# Switch with confirmation\\nprovisioning mode switch multi-user # Dry run (preview changes)\\nprovisioning mode switch multi-user --dry-run # With validation\\nprovisioning mode switch multi-user --validate\\n```plaintext ### Show Mode Details ```bash\\n# Show current mode\\nprovisioning mode show # Show specific mode\\nprovisioning mode show enterprise\\n```plaintext ### Validate Mode ```bash\\n# Validate current mode\\nprovisioning mode validate # Validate specific mode\\nprovisioning mode validate cicd\\n```plaintext ### Compare Modes ```bash\\nprovisioning mode compare solo multi-user # Output shows differences in:\\n# - Authentication\\n# - Service deployments\\n# - Extension sources\\n# - Workspace locking\\n# - Security settings\\n```plaintext --- ## OCI Registry Management ### Solo Mode Only ```bash\\n# Start local OCI registry\\nprovisioning mode oci-registry start # Check registry status\\nprovisioning mode oci-registry status # View registry logs\\nprovisioning mode oci-registry logs # Stop registry\\nprovisioning mode oci-registry stop\\n```plaintext **Note**: OCI registry management only works in solo 
mode with local deployment. --- ## Mode-Specific Workflows ### Solo Mode Workflow ```bash\\n# 1. Initialize (defaults to solo)\\nprovisioning workspace init # 2. Start orchestrator\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background # 3. (Optional) Start OCI registry\\nprovisioning mode oci-registry start # 4. Create infrastructure\\nprovisioning server create web-01 --check\\nprovisioning taskserv create kubernetes # Extensions loaded from local filesystem\\n```plaintext ### Multi-User Mode Workflow ```bash\\n# 1. Switch to multi-user mode\\nprovisioning mode switch multi-user # 2. Authenticate\\nprovisioning auth login\\n# Enter JWT token from team admin # 3. Lock workspace\\nprovisioning workspace lock my-infra # 4. Pull extensions from OCI registry\\nprovisioning extension pull upcloud\\nprovisioning extension pull kubernetes # 5. Create infrastructure\\nprovisioning server create web-01 # 6. Unlock workspace\\nprovisioning workspace unlock my-infra\\n```plaintext ### CI/CD Mode Workflow ```yaml\\n# GitLab CI example\\ndeploy: stage: deploy script: # Token injected by CI - export PROVISIONING_MODE=cicd - mkdir -p /var/run/secrets/provisioning - echo \\"$PROVISIONING_TOKEN\\" > /var/run/secrets/provisioning/token # Validate - provisioning validate --all # Test - provisioning test quick kubernetes # Deploy - provisioning server create --check - provisioning server create after_script: - provisioning workspace cleanup\\n```plaintext ### Enterprise Mode Workflow ```bash\\n# 1. Switch to enterprise mode\\nprovisioning mode switch enterprise # 2. Verify Kubernetes connectivity\\nkubectl get pods -n provisioning-system # 3. Login to Harbor\\ndocker login harbor.enterprise.local # 4. Request workspace (requires approval)\\nprovisioning workspace request prod-deployment\\n# Approval from: platform-team, security-team # 5. After approval, lock workspace\\nprovisioning workspace lock prod-deployment --provider etcd # 6. Pull extensions (with signature verification)\\nprovisioning extension pull upcloud --verify-signature # 7. Deploy infrastructure\\nprovisioning infra create --check\\nprovisioning infra create # 8. Release workspace\\nprovisioning workspace unlock prod-deployment\\n```plaintext --- ## Configuration Files ### Mode Templates ```plaintext\\nworkspace/config/modes/\\n├── solo.yaml # Solo mode configuration\\n├── multi-user.yaml # Multi-user mode configuration\\n├── cicd.yaml # CI/CD mode configuration\\n└── enterprise.yaml # Enterprise mode configuration\\n```plaintext ### Active Mode Configuration ```plaintext\\n~/.provisioning/config/active-mode.yaml\\n```plaintext This file is created/updated when you switch modes. 
--- ## OCI Registry Namespaces All modes use the following OCI registry namespaces: | Namespace | Purpose | Example |\\n|-----------|---------|---------|\\n| `*-extensions` | Extension artifacts | `provisioning-extensions/upcloud:latest` |\\n| `*-kcl` | KCL package artifacts | `provisioning-kcl/lib:v1.0.0` |\\n| `*-platform` | Platform service images | `provisioning-platform/orchestrator:latest` |\\n| `*-test` | Test environment images | `provisioning-test/ubuntu:22.04` | **Note**: Prefix varies by mode (`dev-`, `provisioning-`, `cicd-`, `prod-`) --- ## Troubleshooting ### Mode switch fails ```bash\\n# Validate mode first\\nprovisioning mode validate # Check runtime requirements\\nprovisioning mode validate --check-requirements\\n```plaintext ### Cannot start OCI registry (solo mode) ```bash\\n# Check if registry binary is installed\\nwhich zot # Install Zot\\n# macOS: brew install project-zot/tap/zot\\n# Linux: Download from https://github.com/project-zot/zot/releases # Check if port 5000 is available\\nlsof -i :5000\\n```plaintext ### Authentication fails (multi-user/cicd/enterprise) ```bash\\n# Check token expiry\\nprovisioning auth status # Re-authenticate\\nprovisioning auth login # For enterprise mTLS, verify certificates\\nls -la /etc/provisioning/certs/\\n# Should contain: client.crt, client.key, ca.crt\\n```plaintext ### Workspace locking issues (multi-user/enterprise) ```bash\\n# Check lock status\\nprovisioning workspace lock-status # Force unlock (use with caution)\\nprovisioning workspace unlock --force # Check lock provider status\\n# Multi-user: Check Gitea connectivity\\ncurl -I https://git.company.local # Enterprise: Check etcd cluster\\netcdctl endpoint health\\n```plaintext ### OCI registry connection fails ```bash\\n# Test registry connectivity\\ncurl https://harbor.company.local/v2/ # Check authentication token\\ncat ~/.provisioning/tokens/oci # Verify network connectivity\\nping harbor.company.local # For Harbor, check credentials\\ndocker login harbor.company.local\\n```plaintext --- ## Environment Variables | Variable | Purpose | Example |\\n|----------|---------|---------|\\n| `PROVISIONING_MODE` | Override active mode | `export PROVISIONING_MODE=cicd` |\\n| `PROVISIONING_WORKSPACE_CONFIG` | Override config location | `~/.provisioning/config` |\\n| `PROVISIONING_PROJECT_ROOT` | Project root directory | `/opt/project-provisioning` | --- ## Best Practices ### 1. Use Appropriate Mode - **Solo**: Individual development, experimentation\\n- **Multi-User**: Team collaboration, shared infrastructure\\n- **CI/CD**: Automated testing and deployment\\n- **Enterprise**: Production deployments, compliance requirements ### 2. Validate Before Switching ```bash\\nprovisioning mode validate \\n```plaintext ### 3. Backup Active Configuration ```bash\\n# Automatic backup created when switching\\nls ~/.provisioning/config/active-mode.yaml.backup\\n```plaintext ### 4. Use Check Mode ```bash\\nprovisioning server create --check\\n```plaintext ### 5. Lock Workspaces in Multi-User/Enterprise ```bash\\nprovisioning workspace lock \\n# ... make changes ...\\nprovisioning workspace unlock \\n```plaintext ### 6. 
Pull Extensions from OCI (Multi-User/CI/CD/Enterprise) ```bash\\n# Don\'t use local extensions in shared modes\\nprovisioning extension pull \\n```plaintext --- ## Security Considerations ### Solo Mode - ⚠️ No authentication (local development only)\\n- ⚠️ No encryption (sensitive data should use SOPS)\\n- ✅ Isolated environment ### Multi-User Mode - ✅ Token-based authentication\\n- ✅ TLS in transit\\n- ✅ Audit logging\\n- ⚠️ No encryption at rest (configure as needed) ### CI/CD Mode - ✅ Token authentication (short expiry)\\n- ✅ Full encryption (at rest + in transit)\\n- ✅ KMS for secrets\\n- ✅ Vulnerability scanning (critical threshold)\\n- ✅ Image signing required ### Enterprise Mode - ✅ mTLS authentication\\n- ✅ Full encryption (at rest + in transit)\\n- ✅ KMS for all secrets\\n- ✅ Vulnerability scanning (critical threshold)\\n- ✅ Image signing + signature verification\\n- ✅ Network isolation\\n- ✅ Compliance policies (SOC2, ISO27001, HIPAA) --- ## Support and Documentation - **Implementation Summary**: `MODE_SYSTEM_IMPLEMENTATION_SUMMARY.md`\\n- **KCL Schemas**: `provisioning/kcl/modes.k`, `provisioning/kcl/oci_registry.k`\\n- **Mode Templates**: `workspace/config/modes/*.yaml`\\n- **Commands**: `provisioning/core/nulib/lib_provisioning/mode/` --- **Last Updated**: 2025-10-06 | **Version**: 1.0.0","breadcrumbs":"Mode System Guide » Quick Start","id":"1475","title":"Quick Start"},"1476":{"body":"Complete guide to workspace management in the provisioning platform.","breadcrumbs":"Workspace Guide » Workspace Guide","id":"1476","title":"Workspace Guide"},"1477":{"body":"The comprehensive workspace guide is available here: → Workspace Switching Guide - Complete workspace documentation This guide covers: Workspace creation and initialization Switching between multiple workspaces User preferences and configuration Workspace registry management Backup and restore operations","breadcrumbs":"Workspace Guide » 📖 Workspace Switching Guide","id":"1477","title":"📖 Workspace Switching Guide"},"1478":{"body":"# List all workspaces\\nprovisioning workspace list # Switch to a workspace\\nprovisioning workspace switch # Create new workspace\\nprovisioning workspace init # Show active workspace\\nprovisioning workspace active","breadcrumbs":"Workspace Guide » Quick Start","id":"1478","title":"Quick Start"},"1479":{"body":"Workspace Switching Guide - Complete guide Workspace Configuration - Configuration commands Workspace Setup - Initial setup guide For complete workspace documentation, see Workspace Switching Guide .","breadcrumbs":"Workspace Guide » Additional Workspace Resources","id":"1479","title":"Additional Workspace Resources"},"148":{"body":"Set up basic environment variables: # Create environment file\\ncat > ~/.provisioning/env << \'ENVEOF\'\\n# Provisioning Environment Configuration\\nexport PROVISIONING_ENV=dev\\nexport PROVISIONING_PATH=$(pwd)\\nexport PROVISIONING_KAGE=~/.config/provisioning/age\\nENVEOF # Source the environment\\nsource ~/.provisioning/env # Add to shell profile for persistence\\necho \'source ~/.provisioning/env\' >> ~/.bashrc # or ~/.zshrc","breadcrumbs":"Installation Steps » Step 5: Configure Environment","id":"148","title":"Step 5: Configure Environment"},"1480":{"body":"Version : 1.0.0 Last Updated : 2025-10-06 System Version : 2.0.5+","breadcrumbs":"Workspace Enforcement Guide » Workspace Enforcement and Version Tracking Guide","id":"1480","title":"Workspace Enforcement and Version Tracking Guide"},"1481":{"body":"Overview Workspace Requirement Version Tracking 
Migration Framework Command Reference Troubleshooting Best Practices","breadcrumbs":"Workspace Enforcement Guide » Table of Contents","id":"1481","title":"Table of Contents"},"1482":{"body":"The provisioning system now enforces mandatory workspace requirements for all infrastructure operations. This ensures: Consistent Environment : All operations run in a well-defined workspace Version Compatibility : Workspaces track provisioning and schema versions Safe Migrations : Automatic migration framework with backup/rollback support Configuration Isolation : Each workspace has isolated configurations and state","breadcrumbs":"Workspace Enforcement Guide » Overview","id":"1482","title":"Overview"},"1483":{"body":"✅ Mandatory Workspace : Most commands require an active workspace ✅ Version Tracking : Workspaces track system, schema, and format versions ✅ Compatibility Checks : Automatic validation before operations ✅ Migration Framework : Safe upgrades with backup/restore ✅ Clear Error Messages : Helpful guidance when workspace is missing or incompatible","breadcrumbs":"Workspace Enforcement Guide » Key Features","id":"1483","title":"Key Features"},"1484":{"body":"","breadcrumbs":"Workspace Enforcement Guide » Workspace Requirement","id":"1484","title":"Workspace Requirement"},"1485":{"body":"Almost all provisioning commands now require an active workspace: Infrastructure : server, taskserv, cluster, infra Orchestration : workflow, batch, orchestrator Development : module, layer, pack Generation : generate Configuration : Most config commands Test : test environment commands","breadcrumbs":"Workspace Enforcement Guide » Commands That Require Workspace","id":"1485","title":"Commands That Require Workspace"},"1486":{"body":"Only informational and workspace management commands work without a workspace: help - Help system version - Show version information workspace - Workspace management commands guide / sc - Documentation and quick reference nu - Start Nushell session nuinfo - Nushell information","breadcrumbs":"Workspace Enforcement Guide » Commands That Don\'t Require Workspace","id":"1486","title":"Commands That Don\'t Require Workspace"},"1487":{"body":"If you run a command without an active workspace, you\'ll see: ✗ Workspace Required No active workspace is configured. To get started: 1. Create a new workspace: provisioning workspace init 2. Or activate an existing workspace: provisioning workspace activate 3. List available workspaces: provisioning workspace list\\n```plaintext --- ## Version Tracking ### Workspace Metadata Each workspace maintains metadata in `.provisioning/metadata.yaml`: ```yaml\\nworkspace: name: \\"my-workspace\\" path: \\"/path/to/workspace\\" version: provisioning: \\"2.0.5\\" # System version when created/updated schema: \\"1.0.0\\" # KCL schema version workspace_format: \\"2.0.0\\" # Directory structure version created: \\"2025-10-06T12:00:00Z\\"\\nlast_updated: \\"2025-10-06T13:30:00Z\\" migration_history: [] compatibility: min_provisioning_version: \\"2.0.0\\" min_schema_version: \\"1.0.0\\"\\n```plaintext ### Version Components #### 1. Provisioning Version - **What**: Version of the provisioning system (CLI + libraries)\\n- **Example**: `2.0.5`\\n- **Purpose**: Ensures workspace is compatible with current system #### 2. Schema Version - **What**: Version of KCL schemas used in workspace\\n- **Example**: `1.0.0`\\n- **Purpose**: Tracks configuration schema compatibility #### 3. 
Workspace Format Version - **What**: Version of workspace directory structure\\n- **Example**: `2.0.0`\\n- **Purpose**: Ensures workspace has required directories and files ### Checking Workspace Version View workspace version information: ```bash\\n# Check active workspace version\\nprovisioning workspace version # Check specific workspace version\\nprovisioning workspace version my-workspace # JSON output\\nprovisioning workspace version --format json\\n```plaintext **Example Output**: ```plaintext\\nWorkspace Version Information System: Version: 2.0.5 Workspace: Name: my-workspace Path: /Users/user/workspaces/my-workspace Version: 2.0.5 Schema Version: 1.0.0 Format Version: 2.0.0 Created: 2025-10-06T12:00:00Z Last Updated: 2025-10-06T13:30:00Z Compatibility: Compatible: true Reason: version_match Message: Workspace and system versions match Migrations: Total: 0\\n```plaintext --- ## Migration Framework ### When Migration is Needed Migration is required when: 1. **No Metadata**: Workspace created before version tracking (< 2.0.5)\\n2. **Version Mismatch**: System version is newer than workspace version\\n3. **Breaking Changes**: Major version update with structural changes ### Compatibility Scenarios #### Scenario 1: No Metadata (Unknown Version) ```plaintext\\nWorkspace version is incompatible: Workspace: my-workspace Path: /path/to/workspace Workspace metadata not found or corrupted This workspace needs migration: Run workspace migration: provisioning workspace migrate my-workspace\\n```plaintext #### Scenario 2: Migration Available ```plaintext\\nℹ Migration available: Workspace can be updated from 2.0.0 to 2.0.5 Run: provisioning workspace migrate my-workspace\\n```plaintext #### Scenario 3: Workspace Too New ```plaintext\\nWorkspace version (3.0.0) is newer than system (2.0.5) Workspace is newer than the system: Workspace version: 3.0.0 System version: 2.0.5 Upgrade the provisioning system to use this workspace.\\n```plaintext ### Running Migrations #### Basic Migration Migrate active workspace to current system version: ```bash\\nprovisioning workspace migrate\\n```plaintext #### Migrate Specific Workspace ```bash\\nprovisioning workspace migrate my-workspace\\n```plaintext #### Migration Options ```bash\\n# Skip backup (not recommended)\\nprovisioning workspace migrate --skip-backup # Force without confirmation\\nprovisioning workspace migrate --force # Migrate to specific version\\nprovisioning workspace migrate --target-version 2.1.0\\n```plaintext ### Migration Process When you run a migration: 1. **Validation**: System validates workspace exists and needs migration\\n2. **Backup**: Creates timestamped backup in `.workspace_backups/`\\n3. **Confirmation**: Prompts for confirmation (unless `--force`)\\n4. **Migration**: Applies migration steps sequentially\\n5. **Verification**: Validates migration success\\n6. **Metadata Update**: Records migration in workspace metadata **Example Migration Output**: ```plaintext\\nWorkspace Migration Workspace: my-workspace\\nPath: /path/to/workspace Current version: unknown\\nTarget version: 2.0.5 This will migrate the workspace from unknown to 2.0.5\\nA backup will be created before migration. Continue with migration? 
(y/N): y Creating backup...\\n✓ Backup created: /path/.workspace_backups/my-workspace_backup_20251006_123000 Migration Strategy: Initialize metadata\\nDescription: Add metadata tracking to existing workspace\\nFrom: unknown → To: 2.0.5 Migrating workspace to version 2.0.5...\\n✓ Initialize metadata completed ✓ Migration completed successfully\\n```plaintext ### Workspace Backups #### List Backups ```bash\\n# List backups for active workspace\\nprovisioning workspace list-backups # List backups for specific workspace\\nprovisioning workspace list-backups my-workspace\\n```plaintext **Example Output**: ```plaintext\\nWorkspace Backups for my-workspace name created reason size\\nmy-workspace_backup_20251006_1200 2025-10-06T12:00:00Z pre_migration 2.3 MB\\nmy-workspace_backup_20251005_1500 2025-10-05T15:00:00Z pre_migration 2.1 MB\\n```plaintext #### Restore from Backup ```bash\\n# Restore workspace from backup\\nprovisioning workspace restore-backup /path/to/backup # Force restore without confirmation\\nprovisioning workspace restore-backup /path/to/backup --force\\n```plaintext **Restore Process**: ```plaintext\\nRestore Workspace from Backup Backup: /path/.workspace_backups/my-workspace_backup_20251006_1200\\nOriginal path: /path/to/workspace\\nCreated: 2025-10-06T12:00:00Z\\nReason: pre_migration ⚠ This will replace the current workspace at: /path/to/workspace Continue with restore? (y/N): y ✓ Workspace restored from backup\\n```plaintext --- ## Command Reference ### Workspace Version Commands ```bash\\n# Show workspace version information\\nprovisioning workspace version [workspace-name] [--format table|json|yaml] # Check compatibility\\nprovisioning workspace check-compatibility [workspace-name] # Migrate workspace\\nprovisioning workspace migrate [workspace-name] [--skip-backup] [--force] [--target-version VERSION] # List backups\\nprovisioning workspace list-backups [workspace-name] # Restore from backup\\nprovisioning workspace restore-backup [--force]\\n```plaintext ### Workspace Management Commands ```bash\\n# List all workspaces\\nprovisioning workspace list # Show active workspace\\nprovisioning workspace active # Activate workspace\\nprovisioning workspace activate # Create new workspace (includes metadata initialization)\\nprovisioning workspace init [path] # Register existing workspace\\nprovisioning workspace register # Remove workspace from registry\\nprovisioning workspace remove [--force]\\n```plaintext --- ## Troubleshooting ### Problem: \\"No active workspace\\" **Solution**: Activate or create a workspace ```bash\\n# List available workspaces\\nprovisioning workspace list # Activate existing workspace\\nprovisioning workspace activate my-workspace # Or create new workspace\\nprovisioning workspace init new-workspace\\n```plaintext ### Problem: \\"Workspace has invalid structure\\" **Symptoms**: Missing directories or configuration files **Solution**: Run migration to fix structure ```bash\\nprovisioning workspace migrate my-workspace\\n```plaintext ### Problem: \\"Workspace version is incompatible\\" **Solution**: Run migration to upgrade workspace ```bash\\nprovisioning workspace migrate\\n```plaintext ### Problem: Migration Failed **Solution**: Restore from automatic backup ```bash\\n# List backups\\nprovisioning workspace list-backups # Restore from most recent backup\\nprovisioning workspace restore-backup /path/to/backup\\n```plaintext ### Problem: Can\'t Activate Workspace After Migration **Possible Causes**: 1. Migration failed partially\\n2. 
Workspace path changed\\n3. Metadata corrupted **Solutions**: ```bash\\n# Check workspace compatibility\\nprovisioning workspace check-compatibility my-workspace # If corrupted, restore from backup\\nprovisioning workspace restore-backup /path/to/backup # If path changed, re-register\\nprovisioning workspace remove my-workspace\\nprovisioning workspace register my-workspace /new/path --activate\\n```plaintext --- ## Best Practices ### 1. Always Use Named Workspaces Create workspaces for different environments: ```bash\\nprovisioning workspace init dev ~/workspaces/dev --activate\\nprovisioning workspace init staging ~/workspaces/staging\\nprovisioning workspace init production ~/workspaces/production\\n```plaintext ### 2. Let System Create Backups Never use `--skip-backup` for important workspaces. Backups are cheap, data loss is expensive. ```bash\\n# Good: Default with backup\\nprovisioning workspace migrate # Risky: No backup\\nprovisioning workspace migrate --skip-backup # DON\'T DO THIS\\n```plaintext ### 3. Check Compatibility Before Operations Before major operations, verify workspace compatibility: ```bash\\nprovisioning workspace check-compatibility\\n```plaintext ### 4. Migrate After System Upgrades After upgrading the provisioning system: ```bash\\n# Check if migration available\\nprovisioning workspace version # Migrate if needed\\nprovisioning workspace migrate\\n```plaintext ### 5. Keep Backups for Safety Don\'t immediately delete old backups: ```bash\\n# List backups\\nprovisioning workspace list-backups # Keep at least 2-3 recent backups\\n```plaintext ### 6. Use Version Control for Workspace Configs Initialize git in workspace directory: ```bash\\ncd ~/workspaces/my-workspace\\ngit init\\ngit add config/ infra/\\ngit commit -m \\"Initial workspace configuration\\"\\n```plaintext Exclude runtime and cache directories in `.gitignore`: ```gitignore\\n.cache/\\n.runtime/\\n.provisioning/\\n.workspace_backups/\\n```plaintext ### 7. 
Document Custom Migrations If you need custom migration steps, document them: ```bash\\n# Create migration notes\\necho \\"Custom steps for v2 to v3 migration\\" > MIGRATION_NOTES.md\\n```plaintext --- ## Migration History Each migration is recorded in workspace metadata: ```yaml\\nmigration_history: - from_version: \\"unknown\\" to_version: \\"2.0.5\\" migration_type: \\"metadata_initialization\\" timestamp: \\"2025-10-06T12:00:00Z\\" success: true notes: \\"Initial metadata creation\\" - from_version: \\"2.0.5\\" to_version: \\"2.1.0\\" migration_type: \\"version_update\\" timestamp: \\"2025-10-15T10:30:00Z\\" success: true notes: \\"Updated to workspace switching support\\"\\n```plaintext View migration history: ```bash\\nprovisioning workspace version --format yaml | grep -A 10 \\"migration_history\\"\\n```plaintext --- ## Summary The workspace enforcement and version tracking system provides: - **Safety**: Mandatory workspace prevents accidental operations outside defined environments\\n- **Compatibility**: Version tracking ensures workspace works with current system\\n- **Upgradability**: Migration framework handles version transitions safely\\n- **Recoverability**: Automatic backups protect against migration failures **Key Commands**: ```bash\\n# Create workspace\\nprovisioning workspace init my-workspace --activate # Check version\\nprovisioning workspace version # Migrate if needed\\nprovisioning workspace migrate # List backups\\nprovisioning workspace list-backups\\n```plaintext For more information, see: - **Workspace Switching Guide**: `docs/user/WORKSPACE_SWITCHING_GUIDE.md`\\n- **Quick Reference**: `provisioning sc` or `provisioning guide quickstart`\\n- **Help System**: `provisioning help workspace` --- **Questions or Issues?** Check the troubleshooting section or run: ```bash\\nprovisioning workspace check-compatibility\\n```plaintext This will provide specific guidance for your situation.","breadcrumbs":"Workspace Enforcement Guide » What Happens Without a Workspace?","id":"1487","title":"What Happens Without a Workspace?"},"1488":{"body":"Version : 1.0.0 Last Updated : 2025-12-04","breadcrumbs":"Workspace Infra Reference » Unified Workspace:Infrastructure Reference System","id":"1488","title":"Unified Workspace:Infrastructure Reference System"},"1489":{"body":"The Workspace:Infrastructure Reference System provides a unified notation for managing workspaces and their associated infrastructure. 
This system eliminates the need to specify infrastructure separately and enables convenient defaults.","breadcrumbs":"Workspace Infra Reference » Overview","id":"1489","title":"Overview"},"149":{"body":"Create your first workspace: # Initialize a new workspace\\nprovisioning workspace init my-first-workspace # Expected output:\\n# ✓ Workspace \'my-first-workspace\' created successfully\\n# ✓ Configuration template generated\\n# ✓ Workspace activated # Verify workspace\\nprovisioning workspace list","breadcrumbs":"Installation Steps » Step 6: Initialize Workspace","id":"149","title":"Step 6: Initialize Workspace"},"1490":{"body":"","breadcrumbs":"Workspace Infra Reference » Quick Start","id":"1490","title":"Quick Start"},"1491":{"body":"Use the -ws flag with workspace:infra notation: # Use production workspace with sgoyol infrastructure for this command only\\nprovisioning server list -ws production:sgoyol # Use default infrastructure of active workspace\\nprovisioning taskserv create kubernetes\\n```plaintext ### Persistent Activation Activate a workspace with a default infrastructure: ```bash\\n# Activate librecloud workspace and set wuji as default infra\\nprovisioning workspace activate librecloud:wuji # Now all commands use librecloud:wuji by default\\nprovisioning server list\\n```plaintext ## Notation Syntax ### Basic Format ```plaintext\\nworkspace:infra\\n```plaintext | Part | Description | Example |\\n|------|-------------|---------|\\n| `workspace` | Workspace name | `librecloud` |\\n| `:` | Separator | - |\\n| `infra` | Infrastructure name | `wuji` | ### Examples | Notation | Workspace | Infrastructure |\\n|----------|-----------|-----------------|\\n| `librecloud:wuji` | librecloud | wuji |\\n| `production:sgoyol` | production | sgoyol |\\n| `dev:local` | dev | local |\\n| `librecloud` | librecloud | (from default or context) | ## Resolution Priority When no infrastructure is explicitly specified, the system uses this priority order: 1. 
**Explicit `--infra` flag** (highest) ```bash provisioning server list --infra another-infra PWD Detection cd workspace_librecloud/infra/wuji\\nprovisioning server list # Auto-detects wuji Default Infrastructure # If workspace has default_infra set\\nprovisioning server list # Uses configured default Error (no infra found) # Error: No infrastructure specified","breadcrumbs":"Workspace Infra Reference » Temporal Override (Single Command)","id":"1491","title":"Temporal Override (Single Command)"},"1492":{"body":"","breadcrumbs":"Workspace Infra Reference » Usage Patterns","id":"1492","title":"Usage Patterns"},"1493":{"body":"Use -ws to override workspace:infra for a single command: # Currently in librecloud:wuji context\\nprovisioning server list # Shows librecloud:wuji # Temporary override for this command only\\nprovisioning server list -ws production:sgoyol # Shows production:sgoyol # Back to original context\\nprovisioning server list # Shows librecloud:wuji again\\n```plaintext ### Pattern 2: Persistent Workspace Activation Set a workspace as active with a default infrastructure: ```bash\\n# List available workspaces\\nprovisioning workspace list # Activate with infra notation\\nprovisioning workspace activate production:sgoyol # All subsequent commands use production:sgoyol\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ### Pattern 3: PWD-Based Inference The system auto-detects workspace and infrastructure from your current directory: ```bash\\n# Your workspace structure\\nworkspace_librecloud/ infra/ wuji/ settings.k another/ settings.k # Navigation auto-detects context\\ncd workspace_librecloud/infra/wuji\\nprovisioning server list # Uses wuji automatically cd ../another\\nprovisioning server list # Switches to another\\n```plaintext ### Pattern 4: Default Infrastructure Management Set a workspace-specific default infrastructure: ```bash\\n# During activation\\nprovisioning workspace activate librecloud:wuji # Or explicitly after activation\\nprovisioning workspace set-default-infra librecloud another-infra # View current defaults\\nprovisioning workspace list\\n```plaintext ## Command Reference ### Workspace Commands ```bash\\n# Activate workspace with infra\\nprovisioning workspace activate workspace:infra # Switch to different workspace\\nprovisioning workspace switch workspace_name # List all workspaces\\nprovisioning workspace list # Show active workspace\\nprovisioning workspace active # Set default infrastructure\\nprovisioning workspace set-default-infra workspace_name infra_name # Get default infrastructure\\nprovisioning workspace get-default-infra workspace_name\\n```plaintext ### Common Commands with `-ws` ```bash\\n# Server operations\\nprovisioning server create -ws workspace:infra\\nprovisioning server list -ws workspace:infra\\nprovisioning server delete name -ws workspace:infra # Task service operations\\nprovisioning taskserv create kubernetes -ws workspace:infra\\nprovisioning taskserv delete kubernetes -ws workspace:infra # Infrastructure operations\\nprovisioning infra validate -ws workspace:infra\\nprovisioning infra list -ws workspace:infra\\n```plaintext ## Features ### ✅ Unified Notation - Single `workspace:infra` format for all references\\n- Works with all provisioning commands\\n- Backward compatible with existing workflows ### ✅ Temporal Override - Use `-ws` flag for single-command overrides\\n- No permanent state changes\\n- Automatically reverted after command ### ✅ Persistent Defaults - Set default infrastructure 
per workspace\\n- Eliminates repetitive `--infra` flags\\n- Survives across sessions ### ✅ Smart Detection - Auto-detects workspace from directory\\n- Auto-detects infrastructure from PWD\\n- Fallback to configured defaults ### ✅ Error Handling - Clear error messages when infra not found\\n- Validation of workspace and infra existence\\n- Helpful hints for missing configurations ## Environment Context ### TEMP_WORKSPACE Variable The system uses `$env.TEMP_WORKSPACE` for temporal overrides: ```bash\\n# Set temporarily (via -ws flag automatically)\\n$env.TEMP_WORKSPACE = \\"production\\" # Check current context\\necho $env.TEMP_WORKSPACE # Clear after use\\nhide-env TEMP_WORKSPACE\\n```plaintext ## Validation ### Validating Notation ```bash\\n# Valid notation formats\\nlibrecloud:wuji # Standard format\\nproduction:sgoyol.v2 # With dots and hyphens\\ndev-01:local-test # Multiple hyphens\\nprod123:infra456 # Numeric names # Special characters\\nlib-cloud_01:wu-ji.v2 # Mix of all allowed chars\\n```plaintext ### Error Cases ```bash\\n# Workspace not found\\nprovisioning workspace activate unknown:infra\\n# Error: Workspace \'unknown\' not found in registry # Infrastructure not found\\nprovisioning workspace activate librecloud:unknown\\n# Error: Infrastructure \'unknown\' not found in workspace \'librecloud\' # Empty specification\\nprovisioning workspace activate \\"\\"\\n# Error: Workspace \'\' not found in registry\\n```plaintext ## Configuration ### User Configuration Default infrastructure is stored in `~/Library/Application Support/provisioning/user_config.yaml`: ```yaml\\nactive_workspace: \\"librecloud\\" workspaces: - name: \\"librecloud\\" path: \\"/Users/you/workspaces/librecloud\\" last_used: \\"2025-12-04T12:00:00Z\\" default_infra: \\"wuji\\" # Default infrastructure - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-12-03T15:30:00Z\\" default_infra: \\"sgoyol\\"\\n```plaintext ### Workspace Schema In `provisioning/kcl/workspace_config.k`: ```kcl\\nschema InfraConfig: \\"\\"\\"Infrastructure context settings\\"\\"\\" current: str default?: str # Default infrastructure for workspace\\n```plaintext ## Best Practices ### 1. Use Persistent Activation for Long Sessions ```bash\\n# Good: Activate at start of session\\nprovisioning workspace activate production:sgoyol # Then use simple commands\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ### 2. Use Temporal Override for Ad-Hoc Operations ```bash\\n# Good: Quick one-off operation\\nprovisioning server list -ws production:other-infra # Avoid: Repeated -ws flags\\nprovisioning server list -ws prod:infra1\\nprovisioning taskserv list -ws prod:infra1 # Better to activate once\\n```plaintext ### 3. Navigate with PWD for Context Awareness ```bash\\n# Good: Navigate to infrastructure directory\\ncd workspace_librecloud/infra/wuji\\nprovisioning server list # Auto-detects context # Works well with: cd - history, terminal multiplexer panes\\n```plaintext ### 4. 
Set Meaningful Defaults ```bash\\n# Good: Default to production infrastructure\\nprovisioning workspace activate production:main-infra # Avoid: Default to dev infrastructure in production workspace\\n```plaintext ## Troubleshooting ### Issue: \\"Workspace not found in registry\\" **Solution**: Register the workspace first ```bash\\nprovisioning workspace register librecloud /path/to/workspace_librecloud\\n```plaintext ### Issue: \\"Infrastructure not found\\" **Solution**: Verify infrastructure directory exists ```bash\\nls workspace_librecloud/infra/ # Check available infras\\nprovisioning workspace activate librecloud:wuji # Use correct name\\n```plaintext ### Issue: Temporal override not working **Solution**: Ensure you\'re using `-ws` flag correctly ```bash\\n# Correct\\nprovisioning server list -ws production:sgoyol # Incorrect (missing space)\\nprovisioning server list-wsproduction:sgoyol # Incorrect (ws is not a command)\\nprovisioning -ws production:sgoyol server list\\n```plaintext ### Issue: PWD detection not working **Solution**: Navigate to proper infrastructure directory ```bash\\n# Must be in workspace structure\\ncd workspace_name/infra/infra_name # Then run command\\nprovisioning server list\\n```plaintext ## Migration from Old System ### Old Way ```bash\\nprovisioning workspace activate librecloud\\nprovisioning --infra wuji server list\\nprovisioning --infra wuji taskserv create kubernetes\\n```plaintext ### New Way ```bash\\nprovisioning workspace activate librecloud:wuji\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ## Performance Notes - **Notation parsing**: <1ms per command\\n- **Workspace detection**: <5ms from PWD\\n- **Workspace switching**: ~100ms (includes platform activation)\\n- **Temporal override**: No additional overhead ## Backward Compatibility All existing commands and flags continue to work: ```bash\\n# Old syntax still works\\nprovisioning --infra wuji server list # New syntax also works\\nprovisioning server list -ws librecloud:wuji # Mix and match\\nprovisioning --infra other-infra server list -ws librecloud:wuji\\n# Uses other-infra (explicit flag takes priority)\\n```plaintext ## See Also - `provisioning help workspace` - Workspace commands\\n- `provisioning help infra` - Infrastructure commands\\n- `docs/architecture/ARCHITECTURE_OVERVIEW.md` - Overall architecture\\n- `docs/user/WORKSPACE_SWITCHING_GUIDE.md` - Workspace switching details","breadcrumbs":"Workspace Infra Reference » Pattern 1: Temporal Override for Commands","id":"1493","title":"Pattern 1: Temporal Override for Commands"},"1494":{"body":"","breadcrumbs":"Workspace Config Commands » Workspace Configuration Management Commands","id":"1494","title":"Workspace Configuration Management Commands"},"1495":{"body":"The workspace configuration management commands provide a comprehensive set of tools for viewing, editing, validating, and managing workspace configurations.","breadcrumbs":"Workspace Config Commands » Overview","id":"1495","title":"Overview"},"1496":{"body":"Command Description workspace config show Display workspace configuration workspace config validate Validate all configuration files workspace config generate provider Generate provider configuration from template workspace config edit Edit configuration files workspace config hierarchy Show configuration loading hierarchy workspace config list List all configuration files","breadcrumbs":"Workspace Config Commands » Command Summary","id":"1496","title":"Command 
Summary"},"1497":{"body":"","breadcrumbs":"Workspace Config Commands » Commands","id":"1497","title":"Commands"},"1498":{"body":"Display the complete workspace configuration in various formats. # Show active workspace config (YAML format)\\nprovisioning workspace config show # Show specific workspace config\\nprovisioning workspace config show my-workspace # Show in JSON format\\nprovisioning workspace config show --out json # Show in TOML format\\nprovisioning workspace config show --out toml # Show specific workspace in JSON\\nprovisioning workspace config show my-workspace --out json\\n```plaintext **Output:** Complete workspace configuration in the specified format ### Validate Workspace Configuration Validate all configuration files for syntax and required sections. ```bash\\n# Validate active workspace\\nprovisioning workspace config validate # Validate specific workspace\\nprovisioning workspace config validate my-workspace\\n```plaintext **Checks performed:** - Main config (`provisioning.yaml`) - YAML syntax and required sections\\n- Provider configs (`providers/*.toml`) - TOML syntax\\n- Platform service configs (`platform/*.toml`) - TOML syntax\\n- KMS config (`kms.toml`) - TOML syntax **Output:** Validation report with success/error indicators ### Generate Provider Configuration Generate a provider configuration file from a template. ```bash\\n# Generate AWS provider config for active workspace\\nprovisioning workspace config generate provider aws # Generate UpCloud provider config for specific workspace\\nprovisioning workspace config generate provider upcloud --infra my-workspace # Generate local provider config\\nprovisioning workspace config generate provider local\\n```plaintext **What it does:** 1. Locates provider template in `extensions/providers/{name}/config.defaults.toml`\\n2. Interpolates workspace-specific values (`{{workspace.name}}`, `{{workspace.path}}`)\\n3. Saves to `{workspace}/config/providers/{name}.toml` **Output:** Generated configuration file ready for customization ### Edit Configuration Files Open configuration files in your editor for modification. ```bash\\n# Edit main workspace config\\nprovisioning workspace config edit main # Edit specific provider config\\nprovisioning workspace config edit provider aws # Edit platform service config\\nprovisioning workspace config edit platform orchestrator # Edit KMS config\\nprovisioning workspace config edit kms # Edit for specific workspace\\nprovisioning workspace config edit provider upcloud --infra my-workspace\\n```plaintext **Editor used:** Value of `$EDITOR` environment variable (defaults to `vi`) **Config types:** - `main` - Main workspace configuration (`provisioning.yaml`)\\n- `provider ` - Provider configuration (`providers/{name}.toml`)\\n- `platform ` - Platform service configuration (`platform/{name}.toml`)\\n- `kms` - KMS configuration (`kms.toml`) ### Show Configuration Hierarchy Display the configuration loading hierarchy and precedence. ```bash\\n# Show hierarchy for active workspace\\nprovisioning workspace config hierarchy # Show hierarchy for specific workspace\\nprovisioning workspace config hierarchy my-workspace\\n```plaintext **Output:** Visual hierarchy showing: 1. Environment Variables (highest priority)\\n2. User Context\\n3. Platform Services\\n4. Provider Configs\\n5. Workspace Config (lowest priority) ### List Configuration Files List all configuration files for a workspace. 
```bash\\n# List all configs\\nprovisioning workspace config list # List only provider configs\\nprovisioning workspace config list --type provider # List only platform configs\\nprovisioning workspace config list --type platform # List only KMS config\\nprovisioning workspace config list --type kms # List for specific workspace\\nprovisioning workspace config list my-workspace --type all\\n```plaintext **Output:** Table of configuration files with type, name, and path ## Workspace Selection All config commands support two ways to specify the workspace: 1. **Active Workspace** (default): ```bash provisioning workspace config show Specific Workspace (using --infra flag): provisioning workspace config show --infra my-workspace","breadcrumbs":"Workspace Config Commands » Show Workspace Configuration","id":"1498","title":"Show Workspace Configuration"},"1499":{"body":"Workspace configurations are organized in a standard structure: {workspace}/\\n├── config/\\n│ ├── provisioning.yaml # Main workspace config\\n│ ├── providers/ # Provider configurations\\n│ │ ├── aws.toml\\n│ │ ├── upcloud.toml\\n│ │ └── local.toml\\n│ ├── platform/ # Platform service configs\\n│ │ ├── orchestrator.toml\\n│ │ ├── control-center.toml\\n│ │ └── mcp.toml\\n│ └── kms.toml # KMS configuration\\n```plaintext ## Configuration Hierarchy Configuration values are loaded in the following order (highest to lowest priority): 1. **Environment Variables** - `PROVISIONING_*` variables\\n2. **User Context** - `~/Library/Application Support/provisioning/ws_{name}.yaml`\\n3. **Platform Services** - `{workspace}/config/platform/*.toml`\\n4. **Provider Configs** - `{workspace}/config/providers/*.toml`\\n5. **Workspace Config** - `{workspace}/config/provisioning.yaml` Higher priority values override lower priority values. ## Examples ### Complete Workflow ```bash\\n# 1. Create new workspace with activation\\nprovisioning workspace init my-project ~/workspaces/my-project --providers [aws,local] --activate # 2. Validate configuration\\nprovisioning workspace config validate # 3. View configuration hierarchy\\nprovisioning workspace config hierarchy # 4. Generate additional provider config\\nprovisioning workspace config generate provider upcloud # 5. Edit provider settings\\nprovisioning workspace config edit provider upcloud # 6. List all configs\\nprovisioning workspace config list # 7. Show complete config in JSON\\nprovisioning workspace config show --out json # 8. Validate everything\\nprovisioning workspace config validate\\n```plaintext ### Multi-Workspace Management ```bash\\n# Create multiple workspaces\\nprovisioning workspace init dev ~/workspaces/dev --activate\\nprovisioning workspace init staging ~/workspaces/staging\\nprovisioning workspace init prod ~/workspaces/prod # Validate specific workspace\\nprovisioning workspace config validate staging # Show config for production\\nprovisioning workspace config show prod --out yaml # Edit provider for specific workspace\\nprovisioning workspace config edit provider aws --infra prod\\n```plaintext ### Configuration Troubleshooting ```bash\\n# 1. Validate all configs\\nprovisioning workspace config validate # 2. If errors, check hierarchy\\nprovisioning workspace config hierarchy # 3. List all config files\\nprovisioning workspace config list # 4. Edit problematic config\\nprovisioning workspace config edit provider aws # 5. 
Validate again\\nprovisioning workspace config validate\\n```plaintext ## Integration with Other Commands Config commands integrate seamlessly with other workspace operations: ```bash\\n# Create workspace with providers\\nprovisioning workspace init my-app ~/apps/my-app --providers [aws,upcloud] --activate # Generate additional configs\\nprovisioning workspace config generate provider local # Validate before deployment\\nprovisioning workspace config validate # Deploy infrastructure\\nprovisioning server create --infra my-app\\n```plaintext ## Tips 1. **Always validate after editing**: Run `workspace config validate` after manual edits 2. **Use hierarchy to understand precedence**: Run `workspace config hierarchy` to see which config files are being used 3. **Generate from templates**: Use `config generate provider` rather than creating configs manually 4. **Check before activation**: Validate a workspace before activating it as default 5. **Use --out json for scripting**: JSON output is easier to parse in scripts ## See Also - [Workspace Initialization](workspace-initialization.md)\\n- [Provider Configuration](provider-configuration.md)\\n- Configuration Architecture","breadcrumbs":"Workspace Config Commands » Configuration File Locations","id":"1499","title":"Configuration File Locations"},"15":{"body":"","breadcrumbs":"Installation Guide » System Requirements","id":"15","title":"System Requirements"},"150":{"body":"Run the installation verification: # Check system configuration\\nprovisioning validate config # Check all dependencies\\nprovisioning env # View detailed environment\\nprovisioning allenv Expected output should show: ✅ All core dependencies installed ✅ Age keys configured ✅ Workspace initialized ✅ Configuration valid","breadcrumbs":"Installation Steps » Step 7: Validate Installation","id":"150","title":"Step 7: Validate Installation"},"1500":{"body":"This guide covers the unified configuration rendering system in the CLI daemon that supports KCL, Nickel, and Tera template engines.","breadcrumbs":"Config Rendering Guide » Configuration Rendering Guide","id":"1500","title":"Configuration Rendering Guide"},"1501":{"body":"The CLI daemon (cli-daemon) provides a high-performance REST API for rendering configurations in three different formats: KCL : Type-safe infrastructure configuration language (familiar, existing patterns) Nickel : Functional configuration language with lazy evaluation (excellent for complex configs) Tera : Jinja2-compatible template engine (simple templating) All three renderers are accessible through a single unified API endpoint with intelligent caching to minimize latency.","breadcrumbs":"Config Rendering Guide » Overview","id":"1501","title":"Overview"},"1502":{"body":"","breadcrumbs":"Config Rendering Guide » Quick Start","id":"1502","title":"Quick Start"},"1503":{"body":"The daemon runs on port 9091 by default: # Start in background\\n./target/release/cli-daemon & # Check it\'s running\\ncurl http://localhost:9091/health\\n```plaintext ### Simple KCL Rendering ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = \\\\\\"my-server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\", \\"name\\": \\"server-config\\" }\'\\n```plaintext **Response**: ```json\\n{ \\"rendered\\": \\"name = \\\\\\"my-server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\", \\"error\\": null, \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 45\\n}\\n```plaintext ## REST API Reference 
### POST /config/render Render a configuration in any supported language. **Request Headers**: ```plaintext\\nContent-Type: application/json\\n```plaintext **Request Body**: ```json\\n{ \\"language\\": \\"kcl|nickel|tera\\", \\"content\\": \\"...configuration content...\\", \\"context\\": { \\"key1\\": \\"value1\\", \\"key2\\": 123 }, \\"name\\": \\"optional-config-name\\"\\n}\\n```plaintext **Parameters**: | Parameter | Type | Required | Description |\\n|-----------|------|----------|-------------|\\n| `language` | string | Yes | One of: `kcl`, `nickel`, `tera` |\\n| `content` | string | Yes | The configuration or template content to render |\\n| `context` | object | No | Variables to pass to the configuration (JSON object) |\\n| `name` | string | No | Optional name for logging purposes | **Response** (Success): ```json\\n{ \\"rendered\\": \\"...rendered output...\\", \\"error\\": null, \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 23\\n}\\n```plaintext **Response** (Error): ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: undefined variable \'name\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 18\\n}\\n```plaintext **Status Codes**: - `200 OK` - Rendering completed (check `error` field in body for evaluation errors)\\n- `400 Bad Request` - Invalid request format\\n- `500 Internal Server Error` - Daemon error ### GET /config/stats Get rendering statistics across all languages. **Response**: ```json\\n{ \\"total_renders\\": 156, \\"successful_renders\\": 154, \\"failed_renders\\": 2, \\"average_time_ms\\": 28, \\"kcl_renders\\": 78, \\"nickel_renders\\": 52, \\"tera_renders\\": 26, \\"kcl_cache_hits\\": 68, \\"nickel_cache_hits\\": 35, \\"tera_cache_hits\\": 18\\n}\\n```plaintext ### POST /config/stats/reset Reset all rendering statistics. 
**Response**: ```json\\n{ \\"status\\": \\"success\\", \\"message\\": \\"Configuration rendering statistics reset\\"\\n}\\n```plaintext ## KCL Rendering ### Basic KCL Configuration ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"\\nname = \\\\\\"production-server\\\\\\"\\ntype = \\\\\\"web\\\\\\"\\ncpu = 4\\nmemory = 8192\\ndisk = 50 tags = { environment = \\\\\\"production\\\\\\" team = \\\\\\"platform\\\\\\"\\n}\\n\\", \\"name\\": \\"prod-server-config\\" }\'\\n```plaintext ### KCL with Context Variables Pass context variables using the `-D` flag syntax internally: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"\\nname = option(\\\\\\"server_name\\\\\\", default=\\\\\\"default-server\\\\\\")\\nenvironment = option(\\\\\\"env\\\\\\", default=\\\\\\"dev\\\\\\")\\ncpu = option(\\\\\\"cpu_count\\\\\\", default=2)\\nmemory = option(\\\\\\"memory_mb\\\\\\", default=2048)\\n\\", \\"context\\": { \\"server_name\\": \\"app-server-01\\", \\"env\\": \\"production\\", \\"cpu_count\\": 8, \\"memory_mb\\": 16384 }, \\"name\\": \\"server-with-context\\" }\'\\n```plaintext ### Expected KCL Rendering Time - **First render (cache miss)**: 20-50ms\\n- **Cached render (same content)**: 1-5ms\\n- **Large configs (100+ variables)**: 50-100ms ## Nickel Rendering ### Basic Nickel Configuration ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{ name = \\\\\\"production-server\\\\\\", type = \\\\\\"web\\\\\\", cpu = 4, memory = 8192, disk = 50, tags = { environment = \\\\\\"production\\\\\\", team = \\\\\\"platform\\\\\\" }\\n}\\", \\"name\\": \\"nickel-server-config\\" }\'\\n```plaintext ### Nickel with Lazy Evaluation Nickel excels at evaluating only what\'s needed: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{ server = { name = \\\\\\"db-01\\\\\\", # Expensive computation - only computed if accessed health_check = std.array.fold (fun acc x => acc + x) 0 [1, 2, 3, 4, 5] }, networking = { dns_servers = [\\\\\\"8.8.8.8\\\\\\", \\\\\\"8.8.4.4\\\\\\"], firewall_rules = [\\\\\\"allow_ssh\\\\\\", \\\\\\"allow_https\\\\\\"] }\\n}\\", \\"context\\": { \\"only_server\\": true } }\'\\n```plaintext ### Expected Nickel Rendering Time - **First render (cache miss)**: 30-60ms\\n- **Cached render (same content)**: 1-5ms\\n- **Large configs with lazy evaluation**: 40-80ms **Advantage**: Nickel only computes fields that are actually used in the output ## Tera Template Rendering ### Basic Tera Template ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"\\nServer Configuration\\n==================== Name: {{ server_name }}\\nEnvironment: {{ environment | default(value=\\\\\\"development\\\\\\") }}\\nType: {{ server_type }} Assigned Tasks:\\n{% for task in tasks %} - {{ task }}\\n{% endfor %} {% if enable_monitoring %}\\nMonitoring: ENABLED - Prometheus: true - Grafana: true\\n{% else %}\\nMonitoring: DISABLED\\n{% endif %}\\n\\", \\"context\\": { \\"server_name\\": \\"prod-web-01\\", \\"environment\\": \\"production\\", \\"server_type\\": \\"web\\", \\"tasks\\": 
[\\"kubernetes\\", \\"prometheus\\", \\"cilium\\"], \\"enable_monitoring\\": true }, \\"name\\": \\"server-template\\" }\'\\n```plaintext ### Tera Filters and Functions Tera supports Jinja2-compatible filters and functions: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"\\nConfiguration for {{ environment | upper }}\\nServers: {{ server_count | default(value=1) }}\\nCost estimate: \\\\${{ monthly_cost | round(precision=2) }} {% for server in servers | reverse %}\\n- {{ server.name }}: {{ server.cpu }} CPUs\\n{% endfor %}\\n\\", \\"context\\": { \\"environment\\": \\"production\\", \\"server_count\\": 5, \\"monthly_cost\\": 1234.567, \\"servers\\": [ {\\"name\\": \\"web-01\\", \\"cpu\\": 4}, {\\"name\\": \\"db-01\\", \\"cpu\\": 8}, {\\"name\\": \\"cache-01\\", \\"cpu\\": 2} ] } }\'\\n```plaintext ### Expected Tera Rendering Time - **Simple templates**: 4-10ms\\n- **Complex templates with loops**: 10-20ms\\n- **Always fast** (template is pre-compiled) ## Performance Characteristics ### Caching Strategy All three renderers use LRU (Least Recently Used) caching: - **Cache Size**: 100 entries per renderer\\n- **Cache Key**: SHA256 hash of (content + context)\\n- **Cache Hit**: Typically < 5ms\\n- **Cache Miss**: Language-dependent (20-60ms) **To maximize cache hits**: 1. Render the same config multiple times → hits after first render\\n2. Use static content when possible → better cache reuse\\n3. Monitor cache hit ratio via `/config/stats` ### Benchmarks Comparison of rendering times (on commodity hardware): | Scenario | KCL | Nickel | Tera |\\n|----------|-----|--------|------|\\n| Simple config (10 vars) | 20ms | 30ms | 5ms |\\n| Medium config (50 vars) | 35ms | 45ms | 8ms |\\n| Large config (100+ vars) | 50-100ms | 50-80ms | 10ms |\\n| Cached render | 1-5ms | 1-5ms | 1-5ms | ### Memory Usage - Each renderer keeps 100 cached entries in memory\\n- Average config size in cache: ~5KB\\n- Maximum memory per renderer: ~500KB + overhead ## Error Handling ### Common Errors #### KCL Binary Not Found **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL binary not found in PATH. Install KCL or set KCL_PATH environment variable\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 0\\n}\\n```plaintext **Solution**: ```bash\\n# Install KCL\\nkcl version # Or set explicit path\\nexport KCL_PATH=/usr/local/bin/kcl\\n```plaintext #### Invalid KCL Syntax **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: Parse error at line 3: expected \'=\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 12\\n}\\n```plaintext **Solution**: Verify KCL syntax. Run `kcl eval file.k` directly for better error messages. #### Missing Context Variable **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: undefined variable \'required_var\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 8\\n}\\n```plaintext **Solution**: Provide required context variables or use `option()` with defaults. #### Invalid JSON in Context **HTTP Status**: `400 Bad Request`\\n**Body**: Error message about invalid JSON **Solution**: Ensure context is valid JSON. 
## Integration Examples ### Using with Nushell ```nushell\\n# Render a KCL config from Nushell\\nlet config = open workspace/config/provisioning.k | into string\\nlet response = curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d $\\"{{ language: \\\\\\"kcl\\\\\\", content: $config }}\\" | from json print $response.rendered\\n```plaintext ### Using with Python ```python\\nimport requests\\nimport json def render_config(language, content, context=None, name=None): payload = { \\"language\\": language, \\"content\\": content, \\"context\\": context or {}, \\"name\\": name } response = requests.post( \\"http://localhost:9091/config/render\\", json=payload ) return response.json() # Example usage\\nresult = render_config( \\"kcl\\", \'name = \\"server\\"\\\\ncpu = 4\', {\\"name\\": \\"prod-server\\"}, \\"my-config\\"\\n) if result[\\"error\\"]: print(f\\"Error: {result[\'error\']}\\")\\nelse: print(f\\"Rendered in {result[\'execution_time_ms\']}ms\\") print(result[\\"rendered\\"])\\n```plaintext ### Using with Curl ```bash\\n#!/bin/bash # Function to render config\\nrender_config() { local language=$1 local content=$2 local name=${3:-\\"unnamed\\"} curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d @- << EOF\\n{ \\"language\\": \\"$language\\", \\"content\\": $(echo \\"$content\\" | jq -Rs .), \\"name\\": \\"$name\\"\\n}\\nEOF\\n} # Usage\\nrender_config \\"kcl\\" \\"name = \\\\\\"my-server\\\\\\"\\" \\"server-config\\"\\n```plaintext ## Troubleshooting ### Daemon Won\'t Start **Check log level**: ```bash\\nPROVISIONING_LOG_LEVEL=debug ./target/release/cli-daemon\\n```plaintext **Verify Nushell binary**: ```bash\\nwhich nu\\n# or set explicit path\\nNUSHELL_PATH=/usr/local/bin/nu ./target/release/cli-daemon\\n```plaintext ### Very Slow Rendering **Check cache hit rate**: ```bash\\ncurl http://localhost:9091/config/stats | jq \'.kcl_cache_hits / .kcl_renders\'\\n```plaintext **If low cache hit rate**: Rendering same configs repeatedly? **Monitor execution time**: ```bash\\ncurl http://localhost:9091/config/render ... | jq \'.execution_time_ms\'\\n```plaintext ### Rendering Hangs **Set timeout** (depends on client): ```bash\\ncurl --max-time 10 -X POST http://localhost:9091/config/render ...\\n```plaintext **Check daemon logs** for stuck processes. ### Out of Memory **Reduce cache size** (rebuild with modified config) or restart daemon. ## Best Practices 1. **Choose right language for task**: - KCL: Familiar, type-safe, use if already in ecosystem - Nickel: Large configs with lazy evaluation needs - Tera: Simple templating, fastest 2. 
**Use context variables** instead of hardcoding values: ```json \\"context\\": { \\"environment\\": \\"production\\", \\"replica_count\\": 3 } Monitor statistics to understand performance: watch -n 1 \'curl -s http://localhost:9091/config/stats | jq\' Cache warming : Pre-render common configs on startup Error handling : Always check error field in response","breadcrumbs":"Config Rendering Guide » Starting the Daemon","id":"1503","title":"Starting the Daemon"},"1504":{"body":"KCL Documentation Nickel User Manual Tera Template Engine CLI Daemon Architecture: provisioning/platform/cli-daemon/README.md","breadcrumbs":"Config Rendering Guide » See Also","id":"1504","title":"See Also"},"1505":{"body":"","breadcrumbs":"Config Rendering Guide » Quick Reference","id":"1505","title":"Quick Reference"},"1506":{"body":"POST http://localhost:9091/config/render\\n```plaintext ### Request Template ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl|nickel|tera\\", \\"content\\": \\"...\\", \\"context\\": {...}, \\"name\\": \\"optional-name\\" }\'\\n```plaintext ### Quick Examples #### KCL - Simple Config ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = \\\\\\"server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\" }\'\\n```plaintext #### KCL - With Context ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = option(\\\\\\"server_name\\\\\\")\\\\nenvironment = option(\\\\\\"env\\\\\\", default=\\\\\\"dev\\\\\\")\\", \\"context\\": {\\"server_name\\": \\"prod-01\\", \\"env\\": \\"production\\"} }\'\\n```plaintext #### Nickel - Simple Config ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{name = \\\\\\"server\\\\\\", cpu = 4, memory = 8192}\\" }\'\\n```plaintext #### Tera - Template with Loops ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"{% for task in tasks %}{{ task }}\\\\n{% endfor %}\\", \\"context\\": {\\"tasks\\": [\\"kubernetes\\", \\"postgres\\", \\"redis\\"]} }\'\\n```plaintext ### Statistics ```bash\\n# Get stats\\ncurl http://localhost:9091/config/stats # Reset stats\\ncurl -X POST http://localhost:9091/config/stats/reset # Watch stats in real-time\\nwatch -n 1 \'curl -s http://localhost:9091/config/stats | jq\'\\n```plaintext ### Performance Guide | Language | Cold | Cached | Use Case |\\n|----------|------|--------|----------|\\n| **KCL** | 20-50ms | 1-5ms | Type-safe infrastructure configs |\\n| **Nickel** | 30-60ms | 1-5ms | Large configs, lazy evaluation |\\n| **Tera** | 5-20ms | 1-5ms | Simple templating | ### Status Codes | Code | Meaning |\\n|------|---------|\\n| 200 | Success (check `error` field for evaluation errors) |\\n| 400 | Invalid request |\\n| 500 | Daemon error | ### Response Fields ```json\\n{ \\"rendered\\": \\"...output or null on error\\", \\"error\\": \\"...error message or null on success\\", \\"language\\": \\"kcl|nickel|tera\\", \\"execution_time_ms\\": 23\\n}\\n```plaintext ### Languages Comparison #### KCL ```kcl\\nname = \\"server\\"\\ntype = \\"web\\"\\ncpu = 4\\nmemory = 8192 tags = { env = \\"prod\\" team = 
\\"platform\\"\\n}\\n```plaintext **Pros**: Familiar syntax, type-safe, existing patterns\\n**Cons**: Eager evaluation, verbose for simple cases #### Nickel ```nickel\\n{ name = \\"server\\", type = \\"web\\", cpu = 4, memory = 8192, tags = { env = \\"prod\\", team = \\"platform\\" }\\n}\\n```plaintext **Pros**: Lazy evaluation, functional style, compact\\n**Cons**: Different paradigm, smaller ecosystem #### Tera ```jinja2\\nServer: {{ name }}\\nType: {{ type | upper }}\\n{% for tag_name, tag_value in tags %}\\n- {{ tag_name }}: {{ tag_value }}\\n{% endfor %}\\n```plaintext **Pros**: Fast, simple, familiar template syntax\\n**Cons**: No validation, template-only ### Caching **How it works**: SHA256(content + context) → cached result **Cache hit**: < 5ms\\n**Cache miss**: 20-60ms (language dependent)\\n**Cache size**: 100 entries per language **Cache stats**: ```bash\\ncurl -s http://localhost:9091/config/stats | jq \'{ kcl_cache_hits: .kcl_cache_hits, kcl_renders: .kcl_renders, kcl_hit_ratio: (.kcl_cache_hits / .kcl_renders * 100)\\n}\'\\n```plaintext ### Common Tasks #### Batch Rendering ```bash\\n#!/bin/bash\\nfor config in configs/*.k; do curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \\"$(jq -n --arg content \\\\\\"$(cat $config)\\\\\\" \\\\ \'{language: \\"kcl\\", content: $content}\')\\"\\ndone\\n```plaintext #### Validate Before Rendering ```bash\\n# KCL validation\\nkcl eval --strict my-config.k # Nickel validation (via daemon first render)\\ncurl ... # catches errors in response\\n```plaintext #### Monitor Cache Performance ```bash\\n#!/bin/bash\\nwhile true; do STATS=$(curl -s http://localhost:9091/config/stats) HIT_RATIO=$( echo \\"$STATS\\" | jq \'.kcl_cache_hits / .kcl_renders * 100\') echo \\"Cache hit ratio: ${HIT_RATIO}%\\" sleep 5\\ndone\\n```plaintext ### Error Examples #### Missing Binary ```json\\n{ \\"error\\": \\"KCL binary not found. 
Install KCL or set KCL_PATH\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: `export KCL_PATH=/path/to/kcl` or install KCL #### Syntax Error ```json\\n{ \\"error\\": \\"KCL evaluation failed: Parse error at line 3\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: Check KCL syntax, run `kcl eval file.k` directly #### Missing Variable ```json\\n{ \\"error\\": \\"KCL evaluation failed: undefined variable \'name\'\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: Provide in `context` or use `option()` with default ### Integration Quick Start #### Nushell ```nushell\\nuse lib_provisioning let config = open server.k | into string\\nlet result = (curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d {language: \\"kcl\\", content: $config} | from json) if ($result.error != null) { error $result.error\\n} else { print $result.rendered\\n}\\n```plaintext #### Python ```python\\nimport requests resp = requests.post(\\"http://localhost:9091/config/render\\", json={ \\"language\\": \\"kcl\\", \\"content\\": \'name = \\"server\\"\', \\"context\\": {}\\n})\\nresult = resp.json()\\nprint(result[\\"rendered\\"] if not result[\\"error\\"] else f\\"Error: {result[\'error\']}\\")\\n```plaintext #### Bash ```bash\\nrender() { curl -s -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \\"$1\\" | jq \'.\'\\n} # Usage\\nrender \'{\\"language\\":\\"kcl\\",\\"content\\":\\"name = \\\\\\"server\\\\\\"\\"}\'\\n```plaintext ### Environment Variables ```bash\\n# Daemon configuration\\nPROVISIONING_LOG_LEVEL=debug # Log level\\nDAEMON_BIND=127.0.0.1:9091 # Bind address\\nNUSHELL_PATH=/usr/local/bin/nu # Nushell binary\\nKCL_PATH=/usr/local/bin/kcl # KCL binary\\nNICKEL_PATH=/usr/local/bin/nickel # Nickel binary\\n```plaintext ### Useful Commands ```bash\\n# Health check\\ncurl http://localhost:9091/health # Daemon info\\ncurl http://localhost:9091/info # View stats\\ncurl http://localhost:9091/config/stats | jq \'.\' # Pretty print stats\\ncurl -s http://localhost:9091/config/stats | jq \'{ total: .total_renders, success_rate: (.successful_renders / .total_renders * 100), avg_time: .average_time_ms, cache_hit_rate: ((.kcl_cache_hits + .nickel_cache_hits) / (.kcl_renders + .nickel_renders) * 100)\\n}\'\\n```plaintext ### Troubleshooting Checklist - [ ] Daemon running? `curl http://localhost:9091/health`\\n- [ ] Correct content for language?\\n- [ ] Valid JSON in context?\\n- [ ] Binary available? (KCL/Nickel)\\n- [ ] Check log level? `PROVISIONING_LOG_LEVEL=debug`\\n- [ ] Cache hit rate? `/config/stats`\\n- [ ] Error in response? 
Check `error` field","breadcrumbs":"Config Rendering Guide » API Endpoint","id":"1506","title":"API Endpoint"},"1507":{"body":"This comprehensive guide explains the configuration system of the Infrastructure Automation platform, helping you understand, customize, and manage all configuration aspects.","breadcrumbs":"Configuration » Configuration Guide","id":"1507","title":"Configuration Guide"},"1508":{"body":"Understanding the configuration hierarchy and precedence Working with different configuration file types Configuration interpolation and templating Environment-specific configurations User customization and overrides Validation and troubleshooting Advanced configuration patterns","breadcrumbs":"Configuration » What You\'ll Learn","id":"1508","title":"What You\'ll Learn"},"1509":{"body":"","breadcrumbs":"Configuration » Configuration Architecture","id":"1509","title":"Configuration Architecture"},"151":{"body":"If you plan to use platform services (orchestrator, control center, etc.): # Build platform services\\ncd provisioning/platform # Build orchestrator\\ncd orchestrator\\ncargo build --release\\ncd .. # Build control center\\ncd control-center\\ncargo build --release\\ncd .. # Build KMS service\\ncd kms-service\\ncargo build --release\\ncd .. # Verify builds\\nls */target/release/","breadcrumbs":"Installation Steps » Optional: Install Platform Services","id":"151","title":"Optional: Install Platform Services"},"1510":{"body":"The system uses a layered configuration approach with clear precedence rules: Runtime CLI arguments (highest precedence) ↓ (overrides)\\nEnvironment Variables ↓ (overrides)\\nInfrastructure Config (./.provisioning.toml) ↓ (overrides)\\nProject Config (./provisioning.toml) ↓ (overrides)\\nUser Config (~/.config/provisioning/config.toml) ↓ (overrides)\\nSystem Defaults (config.defaults.toml) (lowest precedence)\\n```plaintext ### Configuration File Types | File Type | Purpose | Location | Format |\\n|-----------|---------|----------|--------|\\n| **System Defaults** | Base system configuration | `config.defaults.toml` | TOML |\\n| **User Config** | Personal preferences | `~/.config/provisioning/config.toml` | TOML |\\n| **Project Config** | Project-wide settings | `./provisioning.toml` | TOML |\\n| **Infrastructure Config** | Infra-specific settings | `./.provisioning.toml` | TOML |\\n| **Environment Config** | Environment overrides | `config.{env}.toml` | TOML |\\n| **Infrastructure Definitions** | Infrastructure as Code | `settings.k`, `*.k` | KCL | ## Understanding Configuration Sections ### Core System Configuration ```toml\\n[core]\\nversion = \\"1.0.0\\" # System version\\nname = \\"provisioning\\" # System identifier\\n```plaintext ### Path Configuration The most critical configuration section that defines where everything is located: ```toml\\n[paths]\\n# Base directory - all other paths derive from this\\nbase = \\"/usr/local/provisioning\\" # Derived paths (usually don\'t need to change these)\\nkloud = \\"{{paths.base}}/infra\\"\\nproviders = \\"{{paths.base}}/providers\\"\\ntaskservs = \\"{{paths.base}}/taskservs\\"\\nclusters = \\"{{paths.base}}/cluster\\"\\nresources = \\"{{paths.base}}/resources\\"\\ntemplates = \\"{{paths.base}}/templates\\"\\ntools = \\"{{paths.base}}/tools\\"\\ncore = \\"{{paths.base}}/core\\" [paths.files]\\n# Important file locations\\nsettings_file = \\"settings.k\\"\\nkeys = \\"{{paths.base}}/keys.yaml\\"\\nrequirements = \\"{{paths.base}}/requirements.yaml\\"\\n```plaintext ### Debug and Logging 
```toml\\n[debug]\\nenabled = false # Enable debug mode\\nmetadata = false # Show internal metadata\\ncheck = false # Default to check mode (dry run)\\nremote = false # Enable remote debugging\\nlog_level = \\"info\\" # Logging verbosity\\nno_terminal = false # Disable terminal features\\n```plaintext ### Output Configuration ```toml\\n[output]\\nfile_viewer = \\"less\\" # File viewer command\\nformat = \\"yaml\\" # Default output format (json, yaml, toml, text)\\n```plaintext ### Provider Configuration ```toml\\n[providers]\\ndefault = \\"local\\" # Default provider [providers.aws]\\napi_url = \\"\\" # AWS API endpoint (blank = default)\\nauth = \\"\\" # Authentication method\\ninterface = \\"CLI\\" # Interface type (CLI or API) [providers.upcloud]\\napi_url = \\"https://api.upcloud.com/1.3\\"\\nauth = \\"\\"\\ninterface = \\"CLI\\" [providers.local]\\napi_url = \\"\\"\\nauth = \\"\\"\\ninterface = \\"CLI\\"\\n```plaintext ### Encryption (SOPS) Configuration ```toml\\n[sops]\\nuse_sops = true # Enable SOPS encryption\\nconfig_path = \\"{{paths.base}}/.sops.yaml\\" # Search paths for Age encryption keys\\nkey_search_paths = [ \\"{{paths.base}}/keys/age.txt\\", \\"~/.config/sops/age/keys.txt\\"\\n]\\n```plaintext ## Configuration Interpolation The system supports powerful interpolation patterns for dynamic configuration values. ### Basic Interpolation Patterns #### Path Interpolation ```toml\\n# Reference other path values\\ntemplates = \\"{{paths.base}}/my-templates\\"\\ncustom_path = \\"{{paths.providers}}/custom\\"\\n```plaintext #### Environment Variable Interpolation ```toml\\n# Access environment variables\\nuser_home = \\"{{env.HOME}}\\"\\ncurrent_user = \\"{{env.USER}}\\"\\ncustom_path = \\"{{env.CUSTOM_PATH || /default/path}}\\" # With fallback\\n```plaintext #### Date/Time Interpolation ```toml\\n# Dynamic date/time values\\nlog_file = \\"{{paths.base}}/logs/app-{{now.date}}.log\\"\\nbackup_dir = \\"{{paths.base}}/backups/{{now.timestamp}}\\"\\n```plaintext #### Git Information Interpolation ```toml\\n# Git repository information\\ndeployment_branch = \\"{{git.branch}}\\"\\nversion_tag = \\"{{git.tag}}\\"\\ncommit_hash = \\"{{git.commit}}\\"\\n```plaintext #### Cross-Section References ```toml\\n# Reference values from other sections\\ndatabase_host = \\"{{providers.aws.database_endpoint}}\\"\\napi_key = \\"{{sops.decrypted_key}}\\"\\n```plaintext ### Advanced Interpolation #### Function Calls ```toml\\n# Built-in functions\\nconfig_path = \\"{{path.join(env.HOME, .config, provisioning)}}\\"\\nsafe_name = \\"{{str.lower(str.replace(project.name, \' \', \'-\'))}}\\"\\n```plaintext #### Conditional Expressions ```toml\\n# Conditional logic\\ndebug_level = \\"{{debug.enabled && \'debug\' || \'info\'}}\\"\\nstorage_path = \\"{{env.STORAGE_PATH || path.join(paths.base, \'storage\')}}\\"\\n```plaintext ### Interpolation Examples ```toml\\n[paths]\\nbase = \\"/opt/provisioning\\"\\nworkspace = \\"{{env.HOME}}/provisioning-workspace\\"\\ncurrent_project = \\"{{paths.workspace}}/{{env.PROJECT_NAME || \'default\'}}\\" [deployment]\\nenvironment = \\"{{env.DEPLOY_ENV || \'development\'}}\\"\\ntimestamp = \\"{{now.iso8601}}\\"\\nversion = \\"{{git.tag || git.commit}}\\" [database]\\nconnection_string = \\"postgresql://{{env.DB_USER}}:{{env.DB_PASS}}@{{env.DB_HOST || \'localhost\'}}/{{env.DB_NAME}}\\" [notifications]\\nslack_channel = \\"#{{env.TEAM_NAME || \'general\'}}-notifications\\"\\nemail_subject = \\"Deployment {{deployment.environment}} - {{deployment.timestamp}}\\"\\n```plaintext ## 
Environment-Specific Configuration ### Environment Detection The system automatically detects the environment using: 1. **PROVISIONING_ENV** environment variable\\n2. **Git branch patterns** (dev, staging, main/master)\\n3. **Directory patterns** (development, staging, production)\\n4. **Explicit configuration** ### Environment Configuration Files Create environment-specific configurations: #### Development Environment (`config.dev.toml`) ```toml\\n[core]\\nname = \\"provisioning-dev\\" [debug]\\nenabled = true\\nlog_level = \\"debug\\"\\nmetadata = true [providers]\\ndefault = \\"local\\" [cache]\\nenabled = false # Disable caching for development [notifications]\\nenabled = false # No notifications in dev\\n```plaintext #### Testing Environment (`config.test.toml`) ```toml\\n[core]\\nname = \\"provisioning-test\\" [debug]\\nenabled = true\\ncheck = true # Default to check mode in testing\\nlog_level = \\"info\\" [providers]\\ndefault = \\"local\\" [infrastructure]\\nauto_cleanup = true # Clean up test resources\\nresource_prefix = \\"test-{{git.branch}}-\\"\\n```plaintext #### Production Environment (`config.prod.toml`) ```toml\\n[core]\\nname = \\"provisioning-prod\\" [debug]\\nenabled = false\\nlog_level = \\"warn\\" [providers]\\ndefault = \\"aws\\" [security]\\nrequire_approval = true\\naudit_logging = true\\nencrypt_backups = true [notifications]\\nenabled = true\\ncritical_only = true\\n```plaintext ### Environment Switching ```bash\\n# Set environment for session\\nexport PROVISIONING_ENV=dev\\nprovisioning env # Use environment for single command\\nprovisioning --environment prod server create # Switch environment permanently\\nprovisioning env set prod\\n```plaintext ## User Configuration Customization ### Creating Your User Configuration ```bash\\n# Initialize user configuration from template\\nprovisioning init config # Or copy and customize\\ncp config-examples/config.user.toml ~/.config/provisioning/config.toml\\n```plaintext ### Common User Customizations #### Developer Setup ```toml\\n[paths]\\nbase = \\"/Users/alice/dev/provisioning\\" [debug]\\nenabled = true\\nlog_level = \\"debug\\" [providers]\\ndefault = \\"local\\" [output]\\nformat = \\"json\\"\\nfile_viewer = \\"code\\" [sops]\\nkey_search_paths = [ \\"/Users/alice/.config/sops/age/keys.txt\\"\\n]\\n```plaintext #### Operations Engineer Setup ```toml\\n[paths]\\nbase = \\"/opt/provisioning\\" [debug]\\nenabled = false\\nlog_level = \\"info\\" [providers]\\ndefault = \\"aws\\" [output]\\nformat = \\"yaml\\" [notifications]\\nenabled = true\\nemail = \\"ops-team@company.com\\"\\n```plaintext #### Team Lead Setup ```toml\\n[paths]\\nbase = \\"/home/teamlead/provisioning\\" [debug]\\nenabled = true\\nmetadata = true\\nlog_level = \\"info\\" [providers]\\ndefault = \\"upcloud\\" [security]\\nrequire_confirmation = true\\naudit_logging = true [sops]\\nkey_search_paths = [ \\"/secure/keys/team-lead.txt\\", \\"~/.config/sops/age/keys.txt\\"\\n]\\n```plaintext ## Project-Specific Configuration ### Project Configuration File (`provisioning.toml`) ```toml\\n[project]\\nname = \\"web-application\\"\\ndescription = \\"Main web application infrastructure\\"\\nversion = \\"2.1.0\\"\\nteam = \\"platform-team\\" [paths]\\n# Project-specific path overrides\\ninfra = \\"./infrastructure\\"\\ntemplates = \\"./custom-templates\\" [defaults]\\n# Project defaults\\nprovider = \\"aws\\"\\nregion = \\"us-west-2\\"\\nenvironment = \\"development\\" [cost_controls]\\nmax_monthly_budget = 5000.00\\nalert_threshold = 0.8 
[compliance]\\nrequired_tags = [\\"team\\", \\"environment\\", \\"cost-center\\"]\\nencryption_required = true\\nbackup_required = true [notifications]\\nslack_webhook = \\"https://hooks.slack.com/services/...\\"\\nteam_email = \\"platform-team@company.com\\"\\n```plaintext ### Infrastructure-Specific Configuration (`.provisioning.toml`) ```toml\\n[infrastructure]\\nname = \\"production-web-app\\"\\nenvironment = \\"production\\"\\nregion = \\"us-west-2\\" [overrides]\\n# Infrastructure-specific overrides\\ndebug.enabled = false\\ndebug.log_level = \\"error\\"\\ncache.enabled = true [scaling]\\nauto_scaling_enabled = true\\nmin_instances = 3\\nmax_instances = 20 [security]\\nvpc_id = \\"vpc-12345678\\"\\nsubnet_ids = [\\"subnet-12345678\\", \\"subnet-87654321\\"]\\nsecurity_group_id = \\"sg-12345678\\" [monitoring]\\nenabled = true\\nretention_days = 90\\nalerting_enabled = true\\n```plaintext ## Configuration Validation ### Built-in Validation ```bash\\n# Validate current configuration\\nprovisioning validate config # Detailed validation with warnings\\nprovisioning validate config --detailed # Strict validation mode\\nprovisioning validate config strict # Validate specific environment\\nprovisioning validate config --environment prod\\n```plaintext ### Custom Validation Rules Create custom validation in your configuration: ```toml\\n[validation]\\n# Custom validation rules\\nrequired_sections = [\\"paths\\", \\"providers\\", \\"debug\\"]\\nrequired_env_vars = [\\"AWS_REGION\\", \\"PROJECT_NAME\\"]\\nforbidden_values = [\\"password123\\", \\"admin\\"] [validation.paths]\\n# Path validation rules\\nbase_must_exist = true\\nwritable_required = [\\"paths.base\\", \\"paths.cache\\"] [validation.security]\\n# Security validation\\nrequire_encryption = true\\nmin_key_length = 32\\n```plaintext ## Troubleshooting Configuration ### Common Configuration Issues #### Issue 1: Path Not Found Errors ```bash\\n# Problem: Base path doesn\'t exist\\n# Check current configuration\\nprovisioning env | grep paths.base # Verify path exists\\nls -la /path/shown/above # Fix: Update user config\\nnano ~/.config/provisioning/config.toml\\n# Set correct paths.base = \\"/correct/path\\"\\n```plaintext #### Issue 2: Interpolation Failures ```bash\\n# Problem: {{env.VARIABLE}} not resolving\\n# Check environment variables\\nenv | grep VARIABLE # Check interpolation\\nprovisioning validate interpolation test # Debug interpolation\\nprovisioning --debug validate interpolation validate\\n```plaintext #### Issue 3: SOPS Encryption Errors ```bash\\n# Problem: Cannot decrypt SOPS files\\n# Check SOPS configuration\\nprovisioning sops config # Verify key files\\nls -la ~/.config/sops/age/keys.txt # Test decryption\\nsops -d encrypted-file.k\\n```plaintext #### Issue 4: Provider Authentication ```bash\\n# Problem: Provider authentication failed\\n# Check provider configuration\\nprovisioning show providers # Test provider connection\\nprovisioning provider test aws # Verify credentials\\naws configure list # For AWS\\n```plaintext ### Configuration Debugging ```bash\\n# Show current configuration hierarchy\\nprovisioning config show --hierarchy # Show configuration sources\\nprovisioning config sources # Show interpolated values\\nprovisioning config interpolated # Debug specific section\\nprovisioning config debug paths\\nprovisioning config debug providers\\n```plaintext ### Configuration Reset ```bash\\n# Reset to defaults\\nprovisioning config reset # Reset specific section\\nprovisioning config reset providers # Backup 
current config before reset\\nprovisioning config backup\\n```plaintext ## Advanced Configuration Patterns ### Dynamic Configuration Loading ```toml\\n[dynamic]\\n# Load configuration from external sources\\nconfig_urls = [ \\"https://config.company.com/provisioning/base.toml\\", \\"file:///etc/provisioning/shared.toml\\"\\n] # Conditional configuration loading\\nload_if_exists = [ \\"./local-overrides.toml\\", \\"../shared/team-config.toml\\"\\n]\\n```plaintext ### Configuration Templating ```toml\\n[templates]\\n# Template-based configuration\\nbase_template = \\"aws-web-app\\"\\ntemplate_vars = { region = \\"us-west-2\\", instance_type = \\"t3.medium\\", team_name = \\"platform\\" }\\n# Template inheritance\\nextends = [\\"base-web\\", \\"monitoring\\", \\"security\\"]\\n```plaintext ### Multi-Region Configuration ```toml\\n[regions]\\nprimary = \\"us-west-2\\"\\nsecondary = \\"us-east-1\\" [regions.us-west-2]\\nproviders.aws.region = \\"us-west-2\\"\\navailability_zones = [\\"us-west-2a\\", \\"us-west-2b\\", \\"us-west-2c\\"] [regions.us-east-1]\\nproviders.aws.region = \\"us-east-1\\"\\navailability_zones = [\\"us-east-1a\\", \\"us-east-1b\\", \\"us-east-1c\\"]\\n```plaintext ### Configuration Profiles ```toml\\n[profiles]\\nactive = \\"development\\" [profiles.development]\\ndebug.enabled = true\\nproviders.default = \\"local\\"\\ncost_controls.enabled = false [profiles.staging]\\ndebug.enabled = true\\nproviders.default = \\"aws\\"\\ncost_controls.max_budget = 1000.00 [profiles.production]\\ndebug.enabled = false\\nproviders.default = \\"aws\\"\\nsecurity.strict_mode = true\\n```plaintext ## Configuration Management Best Practices ### 1. Version Control ```bash\\n# Track configuration changes\\ngit add provisioning.toml\\ngit commit -m \\"feat(config): add production settings\\" # Use branches for configuration experiments\\ngit checkout -b config/new-provider\\n```plaintext ### 2. Documentation ```toml\\n# Document your configuration choices\\n[paths]\\n# Using custom base path for team shared installation\\nbase = \\"/opt/team-provisioning\\" [debug]\\n# Debug enabled for troubleshooting infrastructure issues\\nenabled = true\\nlog_level = \\"debug\\" # Temporary while debugging network problems\\n```plaintext ### 3. Validation ```bash\\n# Always validate before committing\\nprovisioning validate config\\ngit add . && git commit -m \\"update config\\"\\n```plaintext ### 4. Backup ```bash\\n# Regular configuration backups\\nprovisioning config export --format yaml > config-backup-$(date +%Y%m%d).yaml # Automated backup script\\necho \'0 2 * * * provisioning config export > ~/backups/config-$(date +\\\\%Y\\\\%m\\\\%d).yaml\' | crontab -\\n```plaintext ### 5. 
Security - Never commit sensitive values in plain text\\n- Use SOPS for encrypting secrets\\n- Rotate encryption keys regularly\\n- Audit configuration access ```bash\\n# Encrypt sensitive configuration\\nsops -e settings.k > settings.encrypted.k # Audit configuration changes\\ngit log -p -- provisioning.toml\\n```plaintext ## Configuration Migration ### Migrating from Environment Variables ```bash\\n# Old: Environment variables\\nexport PROVISIONING_DEBUG=true\\nexport PROVISIONING_PROVIDER=aws # New: Configuration file\\n[debug]\\nenabled = true [providers]\\ndefault = \\"aws\\"\\n```plaintext ### Upgrading Configuration Format ```bash\\n# Check for configuration updates needed\\nprovisioning config check-version # Migrate to new format\\nprovisioning config migrate --from 1.0 --to 2.0 # Validate migrated configuration\\nprovisioning validate config\\n```plaintext ## Next Steps Now that you understand the configuration system: 1. **Create your user configuration**: `provisioning init config`\\n2. **Set up environment-specific configs** for your workflow\\n3. **Learn CLI commands**: [CLI Reference](cli-reference.md)\\n4. **Practice with examples**: [Examples and Tutorials](examples/)\\n5. **Troubleshoot issues**: [Troubleshooting Guide](troubleshooting-guide.md) You now have complete control over how provisioning behaves in your environment!","breadcrumbs":"Configuration » Configuration Hierarchy","id":"1510","title":"Configuration Hierarchy"},"1511":{"body":"Version : 1.0.0 Date : 2025-10-09 Status : Production Ready","breadcrumbs":"Authentication Layer Guide » Authentication Layer Implementation Guide","id":"1511","title":"Authentication Layer Implementation Guide"},"1512":{"body":"A comprehensive authentication layer has been integrated into the provisioning system to secure sensitive operations. 
The system uses nu_plugin_auth for JWT authentication with MFA support, providing enterprise-grade security with graceful user experience.","breadcrumbs":"Authentication Layer Guide » Overview","id":"1512","title":"Overview"},"1513":{"body":"","breadcrumbs":"Authentication Layer Guide » Key Features","id":"1513","title":"Key Features"},"1514":{"body":"RS256 asymmetric signing Access tokens (15min) + refresh tokens (7d) OS keyring storage (macOS Keychain, Windows Credential Manager, Linux Secret Service)","breadcrumbs":"Authentication Layer Guide » ✅ JWT Authentication","id":"1514","title":"✅ JWT Authentication"},"1515":{"body":"TOTP (Google Authenticator, Authy) WebAuthn/FIDO2 (YubiKey, Touch ID) Required for production and destructive operations","breadcrumbs":"Authentication Layer Guide » ✅ MFA Support","id":"1515","title":"✅ MFA Support"},"1516":{"body":"Production environment : Requires authentication + MFA Destructive operations : Requires authentication + MFA (delete, destroy) Development/test : Requires authentication, allows skip with flag Check mode : Always bypasses authentication (dry-run operations)","breadcrumbs":"Authentication Layer Guide » ✅ Security Policies","id":"1516","title":"✅ Security Policies"},"1517":{"body":"All authenticated operations logged User, timestamp, operation details MFA verification status JSON format for easy parsing","breadcrumbs":"Authentication Layer Guide » ✅ Audit Logging","id":"1517","title":"✅ Audit Logging"},"1518":{"body":"Clear instructions for login/MFA Distinct error types (platform auth vs provider auth) Helpful guidance for setup","breadcrumbs":"Authentication Layer Guide » ✅ User-Friendly Error Messages","id":"1518","title":"✅ User-Friendly Error Messages"},"1519":{"body":"","breadcrumbs":"Authentication Layer Guide » Quick Start","id":"1519","title":"Quick Start"},"152":{"body":"Use the interactive installer for a guided setup: # Build the installer\\ncd provisioning/platform/installer\\ncargo build --release # Run interactive installer\\n./target/release/provisioning-installer # Or headless installation\\n./target/release/provisioning-installer --headless --mode solo --yes","breadcrumbs":"Installation Steps » Optional: Install Platform with Installer","id":"152","title":"Optional: Install Platform with Installer"},"1520":{"body":"# Interactive login (password prompt)\\nprovisioning auth login # Save credentials to keyring\\nprovisioning auth login --save # Custom control center URL\\nprovisioning auth login admin --url http://control.example.com:9080\\n```plaintext ### 2. Enroll MFA (First Time) ```bash\\n# Enroll TOTP (Google Authenticator)\\nprovisioning auth mfa enroll totp # Scan QR code with authenticator app\\n# Or enter secret manually\\n```plaintext ### 3. Verify MFA (For Sensitive Operations) ```bash\\n# Get 6-digit code from authenticator app\\nprovisioning auth mfa verify --code 123456\\n```plaintext ### 4. 
Check Authentication Status ```bash\\n# View current authentication status\\nprovisioning auth status # Verify token is valid\\nprovisioning auth verify\\n```plaintext --- ## Protected Operations ### Server Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning server create web-01 # Auth required\\nprovisioning server create web-01 --check # Auth skipped (check mode) # ❌ DELETE - Requires auth + MFA\\nprovisioning server delete web-01 # Auth + MFA required\\nprovisioning server delete web-01 --check # Auth skipped (check mode) # 📖 READ - No auth required\\nprovisioning server list # No auth required\\nprovisioning server ssh web-01 # No auth required\\n```plaintext ### Task Service Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning taskserv create kubernetes # Auth required\\nprovisioning taskserv create kubernetes --check # Auth skipped # ❌ DELETE - Requires auth + MFA\\nprovisioning taskserv delete kubernetes # Auth + MFA required # 📖 READ - No auth required\\nprovisioning taskserv list # No auth required\\n```plaintext ### Cluster Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning cluster create buildkit # Auth required\\nprovisioning cluster create buildkit --check # Auth skipped # ❌ DELETE - Requires auth + MFA\\nprovisioning cluster delete buildkit # Auth + MFA required\\n```plaintext ### Batch Workflows ```bash\\n# ✅ SUBMIT - Requires auth (prod: +MFA)\\nprovisioning batch submit workflow.k # Auth required\\nprovisioning batch submit workflow.k --skip-auth # Auth skipped (if allowed) # 📖 READ - No auth required\\nprovisioning batch list # No auth required\\nprovisioning batch status # No auth required\\n```plaintext --- ## Configuration ### Security Settings (`config.defaults.toml`) ```toml\\n[security]\\nrequire_auth = true # Enable authentication system\\nrequire_mfa_for_production = true # MFA for prod environment\\nrequire_mfa_for_destructive = true # MFA for delete operations\\nauth_timeout = 3600 # Token timeout (1 hour)\\naudit_log_path = \\"{{paths.base}}/logs/audit.log\\" [security.bypass]\\nallow_skip_auth = false # Allow PROVISIONING_SKIP_AUTH env var [plugins]\\nauth_enabled = true # Enable nu_plugin_auth [platform.control_center]\\nurl = \\"http://localhost:9080\\" # Control center URL\\n```plaintext ### Environment-Specific Configuration ```toml\\n# Development\\n[environments.dev]\\nsecurity.bypass.allow_skip_auth = true # Allow auth bypass in dev # Production\\n[environments.prod]\\nsecurity.bypass.allow_skip_auth = false # Never allow bypass\\nsecurity.require_mfa_for_production = true\\n```plaintext --- ## Authentication Bypass (Dev/Test Only) ### Environment Variable Method ```bash\\n# Export environment variable (dev/test only)\\nexport PROVISIONING_SKIP_AUTH=true # Run operations without authentication\\nprovisioning server create web-01 # Unset when done\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext ### Per-Command Flag ```bash\\n# Some commands support --skip-auth flag\\nprovisioning batch submit workflow.k --skip-auth\\n```plaintext ### Check Mode (Always Bypasses Auth) ```bash\\n# Check mode is always allowed without auth\\nprovisioning server create web-01 --check\\nprovisioning taskserv create kubernetes --check\\n```plaintext ⚠️ **WARNING**: Auth bypass should ONLY be used in development/testing environments. Production systems should have `security.bypass.allow_skip_auth = false`. 
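The bypass rules and the protected operations above reduce to one small decision: check mode always skips authentication, every other mutating operation needs a valid token, and production or destructive operations additionally need MFA. A minimal sketch of that decision in Nushell (the `auth-requirements` helper is hypothetical, not the actual `auth.nu` code): ```nushell\\n# Sketch only: mirrors the documented policy, not the real plugin wrapper\\ndef auth-requirements [operation: string, environment: string, check_mode: bool] {\\n  if $check_mode { return { auth: false, mfa: false } }  # dry runs always bypass auth\\n  let destructive = ($operation | str contains \\"delete\\") or ($operation | str contains \\"destroy\\")\\n  let production = ($environment == \\"prod\\")\\n  # Everything else needs a valid token; prod and destructive ops also need MFA\\n  { auth: true, mfa: ($production or $destructive) }\\n}\\n```plaintext For example, under these assumptions `auth-requirements \\"server delete\\" \\"dev\\" false` would report that both auth and MFA are required, while any call with `check_mode = true` reports neither. 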
--- ## Error Messages ### Not Authenticated ```plaintext\\n❌ Authentication Required Operation: server create web-01\\nYou must be logged in to perform this operation. To login: provisioning auth login Note: Your credentials will be securely stored in the system keyring.\\n```plaintext **Solution**: Run `provisioning auth login ` --- ### MFA Required ```plaintext\\n❌ MFA Verification Required Operation: server delete web-01\\nReason: destructive operation (delete/destroy) To verify MFA: 1. Get code from your authenticator app 2. Run: provisioning auth mfa verify --code <6-digit-code> Don\'t have MFA set up? Run: provisioning auth mfa enroll totp\\n```plaintext **Solution**: Run `provisioning auth mfa verify --code 123456` --- ### Token Expired ```plaintext\\n❌ Authentication Required Operation: server create web-02\\nYou must be logged in to perform this operation. Error: Token verification failed\\n```plaintext **Solution**: Token expired, re-login with `provisioning auth login ` --- ## Audit Logging All authenticated operations are logged to the audit log file with the following information: ```json\\n{ \\"timestamp\\": \\"2025-10-09 14:32:15\\", \\"user\\": \\"admin\\", \\"operation\\": \\"server_create\\", \\"details\\": { \\"hostname\\": \\"web-01\\", \\"infra\\": \\"production\\", \\"environment\\": \\"prod\\", \\"orchestrated\\": false }, \\"mfa_verified\\": true\\n}\\n```plaintext ### Viewing Audit Logs ```bash\\n# View raw audit log\\ncat provisioning/logs/audit.log # Filter by user\\ncat provisioning/logs/audit.log | jq \'. | select(.user == \\"admin\\")\' # Filter by operation type\\ncat provisioning/logs/audit.log | jq \'. | select(.operation == \\"server_create\\")\' # Filter by date\\ncat provisioning/logs/audit.log | jq \'. | select(.timestamp | startswith(\\"2025-10-09\\"))\'\\n```plaintext --- ## Integration with Control Center The authentication system integrates with the provisioning platform\'s control center REST API: - **POST /api/auth/login** - Login with credentials\\n- **POST /api/auth/logout** - Revoke tokens\\n- **POST /api/auth/verify** - Verify token validity\\n- **GET /api/auth/sessions** - List active sessions\\n- **POST /api/mfa/enroll** - Enroll MFA device\\n- **POST /api/mfa/verify** - Verify MFA code ### Starting Control Center ```bash\\n# Start control center (required for authentication)\\ncd provisioning/platform/control-center\\ncargo run --release\\n```plaintext Or use the orchestrator which includes control center: ```bash\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background\\n```plaintext --- ## Testing Authentication ### Manual Testing ```bash\\n# 1. Start control center\\ncd provisioning/platform/control-center\\ncargo run --release & # 2. Login\\nprovisioning auth login admin # 3. Try creating server (should succeed if authenticated)\\nprovisioning server create test-server --check # 4. Logout\\nprovisioning auth logout # 5. Try creating server (should fail - not authenticated)\\nprovisioning server create test-server --check\\n```plaintext ### Automated Testing ```bash\\n# Run authentication tests\\nnu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu\\n```plaintext --- ## Troubleshooting ### Plugin Not Available **Error**: `Authentication plugin not available` **Solution**: 1. Check plugin is built: `ls provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/`\\n2. Register plugin: `plugin add target/release/nu_plugin_auth`\\n3. Use plugin: `plugin use auth`\\n4. 
Verify: `which auth` --- ### Control Center Not Running **Error**: `Cannot connect to control center` **Solution**: 1. Start control center: `cd provisioning/platform/control-center && cargo run --release`\\n2. Or use orchestrator: `cd provisioning/platform/orchestrator && ./scripts/start-orchestrator.nu --background`\\n3. Check URL is correct in config: `provisioning config get platform.control_center.url` --- ### MFA Not Working **Error**: `Invalid MFA code` **Solutions**: - Ensure time is synchronized (TOTP codes are time-based)\\n- Code expires every 30 seconds, get fresh code\\n- Verify you\'re using the correct authenticator app entry\\n- Re-enroll if needed: `provisioning auth mfa enroll totp` --- ### Keyring Access Issues **Error**: `Keyring storage unavailable` **macOS**: Grant Keychain access to Terminal/iTerm2 in System Preferences → Security & Privacy **Linux**: Ensure `gnome-keyring` or `kwallet` is running **Windows**: Check Windows Credential Manager is accessible --- ## Architecture ### Authentication Flow ```plaintext\\n┌─────────────┐\\n│ User Command│\\n└──────┬──────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Infrastructure Command Handler │\\n│ (infrastructure.nu) │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Auth Check │\\n│ - Determine operation type │\\n│ - Check if auth required │\\n│ - Check environment (prod/dev) │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Auth Plugin Wrapper │\\n│ (auth.nu) │\\n│ - Call plugin or HTTP fallback │\\n│ - Verify token validity │\\n│ - Check MFA if required │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ nu_plugin_auth │\\n│ - JWT verification (RS256) │\\n│ - Keyring token storage │\\n│ - MFA verification │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Control Center API │\\n│ - /api/auth/verify │\\n│ - /api/mfa/verify │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Operation Execution │\\n│ (servers/create.nu, etc.) 
│\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Audit Logging │\\n│ - Log to audit.log │\\n│ - Include user, timestamp, MFA │\\n└─────────────────────────────────┘\\n```plaintext ### File Structure ```plaintext\\nprovisioning/\\n├── config/\\n│ └── config.defaults.toml # Security configuration\\n├── core/nulib/\\n│ ├── lib_provisioning/plugins/\\n│ │ └── auth.nu # Auth wrapper (550 lines)\\n│ ├── servers/\\n│ │ └── create.nu # Server ops with auth\\n│ ├── workflows/\\n│ │ └── batch.nu # Batch workflows with auth\\n│ └── main_provisioning/commands/\\n│ └── infrastructure.nu # Infrastructure commands with auth\\n├── core/plugins/nushell-plugins/\\n│ └── nu_plugin_auth/ # Native Rust plugin\\n│ ├── src/\\n│ │ ├── main.rs # Plugin implementation\\n│ │ └── helpers.rs # Helper functions\\n│ └── README.md # Plugin documentation\\n├── platform/control-center/ # Control Center (Rust)\\n│ └── src/auth/ # JWT auth implementation\\n└── logs/ └── audit.log # Audit trail\\n```plaintext --- ## Related Documentation - **Security System Overview**: `docs/architecture/ADR-009-security-system-complete.md`\\n- **JWT Authentication**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`\\n- **MFA Implementation**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Plugin README**: `provisioning/core/plugins/nushell-plugins/nu_plugin_auth/README.md`\\n- **Control Center**: `provisioning/platform/control-center/README.md` --- ## Summary of Changes | File | Changes | Lines Added |\\n|------|---------|-------------|\\n| `lib_provisioning/plugins/auth.nu` | Added security policy enforcement functions | +260 |\\n| `config/config.defaults.toml` | Added security configuration section | +19 |\\n| `servers/create.nu` | Added auth check for server creation | +25 |\\n| `workflows/batch.nu` | Added auth check for batch workflow submission | +43 |\\n| `main_provisioning/commands/infrastructure.nu` | Added auth checks for all infrastructure commands | +90 |\\n| `lib_provisioning/providers/interface.nu` | Added authentication guidelines for providers | +65 |\\n| **Total** | **6 files modified** | **~500 lines** | --- ## Best Practices ### For Users 1. **Always login**: Keep your session active to avoid interruptions\\n2. **Use keyring**: Save credentials with `--save` flag for persistence\\n3. **Enable MFA**: Use MFA for production operations\\n4. **Check mode first**: Always test with `--check` before actual operations\\n5. **Monitor audit logs**: Review audit logs regularly for security ### For Developers 1. **Check auth early**: Verify authentication before expensive operations\\n2. **Log operations**: Always log authenticated operations for audit\\n3. **Clear error messages**: Provide helpful guidance for auth failures\\n4. **Respect check mode**: Always skip auth in check/dry-run mode\\n5. **Test both paths**: Test with and without authentication ### For Operators 1. **Production hardening**: Set `allow_skip_auth = false` in production\\n2. **MFA enforcement**: Require MFA for all production environments\\n3. **Monitor audit logs**: Set up log monitoring and alerts\\n4. **Token rotation**: Configure short token timeouts (15min default)\\n5. 
**Backup authentication**: Ensure multiple admins have MFA enrolled --- ## License MIT License - See LICENSE file for details --- ## Quick Reference **Version**: 1.0.0\\n**Last Updated**: 2025-10-09 --- ### Quick Commands #### Login ```bash\\nprovisioning auth login # Interactive password\\nprovisioning auth login --save # Save to keyring\\n```plaintext #### MFA ```bash\\nprovisioning auth mfa enroll totp # Enroll TOTP\\nprovisioning auth mfa verify --code 123456 # Verify code\\n```plaintext #### Status ```bash\\nprovisioning auth status # Show auth status\\nprovisioning auth verify # Verify token\\n```plaintext #### Logout ```bash\\nprovisioning auth logout # Logout current session\\nprovisioning auth logout --all # Logout all sessions\\n```plaintext --- ### Protected Operations | Operation | Auth | MFA (Prod) | MFA (Delete) | Check Mode |\\n|-----------|------|------------|--------------|------------|\\n| `server create` | ✅ | ✅ | ❌ | Skip |\\n| `server delete` | ✅ | ✅ | ✅ | Skip |\\n| `server list` | ❌ | ❌ | ❌ | - |\\n| `taskserv create` | ✅ | ✅ | ❌ | Skip |\\n| `taskserv delete` | ✅ | ✅ | ✅ | Skip |\\n| `cluster create` | ✅ | ✅ | ❌ | Skip |\\n| `cluster delete` | ✅ | ✅ | ✅ | Skip |\\n| `batch submit` | ✅ | ✅ | ❌ | - | --- ### Bypass Authentication (Dev/Test Only) #### Environment Variable ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create test\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext #### Check Mode (Always Allowed) ```bash\\nprovisioning server create prod --check\\nprovisioning taskserv delete k8s --check\\n```plaintext #### Config Flag ```toml\\n[security.bypass]\\nallow_skip_auth = true # Only in dev/test\\n```plaintext --- ### Configuration #### Security Settings ```toml\\n[security]\\nrequire_auth = true\\nrequire_mfa_for_production = true\\nrequire_mfa_for_destructive = true\\nauth_timeout = 3600 [security.bypass]\\nallow_skip_auth = false # true in dev only [plugins]\\nauth_enabled = true [platform.control_center]\\nurl = \\"http://localhost:3000\\"\\n```plaintext --- ### Error Messages #### Not Authenticated ```plaintext\\n❌ Authentication Required\\nOperation: server create web-01\\nTo login: provisioning auth login \\n```plaintext **Fix**: `provisioning auth login ` #### MFA Required ```plaintext\\n❌ MFA Verification Required\\nOperation: server delete web-01\\nReason: destructive operation\\n```plaintext **Fix**: `provisioning auth mfa verify --code ` #### Token Expired ```plaintext\\nError: Token verification failed\\n```plaintext **Fix**: Re-login: `provisioning auth login ` --- ### Troubleshooting | Error | Solution |\\n|-------|----------|\\n| Plugin not available | `plugin add target/release/nu_plugin_auth` |\\n| Control center offline | Start: `cd provisioning/platform/control-center && cargo run` |\\n| Invalid MFA code | Get fresh code (expires in 30s) |\\n| Token expired | Re-login: `provisioning auth login ` |\\n| Keyring access denied | Grant app access in system settings | --- ### Audit Logs ```bash\\n# View audit log\\ncat provisioning/logs/audit.log # Filter by user\\ncat provisioning/logs/audit.log | jq \'. | select(.user == \\"admin\\")\' # Filter by operation\\ncat provisioning/logs/audit.log | jq \'. 
| select(.operation == \\"server_create\\")\'\\n```plaintext --- ### CI/CD Integration #### Option 1: Skip Auth (Dev/Test Only) ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create ci-server\\n```plaintext #### Option 2: Check Mode ```bash\\nprovisioning server create ci-server --check\\n```plaintext #### Option 3: Service Account (Future) ```bash\\nexport PROVISIONING_AUTH_TOKEN=\\"\\"\\nprovisioning server create ci-server\\n```plaintext --- ### Performance | Operation | Auth Overhead |\\n|-----------|---------------|\\n| Server create | ~20ms |\\n| Taskserv create | ~20ms |\\n| Batch submit | ~20ms |\\n| Check mode | 0ms (skipped) | --- ### Related Docs - **Full Guide**: `docs/user/AUTHENTICATION_LAYER_GUIDE.md`\\n- **Implementation**: `AUTHENTICATION_LAYER_IMPLEMENTATION_SUMMARY.md`\\n- **Security ADR**: `docs/architecture/ADR-009-security-system-complete.md` --- **Quick Help**: `provisioning help auth` or `provisioning auth --help` --- **Last Updated**: 2025-10-09\\n**Maintained By**: Security Team --- ## Setup Guide ### Complete Authentication Setup Guide Current Settings (from your config) ```plaintext\\n[security]\\nrequire_auth = true # ✅ Auth is REQUIRED\\nallow_skip_auth = false # ❌ Cannot skip with env var\\nauth_timeout = 3600 # Token valid for 1 hour [platform.control_center]\\nurl = \\"http://localhost:3000\\" # Control Center endpoint\\n```plaintext ### STEP 1: Start Control Center The Control Center is the authentication backend: ```bash\\n# Check if it\'s already running\\ncurl http://localhost:3000/health # If not running, start it\\ncd /Users/Akasha/project-provisioning/provisioning/platform/control-center\\ncargo run --release & # Wait for it to start (may take 30-60 seconds)\\nsleep 30\\ncurl http://localhost:3000/health\\n```plaintext Expected Output: ```json\\n{\\"status\\": \\"healthy\\"}\\n```plaintext ### STEP 2: Find Default Credentials Check for default user setup: ```bash\\n# Look for initialization scripts\\nls -la /Users/Akasha/project-provisioning/provisioning/platform/control-center/ # Check for README or setup instructions\\ncat /Users/Akasha/project-provisioning/provisioning/platform/control-center/README.md # Or check for default config\\ncat /Users/Akasha/project-provisioning/provisioning/platform/control-center/config.toml 2>/dev/null || echo \\"Config not found\\"\\n```plaintext ### STEP 3: Log In Once you have credentials (usually admin / password from setup): ```bash\\n# Interactive login - will prompt for password\\nprovisioning auth login # Or with username\\nprovisioning auth login admin # Verify you\'re logged in\\nprovisioning auth status\\n```plaintext Expected Success Output: ```plaintext\\n✓ Login successful! 
User: admin\\nRole: admin\\nExpires: 2025-10-22T14:30:00Z\\nMFA: false Session active and ready\\n```plaintext ### STEP 4: Now Create Your Server Once authenticated: ```bash\\n# Try server creation again\\nprovisioning server create sgoyol --check # Or with full details\\nprovisioning server create sgoyol --infra workspace_librecloud --check\\n```plaintext ### 🛠️ Alternative: Skip Auth for Development If you want to bypass authentication temporarily for testing: #### Option A: Edit config to allow skip ```bash\\n# You would need to parse and modify TOML - easier to do next option\\n```plaintext #### Option B: Use environment variable (if allowed by config) ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create sgoyol\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext #### Option C: Use check mode (always works, no auth needed) ```bash\\nprovisioning server create sgoyol --check\\n```plaintext #### Option D: Modify config.defaults.toml (permanent for dev) Edit: `provisioning/config/config.defaults.toml` Change line 193 to: ```toml\\nallow_skip_auth = true\\n```plaintext ### 🔍 Troubleshooting | Problem | Solution |\\n|----------------------------|---------------------------------------------------------------------|\\n| Control Center won\'t start | Check port 3000 not in use: `lsof -i :3000` |\\n| \\"No token found\\" error | Login with: `provisioning auth login` |\\n| Login fails | Verify Control Center is running: `curl http://localhost:3000/health` |\\n| Token expired | Re-login: `provisioning auth login` |\\n| Plugin not available | Using HTTP fallback - this is OK, works without plugin |","breadcrumbs":"Authentication Layer Guide » 1. Login to Platform","id":"1520","title":"1. Login to Platform"},"1521":{"body":"Version : 1.0.0 Last Updated : 2025-10-08 Status : Production Ready","breadcrumbs":"Config Encryption Guide » Configuration Encryption Guide","id":"1521","title":"Configuration Encryption Guide"},"1522":{"body":"The Provisioning Platform includes a comprehensive configuration encryption system that provides: Transparent Encryption/Decryption : Configs are automatically decrypted on load Multiple KMS Backends : Age, AWS KMS, HashiCorp Vault, Cosmian KMS Memory-Only Decryption : Secrets never written to disk in plaintext SOPS Integration : Industry-standard encryption with SOPS Sensitive Data Detection : Automatic scanning for unencrypted sensitive data","breadcrumbs":"Config Encryption Guide » Overview","id":"1522","title":"Overview"},"1523":{"body":"Prerequisites Quick Start Configuration Encryption KMS Backends CLI Commands Integration with Config Loader Best Practices Troubleshooting","breadcrumbs":"Config Encryption Guide » Table of Contents","id":"1523","title":"Table of Contents"},"1524":{"body":"","breadcrumbs":"Config Encryption Guide » Prerequisites","id":"1524","title":"Prerequisites"},"1525":{"body":"SOPS (v3.10.2+) # macOS\\nbrew install sops # Linux\\nwget https://github.com/mozilla/sops/releases/download/v3.10.2/sops-v3.10.2.linux.amd64\\nsudo mv sops-v3.10.2.linux.amd64 /usr/local/bin/sops\\nsudo chmod +x /usr/local/bin/sops Age (for Age backend - recommended) # macOS\\nbrew install age # Linux\\napt install age AWS CLI (for AWS KMS backend - optional) brew install awscli","breadcrumbs":"Config Encryption Guide » Required Tools","id":"1525","title":"Required Tools"},"1526":{"body":"# Check SOPS\\nsops --version # Check Age\\nage --version # Check AWS CLI (optional)\\naws --version\\n```plaintext --- ## Quick Start ### 1. 
Initialize Encryption Generate Age keys and create SOPS configuration: ```bash\\nprovisioning config init-encryption --kms age\\n```plaintext This will: - Generate Age key pair in `~/.config/sops/age/keys.txt`\\n- Display your public key (recipient)\\n- Create `.sops.yaml` in your project ### 2. Set Environment Variables Add to your shell profile (`~/.zshrc` or `~/.bashrc`): ```bash\\n# Age encryption\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext Replace the recipient with your actual public key. ### 3. Validate Setup ```bash\\nprovisioning config validate-encryption\\n```plaintext Expected output: ```plaintext\\n✅ Encryption configuration is valid SOPS installed: true Age backend: true KMS enabled: false Errors: 0 Warnings: 0\\n```plaintext ### 4. Encrypt Your First Config ```bash\\n# Create a config with sensitive data\\ncat > workspace/config/secure.yaml < edit -> re-encrypt)\\nprovisioning config edit-secure workspace/config/secure.enc.yaml\\n```plaintext This will: 1. Decrypt the file temporarily\\n2. Open in your `$EDITOR` (vim/nano/etc)\\n3. Re-encrypt when you save and close\\n4. Remove temporary decrypted file ### Check Encryption Status ```bash\\n# Check if file is encrypted\\nprovisioning config is-encrypted workspace/config/secure.yaml # Get detailed encryption info\\nprovisioning config encryption-info workspace/config/secure.yaml\\n```plaintext --- ## KMS Backends ### Age (Recommended for Development) **Pros**: - Simple file-based keys\\n- No external dependencies\\n- Fast and secure\\n- Works offline **Setup**: ```bash\\n# Initialize\\nprovisioning config init-encryption --kms age # Set environment variables\\nexport SOPS_AGE_RECIPIENTS=\\"age1...\\" # Your public key\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext **Encrypt/Decrypt**: ```bash\\nprovisioning config encrypt secrets.yaml --kms age\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### AWS KMS (Production) **Pros**: - Centralized key management\\n- Audit logging\\n- IAM integration\\n- Key rotation **Setup**: 1. Create KMS key in AWS Console\\n2. Configure AWS credentials: ```bash aws configure Update .sops.yaml: creation_rules: - path_regex: .*\\\\.enc\\\\.yaml$ kms: \\"arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms aws-kms\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### HashiCorp Vault (Enterprise) **Pros**: - Dynamic secrets\\n- Centralized secret management\\n- Audit logging\\n- Policy-based access **Setup**: 1. Configure Vault address and token: ```bash export VAULT_ADDR=\\"https://vault.example.com:8200\\" export VAULT_TOKEN=\\"s.xxxxxxxxxxxxxx\\" Update configuration: # workspace/config/provisioning.yaml\\nkms: enabled: true mode: \\"remote\\" vault: address: \\"https://vault.example.com:8200\\" transit_key: \\"provisioning\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms vault\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### Cosmian KMS (Confidential Computing) **Pros**: - Confidential computing support\\n- Zero-knowledge architecture\\n- Post-quantum ready\\n- Cloud-agnostic **Setup**: 1. Deploy Cosmian KMS server\\n2. 
Update configuration: ```toml kms: enabled: true mode: \\"remote\\" remote: endpoint: \\"https://kms.example.com:9998\\" auth_method: \\"certificate\\" client_cert: \\"/path/to/client.crt\\" client_key: \\"/path/to/client.key\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms cosmian\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext --- ## CLI Commands ### Configuration Encryption Commands | Command | Description |\\n|---------|-------------|\\n| `config encrypt ` | Encrypt configuration file |\\n| `config decrypt ` | Decrypt configuration file |\\n| `config edit-secure ` | Edit encrypted file securely |\\n| `config rotate-keys ` | Rotate encryption keys |\\n| `config is-encrypted ` | Check if file is encrypted |\\n| `config encryption-info ` | Show encryption details |\\n| `config validate-encryption` | Validate encryption setup |\\n| `config scan-sensitive ` | Find unencrypted sensitive configs |\\n| `config encrypt-all ` | Encrypt all sensitive configs |\\n| `config init-encryption` | Initialize encryption (generate keys) | ### Examples ```bash\\n# Encrypt workspace config\\nprovisioning config encrypt workspace/config/secure.yaml --in-place # Edit encrypted file\\nprovisioning config edit-secure workspace/config/secure.yaml # Scan for unencrypted sensitive configs\\nprovisioning config scan-sensitive workspace/config --recursive # Encrypt all sensitive configs in workspace\\nprovisioning config encrypt-all workspace/config --kms age --recursive # Check encryption status\\nprovisioning config is-encrypted workspace/config/secure.yaml # Get detailed info\\nprovisioning config encryption-info workspace/config/secure.yaml # Validate setup\\nprovisioning config validate-encryption\\n```plaintext --- ## Integration with Config Loader ### Automatic Decryption The config loader automatically detects and decrypts encrypted files: ```nushell\\n# Load encrypted config (automatically decrypted in memory)\\nuse lib_provisioning/config/loader.nu let config = (load-provisioning-config --debug)\\n```plaintext **Key Features**: - **Transparent**: No code changes needed\\n- **Memory-Only**: Decrypted content never written to disk\\n- **Fallback**: If decryption fails, attempts to load as plain file\\n- **Debug Support**: Shows decryption status with `--debug` flag ### Manual Loading ```nushell\\nuse lib_provisioning/config/encryption.nu # Load encrypted config\\nlet secure_config = (load-encrypted-config \\"workspace/config/secure.enc.yaml\\") # Memory-only decryption (no file created)\\nlet decrypted_content = (decrypt-config-memory \\"workspace/config/secure.enc.yaml\\")\\n```plaintext ### Configuration Hierarchy with Encryption The system supports encrypted files at any level: ```plaintext\\n1. workspace/{name}/config/provisioning.yaml ← Can be encrypted\\n2. workspace/{name}/config/providers/*.toml ← Can be encrypted\\n3. workspace/{name}/config/platform/*.toml ← Can be encrypted\\n4. ~/.../provisioning/ws_{name}.yaml ← Can be encrypted\\n5. Environment variables (PROVISIONING_*) ← Plain text\\n```plaintext --- ## Best Practices ### 1. Encrypt All Sensitive Data **Always encrypt configs containing**: - Passwords\\n- API keys\\n- Secret keys\\n- Private keys\\n- Tokens\\n- Credentials **Scan for unencrypted sensitive data**: ```bash\\nprovisioning config scan-sensitive workspace --recursive\\n```plaintext ### 2. 
Use Appropriate KMS Backend | Environment | Recommended Backend |\\n|-------------|---------------------|\\n| Development | Age (file-based) |\\n| Staging | AWS KMS or Vault |\\n| Production | AWS KMS or Vault |\\n| CI/CD | AWS KMS with IAM roles | ### 3. Key Management **Age Keys**: - Store private keys securely: `~/.config/sops/age/keys.txt`\\n- Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`\\n- Backup keys securely (encrypted backup)\\n- Never commit private keys to git **AWS KMS**: - Use separate keys per environment\\n- Enable key rotation\\n- Use IAM policies for access control\\n- Monitor usage with CloudTrail **Vault**: - Use transit engine for encryption\\n- Enable audit logging\\n- Implement least-privilege policies\\n- Regular policy reviews ### 4. File Organization ```plaintext\\nworkspace/\\n└── config/ ├── provisioning.yaml # Plain (no secrets) ├── secure.yaml # Encrypted (SOPS auto-detects) ├── providers/ │ ├── aws.toml # Plain (no secrets) │ └── aws-credentials.enc.toml # Encrypted └── platform/ └── database.enc.yaml # Encrypted\\n```plaintext ### 5. Git Integration **Add to `.gitignore`**: ```gitignore\\n# Unencrypted sensitive files\\n**/secrets.yaml\\n**/credentials.yaml\\n**/*.dec.yaml\\n**/*.dec.toml # Temporary decrypted files\\n*.tmp.yaml\\n*.tmp.toml\\n```plaintext **Commit encrypted files**: ```bash\\n# Encrypted files are safe to commit\\ngit add workspace/config/secure.enc.yaml\\ngit commit -m \\"Add encrypted configuration\\"\\n```plaintext ### 6. Rotation Strategy **Regular Key Rotation**: ```bash\\n# Generate new Age key\\nage-keygen -o ~/.config/sops/age/keys-new.txt # Update .sops.yaml with new recipient # Rotate keys for file\\nprovisioning config rotate-keys workspace/config/secure.yaml \\n```plaintext **Frequency**: - Development: Annually\\n- Production: Quarterly\\n- After team member departure: Immediately ### 7. Audit and Monitoring **Track encryption status**: ```bash\\n# Regular scans\\nprovisioning config scan-sensitive workspace --recursive # Validate encryption setup\\nprovisioning config validate-encryption\\n```plaintext **Monitor access** (with Vault/AWS KMS): - Enable audit logging\\n- Review access patterns\\n- Alert on anomalies --- ## Troubleshooting ### SOPS Not Found **Error**: ```plaintext\\nSOPS binary not found\\n```plaintext **Solution**: ```bash\\n# Install SOPS\\nbrew install sops # Verify\\nsops --version\\n```plaintext ### Age Key Not Found **Error**: ```plaintext\\nAge key file not found: ~/.config/sops/age/keys.txt\\n```plaintext **Solution**: ```bash\\n# Generate new key\\nmkdir -p ~/.config/sops/age\\nage-keygen -o ~/.config/sops/age/keys.txt # Set environment variable\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext ### SOPS_AGE_RECIPIENTS Not Set **Error**: ```plaintext\\nno AGE_RECIPIENTS for file.yaml\\n```plaintext **Solution**: ```bash\\n# Extract public key from private key\\ngrep \\"public key:\\" ~/.config/sops/age/keys.txt # Set environment variable\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\n```plaintext ### Decryption Failed **Error**: ```plaintext\\nFailed to decrypt configuration file\\n```plaintext **Solutions**: 1. 
**Wrong key**: ```bash # Verify you have the correct private key provisioning config validate-encryption File corrupted : # Check file integrity\\nsops --decrypt workspace/config/secure.yaml Wrong backend : # Check SOPS metadata in file\\nhead -20 workspace/config/secure.yaml","breadcrumbs":"Config Encryption Guide » Verify Installation","id":"1526","title":"Verify Installation"},"1527":{"body":"Error : AccessDeniedException: User is not authorized to perform: kms:Decrypt\\n```plaintext **Solution**: ```bash\\n# Check AWS credentials\\naws sts get-caller-identity # Verify KMS key policy allows your IAM user/role\\naws kms describe-key --key-id \\n```plaintext ### Vault Connection Failed **Error**: ```plaintext\\nVault encryption failed: connection refused\\n```plaintext **Solution**: ```bash\\n# Verify Vault address\\necho $VAULT_ADDR # Check connectivity\\ncurl -k $VAULT_ADDR/v1/sys/health # Verify token\\nvault token lookup\\n```plaintext --- ## Security Considerations ### Threat Model **Protected Against**: - ✅ Plaintext secrets in git\\n- ✅ Accidental secret exposure\\n- ✅ Unauthorized file access\\n- ✅ Key compromise (with rotation) **Not Protected Against**: - ❌ Memory dumps during decryption\\n- ❌ Root/admin access to running process\\n- ❌ Compromised Age/KMS keys\\n- ❌ Social engineering ### Security Best Practices 1. **Principle of Least Privilege**: Only grant decryption access to those who need it\\n2. **Key Separation**: Use different keys for different environments\\n3. **Regular Audits**: Review who has access to keys\\n4. **Secure Key Storage**: Never store private keys in git\\n5. **Rotation**: Regularly rotate encryption keys\\n6. **Monitoring**: Monitor decryption operations (with AWS KMS/Vault) --- ## Additional Resources - **SOPS Documentation**: \\n- **Age Encryption**: \\n- **AWS KMS**: \\n- **HashiCorp Vault**: \\n- **Cosmian KMS**: --- ## Support For issues or questions: - Check troubleshooting section above\\n- Run: `provisioning config validate-encryption`\\n- Review logs with `--debug` flag --- ## Quick Reference ### Setup (One-time) ```bash\\n# 1. Initialize encryption\\nprovisioning config init-encryption --kms age # 2. Set environment variables (add to ~/.zshrc or ~/.bashrc)\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\" # 3. 
Validate setup\\nprovisioning config validate-encryption\\n```plaintext ### Common Commands | Task | Command |\\n|------|---------|\\n| **Encrypt file** | `provisioning config encrypt secrets.yaml --in-place` |\\n| **Decrypt file** | `provisioning config decrypt secrets.enc.yaml` |\\n| **Edit encrypted** | `provisioning config edit-secure secrets.enc.yaml` |\\n| **Check if encrypted** | `provisioning config is-encrypted secrets.yaml` |\\n| **Scan for unencrypted** | `provisioning config scan-sensitive workspace --recursive` |\\n| **Encrypt all sensitive** | `provisioning config encrypt-all workspace/config --kms age` |\\n| **Validate setup** | `provisioning config validate-encryption` |\\n| **Show encryption info** | `provisioning config encryption-info secrets.yaml` | ### File Naming Conventions Automatically encrypted by SOPS: - `workspace/*/config/secure.yaml` ← Auto-encrypted\\n- `*.enc.yaml` ← Auto-encrypted\\n- `*.enc.yml` ← Auto-encrypted\\n- `*.enc.toml` ← Auto-encrypted\\n- `workspace/*/config/providers/*credentials*.toml` ← Auto-encrypted ### Quick Workflow ```bash\\n# Create config with secrets\\ncat > workspace/config/secure.yaml < edit -> re-encrypt)\\nprovisioning config edit-secure workspace/config/secure.yaml # Configs are auto-decrypted when loaded\\nprovisioning env # Automatically decrypts secure.yaml\\n```plaintext ### KMS Backends | Backend | Use Case | Setup Command |\\n|---------|----------|---------------|\\n| **Age** | Development, simple setup | `provisioning config init-encryption --kms age` |\\n| **AWS KMS** | Production, AWS environments | Configure in `.sops.yaml` |\\n| **Vault** | Enterprise, dynamic secrets | Set `VAULT_ADDR` and `VAULT_TOKEN` |\\n| **Cosmian** | Confidential computing | Configure in `config.toml` | ### Security Checklist - ✅ Encrypt all files with passwords, API keys, secrets\\n- ✅ Never commit unencrypted secrets to git\\n- ✅ Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`\\n- ✅ Add plaintext files to `.gitignore`: `*.dec.yaml`, `secrets.yaml`\\n- ✅ Regular key rotation (quarterly for production)\\n- ✅ Separate keys per environment (dev/staging/prod)\\n- ✅ Backup Age keys securely (encrypted backup) ### Troubleshooting | Problem | Solution |\\n|---------|----------|\\n| `SOPS binary not found` | `brew install sops` |\\n| `Age key file not found` | `provisioning config init-encryption --kms age` |\\n| `SOPS_AGE_RECIPIENTS not set` | `export SOPS_AGE_RECIPIENTS=\\"age1...\\"` |\\n| `Decryption failed` | Check key file: `provisioning config validate-encryption` |\\n| `AWS KMS Access Denied` | Verify IAM permissions: `aws sts get-caller-identity` | ### Testing ```bash\\n# Run all encryption tests\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu # Run specific test\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu --test roundtrip # Test full workflow\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu test-full-encryption-workflow # Test KMS backend\\nuse lib_provisioning/kms/client.nu\\nkms-test --backend age\\n```plaintext ### Integration Configs are **automatically decrypted** when loaded: ```nushell\\n# Nushell code - encryption is transparent\\nuse lib_provisioning/config/loader.nu # Auto-decrypts encrypted files in memory\\nlet config = (load-provisioning-config) # Access secrets normally\\nlet db_password = ($config | get database.password)\\n```plaintext ### Emergency Key Recovery If you lose your Age key: 1. 
**Check backups**: `~/.config/sops/age/keys.txt.backup`\\n2. **Check other systems**: Keys might be on other dev machines\\n3. **Contact team**: Team members with access can re-encrypt for you\\n4. **Rotate secrets**: If keys are lost, rotate all secrets ### Advanced #### Multiple Recipients (Team Access) ```yaml\\n# .sops.yaml\\ncreation_rules: - path_regex: .*\\\\.enc\\\\.yaml$ age: >- age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p, age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8q\\n```plaintext #### Key Rotation ```bash\\n# Generate new key\\nage-keygen -o ~/.config/sops/age/keys-new.txt # Update .sops.yaml with new recipient # Rotate keys for file\\nprovisioning config rotate-keys workspace/config/secure.yaml \\n```plaintext #### Scan and Encrypt All ```bash\\n# Find all unencrypted sensitive configs\\nprovisioning config scan-sensitive workspace --recursive # Encrypt them all\\nprovisioning config encrypt-all workspace --kms age --recursive # Verify\\nprovisioning config scan-sensitive workspace --recursive\\n```plaintext ### Documentation - **Full Guide**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **SOPS Docs**: \\n- **Age Docs**: --- **Last Updated**: 2025-10-08\\n**Version**: 1.0.0","breadcrumbs":"Config Encryption Guide » AWS KMS Access Denied","id":"1527","title":"AWS KMS Access Denied"},"1528":{"body":"","breadcrumbs":"Security System » Complete Security System (v4.0.0)","id":"1528","title":"Complete Security System (v4.0.0)"},"1529":{"body":"A comprehensive security system with 39,699 lines across 12 components providing enterprise-grade protection for infrastructure automation.","breadcrumbs":"Security System » 🔐 Enterprise-Grade Security Implementation","id":"1529","title":"🔐 Enterprise-Grade Security Implementation"},"153":{"body":"","breadcrumbs":"Installation Steps » Troubleshooting","id":"153","title":"Troubleshooting"},"1530":{"body":"","breadcrumbs":"Security System » Core Security Components","id":"1530","title":"Core Security Components"},"1531":{"body":"Type : RS256 token-based authentication Features : Argon2id hashing, token rotation, session management Roles : 5 distinct role levels with inheritance Commands : provisioning login\\nprovisioning mfa totp verify","breadcrumbs":"Security System » 1. Authentication (JWT)","id":"1531","title":"1. Authentication (JWT)"},"1532":{"body":"Type : Policy-as-code using Cedar authorization engine Features : Context-aware policies, hot reload, fine-grained control Updates : Dynamic policy reloading without service restart","breadcrumbs":"Security System » 2. Authorization (Cedar)","id":"1532","title":"2. Authorization (Cedar)"},"1533":{"body":"Methods : TOTP (Time-based OTP) + WebAuthn/FIDO2 Features : Backup codes, rate limiting, device binding Commands : provisioning mfa totp enroll\\nprovisioning mfa webauthn enroll","breadcrumbs":"Security System » 3. Multi-Factor Authentication (MFA)","id":"1533","title":"3. Multi-Factor Authentication (MFA)"},"1534":{"body":"Dynamic Secrets : AWS STS, SSH keys, UpCloud credentials KMS Integration : Vault + AWS KMS + Age + Cosmian Features : Auto-cleanup, TTL management, rotation policies Commands : provisioning secrets generate aws --ttl 1hr\\nprovisioning ssh connect server01","breadcrumbs":"Security System » 4. Secrets Management","id":"1534","title":"4. 
Secrets Management"},"1535":{"body":"Backends : RustyVault, Age, AWS KMS, HashiCorp Vault, Cosmian Features : Envelope encryption, key rotation, secure storage Commands : provisioning kms encrypt\\nprovisioning config encrypt secure.yaml","breadcrumbs":"Security System » 5. Key Management System (KMS)","id":"1535","title":"5. Key Management System (KMS)"},"1536":{"body":"Format : Structured JSON logs with full context Compliance : GDPR-compliant with PII filtering Retention : 7-year data retention policy Exports : 5 export formats (JSON, CSV, SYSLOG, Splunk, CloudWatch)","breadcrumbs":"Security System » 6. Audit Logging","id":"1536","title":"6. Audit Logging"},"1537":{"body":"Approval : Multi-party approval workflow Features : Temporary elevated privileges, auto-revocation, audit trail Commands : provisioning break-glass request \\"reason\\"\\nprovisioning break-glass approve ","breadcrumbs":"Security System » 7. Break-Glass Emergency Access","id":"1537","title":"7. Break-Glass Emergency Access"},"1538":{"body":"Standards : GDPR, SOC2, ISO 27001, incident response procedures Features : Compliance reporting, audit trails, policy enforcement Commands : provisioning compliance report\\nprovisioning compliance gdpr export ","breadcrumbs":"Security System » 8. Compliance Management","id":"1538","title":"8. Compliance Management"},"1539":{"body":"Filtering : By user, action, time range, resource Features : Structured query language, real-time search Commands : provisioning audit query --user alice --action deploy --from 24h","breadcrumbs":"Security System » 9. Audit Query System","id":"1539","title":"9. Audit Query System"},"154":{"body":"If plugins aren\'t recognized: # Rebuild plugin registry\\nnu -c \\"plugin list; plugin use tera\\"","breadcrumbs":"Installation Steps » Nushell Plugin Not Found","id":"154","title":"Nushell Plugin Not Found"},"1540":{"body":"Features : Rotation policies, expiration tracking, revocation Integration : Seamless with auth system","breadcrumbs":"Security System » 10. Token Management","id":"1540","title":"10. Token Management"},"1541":{"body":"Model : Role-based access control (RBAC) Features : Resource-level permissions, delegation, audit","breadcrumbs":"Security System » 11. Access Control","id":"1541","title":"11. Access Control"},"1542":{"body":"Standards : AES-256, TLS 1.3, envelope encryption Coverage : At-rest and in-transit encryption","breadcrumbs":"Security System » 12. Encryption","id":"1542","title":"12. 
Encryption"},"1543":{"body":"Overhead : <20ms per secure operation Tests : 350+ comprehensive test cases Endpoints : 83+ REST API endpoints CLI Commands : 111+ security-related commands","breadcrumbs":"Security System » Performance Characteristics","id":"1543","title":"Performance Characteristics"},"1544":{"body":"Component Command Purpose Login provisioning login User authentication MFA TOTP provisioning mfa totp enroll Setup time-based MFA MFA WebAuthn provisioning mfa webauthn enroll Setup hardware security key Secrets provisioning secrets generate aws --ttl 1hr Generate temporary credentials SSH provisioning ssh connect server01 Secure SSH session KMS Encrypt provisioning kms encrypt Encrypt configuration Break-Glass provisioning break-glass request \\"reason\\" Request emergency access Compliance provisioning compliance report Generate compliance report GDPR Export provisioning compliance gdpr export Export user data Audit provisioning audit query --user alice --action deploy --from 24h Search audit logs","breadcrumbs":"Security System » Quick Reference","id":"1544","title":"Quick Reference"},"1545":{"body":"Security system is integrated throughout provisioning platform: Embedded : All authentication/authorization checks Non-blocking : <20ms overhead on operations Graceful degradation : Fallback mechanisms for partial failures Hot reload : Policies update without service restart","breadcrumbs":"Security System » Architecture","id":"1545","title":"Architecture"},"1546":{"body":"Security policies and settings are defined in: provisioning/kcl/security.k - KCL security schema definitions provisioning/config/security/*.toml - Security policy configurations Environment-specific overrides in workspace/config/","breadcrumbs":"Security System » Configuration","id":"1546","title":"Configuration"},"1547":{"body":"Full implementation: ADR-009: Security System Complete User guides: Authentication Layer Guide Admin guides: MFA Admin Setup Guide Implementation details: Various documentation subdirectories","breadcrumbs":"Security System » Documentation","id":"1547","title":"Documentation"},"1548":{"body":"# Show security help\\nprovisioning help security # Show specific security command help\\nprovisioning login --help\\nprovisioning mfa --help\\nprovisioning secrets --help","breadcrumbs":"Security System » Help Commands","id":"1548","title":"Help Commands"},"1549":{"body":"Version : 1.0.0 Date : 2025-10-08 Status : Production-ready","breadcrumbs":"RustyVault KMS Guide » RustyVault KMS Backend Guide","id":"1549","title":"RustyVault KMS Backend Guide"},"155":{"body":"If you encounter permission errors: # Ensure proper ownership\\nsudo chown -R $USER:$USER ~/.config/provisioning # Check PATH\\necho $PATH | grep provisioning","breadcrumbs":"Installation Steps » Permission Denied","id":"155","title":"Permission Denied"},"1550":{"body":"RustyVault is a self-hosted, Rust-based secrets management system that provides a Vault-compatible API . 
The provisioning platform now supports RustyVault as a KMS backend alongside Age, Cosmian, AWS KMS, and HashiCorp Vault.","breadcrumbs":"RustyVault KMS Guide » Overview","id":"1550","title":"Overview"},"1551":{"body":"Self-hosted : Full control over your key management infrastructure Pure Rust : Better performance and memory safety Vault-compatible : Drop-in replacement for HashiCorp Vault Transit engine OSI-approved License : Apache 2.0 (vs HashiCorp\'s BSL) Embeddable : Can run as standalone service or embedded library No Vendor Lock-in : Open-source alternative to proprietary KMS solutions","breadcrumbs":"RustyVault KMS Guide » Why RustyVault?","id":"1551","title":"Why RustyVault?"},"1552":{"body":"KMS Service Backends:\\n├── Age (local development, file-based)\\n├── Cosmian (privacy-preserving, production)\\n├── AWS KMS (cloud-native AWS)\\n├── HashiCorp Vault (enterprise, external)\\n└── RustyVault (self-hosted, embedded) ✨ NEW\\n```plaintext --- ## Installation ### Option 1: Standalone RustyVault Server ```bash\\n# Install RustyVault binary\\ncargo install rusty_vault # Start RustyVault server\\nrustyvault server -config=/path/to/config.hcl\\n```plaintext ### Option 2: Docker Deployment ```bash\\n# Pull RustyVault image (if available)\\ndocker pull tongsuo/rustyvault:latest # Run RustyVault container\\ndocker run -d \\\\ --name rustyvault \\\\ -p 8200:8200 \\\\ -v $(pwd)/config:/vault/config \\\\ -v $(pwd)/data:/vault/data \\\\ tongsuo/rustyvault:latest\\n```plaintext ### Option 3: From Source ```bash\\n# Clone repository\\ngit clone https://github.com/Tongsuo-Project/RustyVault.git\\ncd RustyVault # Build and run\\ncargo build --release\\n./target/release/rustyvault server -config=config.hcl\\n```plaintext --- ## Configuration ### RustyVault Server Configuration Create `rustyvault-config.hcl`: ```hcl\\n# RustyVault Server Configuration storage \\"file\\" { path = \\"/vault/data\\"\\n} listener \\"tcp\\" { address = \\"0.0.0.0:8200\\" tls_disable = true # Enable TLS in production\\n} api_addr = \\"http://127.0.0.1:8200\\"\\ncluster_addr = \\"https://127.0.0.1:8201\\" # Enable Transit secrets engine\\ndefault_lease_ttl = \\"168h\\"\\nmax_lease_ttl = \\"720h\\"\\n```plaintext ### Initialize RustyVault ```bash\\n# Initialize (first time only)\\nexport VAULT_ADDR=\'http://127.0.0.1:8200\'\\nrustyvault operator init # Unseal (after every restart)\\nrustyvault operator unseal \\nrustyvault operator unseal \\nrustyvault operator unseal # Save root token\\nexport RUSTYVAULT_TOKEN=\'\'\\n```plaintext ### Enable Transit Engine ```bash\\n# Enable transit secrets engine\\nrustyvault secrets enable transit # Create encryption key\\nrustyvault write -f transit/keys/provisioning-main # Verify key creation\\nrustyvault read transit/keys/provisioning-main\\n```plaintext --- ## KMS Service Configuration ### Update `provisioning/config/kms.toml` ```toml\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"http://localhost:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\nmount_point = \\"transit\\"\\nkey_name = \\"provisioning-main\\"\\ntls_verify = true [service]\\nbind_addr = \\"0.0.0.0:8081\\"\\nlog_level = \\"info\\"\\naudit_logging = true [tls]\\nenabled = false # Set true with HTTPS\\n```plaintext ### Environment Variables ```bash\\n# RustyVault connection\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"s.xxxxxxxxxxxxxxxxxxxxxx\\"\\nexport RUSTYVAULT_MOUNT_POINT=\\"transit\\"\\nexport RUSTYVAULT_KEY_NAME=\\"provisioning-main\\"\\nexport 
RUSTYVAULT_TLS_VERIFY=\\"true\\" # KMS service\\nexport KMS_BACKEND=\\"rustyvault\\"\\nexport KMS_BIND_ADDR=\\"0.0.0.0:8081\\"\\n```plaintext --- ## Usage ### Start KMS Service ```bash\\n# With RustyVault backend\\ncd provisioning/platform/kms-service\\ncargo run # With custom config\\ncargo run -- --config=/path/to/kms.toml\\n```plaintext ### CLI Operations ```bash\\n# Encrypt configuration file\\nprovisioning kms encrypt provisioning/config/secrets.yaml # Decrypt configuration\\nprovisioning kms decrypt provisioning/config/secrets.yaml.enc # Generate data key (envelope encryption)\\nprovisioning kms generate-key --spec AES256 # Health check\\nprovisioning kms health\\n```plaintext ### REST API Usage ```bash\\n# Health check\\ncurl http://localhost:8081/health # Encrypt data\\ncurl -X POST http://localhost:8081/encrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"plaintext\\": \\"SGVsbG8sIFdvcmxkIQ==\\", \\"context\\": \\"environment=production\\" }\' # Decrypt data\\ncurl -X POST http://localhost:8081/decrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"ciphertext\\": \\"vault:v1:...\\", \\"context\\": \\"environment=production\\" }\' # Generate data key\\ncurl -X POST http://localhost:8081/datakey/generate \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"key_spec\\": \\"AES_256\\"}\'\\n```plaintext --- ## Advanced Features ### Context-based Encryption (AAD) Additional authenticated data binds encrypted data to specific contexts: ```bash\\n# Encrypt with context\\ncurl -X POST http://localhost:8081/encrypt \\\\ -d \'{ \\"plaintext\\": \\"c2VjcmV0\\", \\"context\\": \\"environment=prod,service=api\\" }\' # Decrypt requires same context\\ncurl -X POST http://localhost:8081/decrypt \\\\ -d \'{ \\"ciphertext\\": \\"vault:v1:...\\", \\"context\\": \\"environment=prod,service=api\\" }\'\\n```plaintext ### Envelope Encryption For large files, use envelope encryption: ```bash\\n# 1. Generate data key\\nDATA_KEY=$(curl -X POST http://localhost:8081/datakey/generate \\\\ -d \'{\\"key_spec\\": \\"AES_256\\"}\' | jq -r \'.plaintext\') # 2. Encrypt large file with data key (locally)\\nopenssl enc -aes-256-cbc -in large-file.bin -out encrypted.bin -K $DATA_KEY # 3. 
Store encrypted data key (from response)\\necho \\"vault:v1:...\\" > encrypted-data-key.txt\\n```plaintext ### Key Rotation ```bash\\n# Rotate encryption key in RustyVault\\nrustyvault write -f transit/keys/provisioning-main/rotate # Verify new version\\nrustyvault read transit/keys/provisioning-main # Rewrap existing ciphertext with new key version\\ncurl -X POST http://localhost:8081/rewrap \\\\ -d \'{\\"ciphertext\\": \\"vault:v1:...\\"}\'\\n```plaintext --- ## Production Deployment ### High Availability Setup Deploy multiple RustyVault instances behind a load balancer: ```yaml\\n# docker-compose.yml\\nversion: \'3.8\' services: rustyvault-1: image: tongsuo/rustyvault:latest ports: - \\"8200:8200\\" volumes: - ./config:/vault/config - vault-data-1:/vault/data rustyvault-2: image: tongsuo/rustyvault:latest ports: - \\"8201:8200\\" volumes: - ./config:/vault/config - vault-data-2:/vault/data lb: image: nginx:alpine ports: - \\"80:80\\" volumes: - ./nginx.conf:/etc/nginx/nginx.conf depends_on: - rustyvault-1 - rustyvault-2 volumes: vault-data-1: vault-data-2:\\n```plaintext ### TLS Configuration ```toml\\n# kms.toml\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"https://vault.example.com:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\ntls_verify = true [tls]\\nenabled = true\\ncert_path = \\"/etc/kms/certs/server.crt\\"\\nkey_path = \\"/etc/kms/certs/server.key\\"\\nca_path = \\"/etc/kms/certs/ca.crt\\"\\n```plaintext ### Auto-Unseal (AWS KMS) ```hcl\\n# rustyvault-config.hcl\\nseal \\"awskms\\" { region = \\"us-east-1\\" kms_key_id = \\"arn:aws:kms:us-east-1:123456789012:key/...\\"\\n}\\n```plaintext --- ## Monitoring ### Health Checks ```bash\\n# RustyVault health\\ncurl http://localhost:8200/v1/sys/health # KMS service health\\ncurl http://localhost:8081/health # Metrics (if enabled)\\ncurl http://localhost:8081/metrics\\n```plaintext ### Audit Logging Enable audit logging in RustyVault: ```hcl\\n# rustyvault-config.hcl\\naudit { path = \\"/vault/logs/audit.log\\" format = \\"json\\"\\n}\\n```plaintext --- ## Troubleshooting ### Common Issues **1. Connection Refused** ```bash\\n# Check RustyVault is running\\ncurl http://localhost:8200/v1/sys/health # Check token is valid\\nexport VAULT_ADDR=\'http://localhost:8200\'\\nrustyvault token lookup\\n```plaintext **2. Authentication Failed** ```bash\\n# Verify token in environment\\necho $RUSTYVAULT_TOKEN # Renew token if needed\\nrustyvault token renew\\n```plaintext **3. Key Not Found** ```bash\\n# List available keys\\nrustyvault list transit/keys # Create missing key\\nrustyvault write -f transit/keys/provisioning-main\\n```plaintext **4. TLS Verification Failed** ```bash\\n# Disable TLS verification (dev only)\\nexport RUSTYVAULT_TLS_VERIFY=false # Or add CA certificate\\nexport RUSTYVAULT_CACERT=/path/to/ca.crt\\n```plaintext --- ## Migration from Other Backends ### From HashiCorp Vault RustyVault is API-compatible, minimal changes required: ```bash\\n# Old config (Vault)\\n[kms]\\ntype = \\"vault\\"\\naddress = \\"https://vault.example.com:8200\\"\\ntoken = \\"${VAULT_TOKEN}\\" # New config (RustyVault)\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"http://rustyvault.example.com:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\n```plaintext ### From Age Re-encrypt existing encrypted files: ```bash\\n# 1. Decrypt with Age\\nprovisioning kms decrypt --backend age secrets.enc > secrets.plain # 2. 
Encrypt with RustyVault\\nprovisioning kms encrypt --backend rustyvault secrets.plain > secrets.rustyvault.enc\\n```plaintext --- ## Security Considerations ### Best Practices 1. **Enable TLS**: Always use HTTPS in production\\n2. **Rotate Tokens**: Regularly rotate RustyVault tokens\\n3. **Least Privilege**: Use policies to restrict token permissions\\n4. **Audit Logging**: Enable and monitor audit logs\\n5. **Backup Keys**: Secure backup of unseal keys and root token\\n6. **Network Isolation**: Run RustyVault in isolated network segment ### Token Policies Create restricted policy for KMS service: ```hcl\\n# kms-policy.hcl\\npath \\"transit/encrypt/provisioning-main\\" { capabilities = [\\"update\\"]\\n} path \\"transit/decrypt/provisioning-main\\" { capabilities = [\\"update\\"]\\n} path \\"transit/datakey/plaintext/provisioning-main\\" { capabilities = [\\"update\\"]\\n}\\n```plaintext Apply policy: ```bash\\nrustyvault policy write kms-service kms-policy.hcl\\nrustyvault token create -policy=kms-service\\n```plaintext --- ## Performance ### Benchmarks (Estimated) | Operation | Latency | Throughput |\\n|-----------|---------|------------|\\n| Encrypt | 5-15ms | 2,000-5,000 ops/sec |\\n| Decrypt | 5-15ms | 2,000-5,000 ops/sec |\\n| Generate Key | 10-20ms | 1,000-2,000 ops/sec | *Actual performance depends on hardware, network, and RustyVault configuration* ### Optimization Tips 1. **Connection Pooling**: Reuse HTTP connections\\n2. **Batching**: Batch multiple operations when possible\\n3. **Caching**: Cache data keys for envelope encryption\\n4. **Local Unseal**: Use auto-unseal for faster restarts --- ## Related Documentation - **KMS Service**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **Dynamic Secrets**: `docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md`\\n- **Security System**: `docs/architecture/ADR-009-security-system-complete.md`\\n- **RustyVault GitHub**: --- ## Support - **GitHub Issues**: \\n- **Documentation**: \\n- **Community**: --- **Last Updated**: 2025-10-08\\n**Maintained By**: Architecture Team","breadcrumbs":"RustyVault KMS Guide » Architecture Position","id":"1552","title":"Architecture Position"},"1553":{"body":"SecretumVault is an enterprise-grade, post-quantum ready secrets management system integrated as the 4th KMS backend in the provisioning platform, alongside Age (dev), Cosmian (prod), and RustyVault (self-hosted).","breadcrumbs":"SecretumVault KMS Guide » SecretumVault KMS Backend Guide","id":"1553","title":"SecretumVault KMS Backend Guide"},"1554":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Overview","id":"1554","title":"Overview"},"1555":{"body":"SecretumVault provides: Post-Quantum Cryptography : Ready for quantum-resistant algorithms Enterprise Features : Policy-as-code (Cedar), audit logging, compliance tracking Multiple Storage Backends : Filesystem (dev), SurrealDB (staging), etcd (prod), PostgreSQL Transit Engine : Encryption-as-a-service for data protection KV Engine : Versioned secret storage with rotation policies High Availability : Seamless transition from embedded to distributed modes","breadcrumbs":"SecretumVault KMS Guide » What is SecretumVault?","id":"1555","title":"What is SecretumVault?"},"1556":{"body":"Scenario Backend Reason Local development Age Simple, no dependencies Testing/Staging SecretumVault Enterprise features, production-like Production Cosmian or SecretumVault Enterprise security, compliance Self-Hosted Enterprise SecretumVault + etcd Full control, HA support","breadcrumbs":"SecretumVault KMS Guide » When to Use 
SecretumVault","id":"1556","title":"When to Use SecretumVault"},"1557":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Deployment Modes","id":"1557","title":"Deployment Modes"},"1558":{"body":"Storage : Filesystem (~/.config/provisioning/secretumvault/data) Performance : <3ms encryption/decryption Setup : No separate service required Best For : Local development and testing export PROVISIONING_ENV=dev\\nexport KMS_DEV_BACKEND=secretumvault\\nprovisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Development Mode (Embedded)","id":"1558","title":"Development Mode (Embedded)"},"1559":{"body":"Storage : SurrealDB (document database) Performance : <10ms operations Setup : Start SecretumVault service separately Best For : Team testing, staging environments # Start SecretumVault service\\nsecretumvault server --storage-backend surrealdb # Configure provisioning\\nexport PROVISIONING_ENV=staging\\nexport SECRETUMVAULT_URL=http://localhost:8200\\nexport SECRETUMVAULT_TOKEN=your-auth-token provisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Staging Mode (Service + SurrealDB)","id":"1559","title":"Staging Mode (Service + SurrealDB)"},"156":{"body":"If encryption fails: # Verify keys exist\\nls -la ~/.config/provisioning/age/ # Regenerate if needed\\nage-keygen -o ~/.config/provisioning/age/private_key.txt","breadcrumbs":"Installation Steps » Age Keys Not Found","id":"156","title":"Age Keys Not Found"},"1560":{"body":"Storage : etcd cluster (3+ nodes) Performance : <10ms operations (99th percentile) Setup : etcd cluster + SecretumVault service Best For : Production deployments with HA requirements # Setup etcd cluster (3 nodes minimum)\\netcd --name etcd1 --data-dir etcd1-data \\\\ --advertise-client-urls http://localhost:2379 \\\\ --listen-client-urls http://localhost:2379 # Start SecretumVault with etcd\\nsecretumvault server \\\\ --storage-backend etcd \\\\ --etcd-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 # Configure provisioning\\nexport PROVISIONING_ENV=prod\\nexport SECRETUMVAULT_URL=https://your-secretumvault:8200\\nexport SECRETUMVAULT_TOKEN=your-auth-token\\nexport SECRETUMVAULT_STORAGE=etcd provisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Production Mode (Service + etcd)","id":"1560","title":"Production Mode (Service + etcd)"},"1561":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Configuration","id":"1561","title":"Configuration"},"1562":{"body":"Variable Purpose Default Example PROVISIONING_ENV Deployment environment dev staging, prod KMS_DEV_BACKEND Development KMS backend age secretumvault KMS_STAGING_BACKEND Staging KMS backend secretumvault cosmian KMS_PROD_BACKEND Production KMS backend cosmian secretumvault SECRETUMVAULT_URL Server URL http://localhost:8200 https://kms.example.com SECRETUMVAULT_TOKEN Authentication token (none) (Bearer token) SECRETUMVAULT_STORAGE Storage backend filesystem surrealdb, etcd SECRETUMVAULT_TLS_VERIFY Verify TLS certificates false true","breadcrumbs":"SecretumVault KMS Guide » Environment Variables","id":"1562","title":"Environment Variables"},"1563":{"body":"System Defaults : provisioning/config/secretumvault.toml KMS Config : provisioning/config/kms.toml Edit these files to customize: Engine mount points Key names Storage backend settings Performance tuning Audit logging Key rotation policies","breadcrumbs":"SecretumVault KMS Guide » Configuration Files","id":"1563","title":"Configuration Files"},"1564":{"body":"","breadcrumbs":"SecretumVault 
KMS Guide » Operations","id":"1564","title":"Operations"},"1565":{"body":"# Encrypt a file\\nprovisioning kms encrypt config.yaml\\n# Output: config.yaml.enc # Encrypt with specific key\\nprovisioning kms encrypt --key-id my-key config.yaml # Encrypt and sign\\nprovisioning kms encrypt --sign config.yaml","breadcrumbs":"SecretumVault KMS Guide » Encrypt Data","id":"1565","title":"Encrypt Data"},"1566":{"body":"# Decrypt a file\\nprovisioning kms decrypt config.yaml.enc\\n# Output: config.yaml # Decrypt with specific key\\nprovisioning kms decrypt --key-id my-key config.yaml.enc # Verify and decrypt\\nprovisioning kms decrypt --verify config.yaml.enc","breadcrumbs":"SecretumVault KMS Guide » Decrypt Data","id":"1566","title":"Decrypt Data"},"1567":{"body":"# Generate AES-256 data key\\nprovisioning kms generate-key --spec AES256 # Generate AES-128 data key\\nprovisioning kms generate-key --spec AES128 # Generate RSA-4096 key\\nprovisioning kms generate-key --spec RSA4096","breadcrumbs":"SecretumVault KMS Guide » Generate Data Keys","id":"1567","title":"Generate Data Keys"},"1568":{"body":"# Check KMS health\\nprovisioning kms health # Get KMS version\\nprovisioning kms version # Detailed KMS status\\nprovisioning kms status","breadcrumbs":"SecretumVault KMS Guide » Health and Status","id":"1568","title":"Health and Status"},"1569":{"body":"# Rotate encryption key\\nprovisioning kms rotate-key provisioning-master # Check rotation policy\\nprovisioning kms rotation-policy provisioning-master # Update rotation interval\\nprovisioning kms update-rotation 90 # Rotate every 90 days","breadcrumbs":"SecretumVault KMS Guide » Key Rotation","id":"1569","title":"Key Rotation"},"157":{"body":"Once installation is complete, proceed to: → First Deployment","breadcrumbs":"Installation Steps » Next Steps","id":"157","title":"Next Steps"},"1570":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Storage Backends","id":"1570","title":"Storage Backends"},"1571":{"body":"Local file-based storage with no external dependencies. Pros : Zero external dependencies Fast (local disk access) Easy to inspect/backup Cons : Single-node only No HA Manual backup required Configuration : [secretumvault.storage.filesystem]\\ndata_dir = \\"~/.config/provisioning/secretumvault/data\\"\\npermissions = \\"0700\\"","breadcrumbs":"SecretumVault KMS Guide » Filesystem (Development)","id":"1571","title":"Filesystem (Development)"},"1572":{"body":"Embedded or standalone document database. Pros : Embedded or distributed Flexible schema Real-time syncing Cons : More complex than filesystem New technology (less tested than etcd) Configuration : [secretumvault.storage.surrealdb]\\nconnection_url = \\"ws://localhost:8000\\"\\nnamespace = \\"provisioning\\"\\ndatabase = \\"secrets\\"\\nusername = \\"${SECRETUMVAULT_SURREALDB_USER:-admin}\\"\\npassword = \\"${SECRETUMVAULT_SURREALDB_PASS:-password}\\"","breadcrumbs":"SecretumVault KMS Guide » SurrealDB (Staging)","id":"1572","title":"SurrealDB (Staging)"},"1573":{"body":"Distributed key-value store for high availability. 
Pros : Proven in production HA and disaster recovery Consistent consensus protocol Multi-site replication Cons : Operational complexity Requires 3+ nodes More infrastructure Configuration : [secretumvault.storage.etcd]\\nendpoints = [\\"http://etcd1:2379\\", \\"http://etcd2:2379\\", \\"http://etcd3:2379\\"]\\ntls_enabled = true\\ntls_cert_file = \\"/path/to/client.crt\\"\\ntls_key_file = \\"/path/to/client.key\\"","breadcrumbs":"SecretumVault KMS Guide » etcd (Production)","id":"1573","title":"etcd (Production)"},"1574":{"body":"Relational database backend. Pros : Mature and reliable Advanced querying Full ACID transactions Cons : Schema requirements External database dependency More operational overhead Configuration : [secretumvault.storage.postgresql]\\nconnection_url = \\"postgresql://user:pass@localhost:5432/secretumvault\\"\\nmax_connections = 10\\nssl_mode = \\"require\\"","breadcrumbs":"SecretumVault KMS Guide » PostgreSQL (Enterprise)","id":"1574","title":"PostgreSQL (Enterprise)"},"1575":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Troubleshooting","id":"1575","title":"Troubleshooting"},"1576":{"body":"Error : \\"Failed to connect to SecretumVault service\\" Solutions : Verify SecretumVault is running: curl http://localhost:8200/v1/sys/health Check server URL configuration: provisioning config show secretumvault.server_url Verify network connectivity: nc -zv localhost 8200","breadcrumbs":"SecretumVault KMS Guide » Connection Errors","id":"1576","title":"Connection Errors"},"1577":{"body":"Error : \\"Authentication failed: X-Vault-Token missing or invalid\\" Solutions : Set authentication token: export SECRETUMVAULT_TOKEN=your-token Verify token is still valid: provisioning secrets verify-token Get new token from SecretumVault: secretumvault auth login","breadcrumbs":"SecretumVault KMS Guide » Authentication Failures","id":"1577","title":"Authentication Failures"},"1578":{"body":"Filesystem Backend Error : \\"Permission denied: ~/.config/provisioning/secretumvault/data\\" Solution : Check directory permissions: ls -la ~/.config/provisioning/secretumvault/\\n# Should be: drwx------ (0700)\\nchmod 700 ~/.config/provisioning/secretumvault/data SurrealDB Backend Error : \\"Failed to connect to SurrealDB at ws://localhost:8000\\" Solution : Start SurrealDB first: surreal start --bind 0.0.0.0:8000 file://secretum.db etcd Backend Error : \\"etcd cluster unhealthy\\" Solution : Check etcd cluster status: etcdctl member list\\netcdctl endpoint health # Verify all nodes are reachable\\ncurl http://etcd1:2379/health\\ncurl http://etcd2:2379/health\\ncurl http://etcd3:2379/health","breadcrumbs":"SecretumVault KMS Guide » Storage Backend Errors","id":"1578","title":"Storage Backend Errors"},"1579":{"body":"Slow encryption/decryption : Check network latency (for service mode): ping -c 3 secretumvault-server Monitor SecretumVault performance: provisioning kms metrics Check storage backend performance: Filesystem: Check disk I/O SurrealDB: Monitor database load etcd: Check cluster consensus state High memory usage : Check cache settings: provisioning config show secretumvault.performance.cache_ttl Reduce cache TTL: provisioning config set secretumvault.performance.cache_ttl 60 Monitor active connections: provisioning kms status","breadcrumbs":"SecretumVault KMS Guide » Performance Issues","id":"1579","title":"Performance Issues"},"158":{"body":"Detailed Installation Guide Workspace Management Troubleshooting Guide","breadcrumbs":"Installation Steps » Additional 
Resources","id":"158","title":"Additional Resources"},"1580":{"body":"Enable debug logging : export RUST_LOG=debug\\nprovisioning kms encrypt config.yaml Check configuration : provisioning config show secretumvault\\nprovisioning config validate Test connectivity : provisioning kms health --verbose View audit logs : tail -f ~/.config/provisioning/logs/secretumvault-audit.log","breadcrumbs":"SecretumVault KMS Guide » Debugging","id":"1580","title":"Debugging"},"1581":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Security Best Practices","id":"1581","title":"Security Best Practices"},"1582":{"body":"Never commit tokens to version control Use environment variables or .env files (gitignored) Rotate tokens regularly Use different tokens per environment","breadcrumbs":"SecretumVault KMS Guide » Token Management","id":"1582","title":"Token Management"},"1583":{"body":"Enable TLS verification in production: export SECRETUMVAULT_TLS_VERIFY=true Use proper certificates (not self-signed in production) Pin certificates to prevent MITM attacks","breadcrumbs":"SecretumVault KMS Guide » TLS/SSL","id":"1583","title":"TLS/SSL"},"1584":{"body":"Restrict who can access SecretumVault admin UI Use strong authentication (MFA preferred) Audit all secrets access Implement least-privilege principle","breadcrumbs":"SecretumVault KMS Guide » Access Control","id":"1584","title":"Access Control"},"1585":{"body":"Rotate keys regularly (every 90 days recommended) Keep old versions for decryption Test rotation procedures in staging first Monitor rotation status","breadcrumbs":"SecretumVault KMS Guide » Key Rotation","id":"1585","title":"Key Rotation"},"1586":{"body":"Backup SecretumVault data regularly Test restore procedures Store backups securely Keep backup keys separate from encrypted data","breadcrumbs":"SecretumVault KMS Guide » Backup and Recovery","id":"1586","title":"Backup and Recovery"},"1587":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Migration Guide","id":"1587","title":"Migration Guide"},"1588":{"body":"# Export all secrets encrypted with Age\\nprovisioning secrets export --backend age --output secrets.json # Import into SecretumVault\\nprovisioning secrets import --backend secretumvault secrets.json # Re-encrypt all configurations\\nfind workspace/infra -name \\"*.enc\\" -exec provisioning kms reencrypt {} \\\\;","breadcrumbs":"SecretumVault KMS Guide » From Age to SecretumVault","id":"1588","title":"From Age to SecretumVault"},"1589":{"body":"# Both use Vault-compatible APIs, so migration is simpler:\\n# 1. Ensure SecretumVault keys are available\\n# 2. Update KMS_PROD_BACKEND=secretumvault\\n# 3. Test with staging first\\n# 4. Monitor during transition","breadcrumbs":"SecretumVault KMS Guide » From RustyVault to SecretumVault","id":"1589","title":"From RustyVault to SecretumVault"},"159":{"body":"This guide walks you through deploying your first infrastructure using the Provisioning Platform.","breadcrumbs":"First Deployment » First Deployment","id":"159","title":"First Deployment"},"1590":{"body":"# For production migration:\\n# 1. Set up SecretumVault with etcd backend\\n# 2. Verify high availability is working\\n# 3. Run parallel encryption with both systems\\n# 4. Validate all decryptions work\\n# 5. Update KMS_PROD_BACKEND=secretumvault\\n# 6. Monitor closely for 24 hours\\n# 7. 
Keep Cosmian as fallback for 7 days","breadcrumbs":"SecretumVault KMS Guide » From Cosmian to SecretumVault","id":"1590","title":"From Cosmian to SecretumVault"},"1591":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Performance Tuning","id":"1591","title":"Performance Tuning"},"1592":{"body":"[secretumvault.performance]\\nmax_connections = 5\\nconnection_timeout = 5\\nrequest_timeout = 30\\ncache_ttl = 60","breadcrumbs":"SecretumVault KMS Guide » Development (Filesystem)","id":"1592","title":"Development (Filesystem)"},"1593":{"body":"[secretumvault.performance]\\nmax_connections = 20\\nconnection_timeout = 5\\nrequest_timeout = 30\\ncache_ttl = 300","breadcrumbs":"SecretumVault KMS Guide » Staging (SurrealDB)","id":"1593","title":"Staging (SurrealDB)"},"1594":{"body":"[secretumvault.performance]\\nmax_connections = 50\\nconnection_timeout = 10\\nrequest_timeout = 30\\ncache_ttl = 600","breadcrumbs":"SecretumVault KMS Guide » Production (etcd)","id":"1594","title":"Production (etcd)"},"1595":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Compliance and Audit","id":"1595","title":"Compliance and Audit"},"1596":{"body":"All operations are logged: # View recent audit events\\nprovisioning kms audit --limit 100 # Export audit logs\\nprovisioning kms audit export --output audit.json # Audit specific operations\\nprovisioning kms audit --action encrypt --from 24h","breadcrumbs":"SecretumVault KMS Guide » Audit Logging","id":"1596","title":"Audit Logging"},"1597":{"body":"# Generate compliance report\\nprovisioning compliance report --backend secretumvault # GDPR data export\\nprovisioning compliance gdpr-export user@example.com # SOC2 audit trail\\nprovisioning compliance soc2-export --output soc2-audit.json","breadcrumbs":"SecretumVault KMS Guide » Compliance Reports","id":"1597","title":"Compliance Reports"},"1598":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Advanced Topics","id":"1598","title":"Advanced Topics"},"1599":{"body":"Enable fine-grained access control: # Enable Cedar integration\\nprovisioning config set secretumvault.authorization.cedar_enabled true # Define access policies\\nprovisioning policy define-kms-access user@example.com admin\\nprovisioning policy define-kms-access deployer@example.com deploy-only","breadcrumbs":"SecretumVault KMS Guide » Cedar Authorization Policies","id":"1599","title":"Cedar Authorization Policies"},"16":{"body":"Linux : Any modern distribution (Ubuntu 20.04+, CentOS 8+, Debian 11+) macOS : 11.0+ (Big Sur and newer) Windows : Windows 10/11 with WSL2","breadcrumbs":"Installation Guide » Operating System Support","id":"16","title":"Operating System Support"},"160":{"body":"In this chapter, you\'ll: Configure a simple infrastructure Create your first server Install a task service (Kubernetes) Verify the deployment Estimated time: 10-15 minutes","breadcrumbs":"First Deployment » Overview","id":"160","title":"Overview"},"1600":{"body":"Configure master key settings: # Set KEK rotation interval\\nprovisioning config set secretumvault.rotation.rotation_interval_days 90 # Enable automatic rotation\\nprovisioning config set secretumvault.rotation.auto_rotate true # Retain old versions for decryption\\nprovisioning config set secretumvault.rotation.retain_old_versions true","breadcrumbs":"SecretumVault KMS Guide » Key Encryption Keys (KEK)","id":"1600","title":"Key Encryption Keys (KEK)"},"1601":{"body":"For production deployments across regions: # Region 1\\nexport SECRETUMVAULT_URL=https://kms-us-east.example.com\\nexport 
SECRETUMVAULT_STORAGE=etcd # Region 2 (for failover)\\nexport SECRETUMVAULT_URL_FALLBACK=https://kms-us-west.example.com","breadcrumbs":"SecretumVault KMS Guide » Multi-Region Setup","id":"1601","title":"Multi-Region Setup"},"1602":{"body":"Documentation : docs/user/SECRETUMVAULT_KMS_GUIDE.md (this file) Configuration Template : provisioning/config/secretumvault.toml KMS Configuration : provisioning/config/kms.toml Issues : Report issues with provisioning kms debug Logs : Check ~/.config/provisioning/logs/secretumvault-*.log","breadcrumbs":"SecretumVault KMS Guide » Support and Resources","id":"1602","title":"Support and Resources"},"1603":{"body":"Age KMS Guide - Simple local encryption Cosmian KMS Guide - Enterprise confidential computing RustyVault Guide - Self-hosted Vault KMS Overview - KMS backend comparison","breadcrumbs":"SecretumVault KMS Guide » See Also","id":"1603","title":"See Also"},"1604":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » SSH Temporal Keys - User Guide","id":"1604","title":"SSH Temporal Keys - User Guide"},"1605":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Quick Start","id":"1605","title":"Quick Start"},"1606":{"body":"The fastest way to use temporal SSH keys: # Auto-generate, deploy, and connect (key auto-revoked after disconnect)\\nssh connect server.example.com # Connect with custom user and TTL\\nssh connect server.example.com --user deploy --ttl 30min # Keep key active after disconnect\\nssh connect server.example.com --keep\\n```plaintext ### Manual Key Management For more control over the key lifecycle: ```bash\\n# 1. Generate key\\nssh generate-key server.example.com --user root --ttl 1hr # Output:\\n# ✓ SSH key generated successfully\\n# Key ID: abc-123-def-456\\n# Type: dynamickeypair\\n# User: root\\n# Server: server.example.com\\n# Expires: 2024-01-01T13:00:00Z\\n# Fingerprint: SHA256:...\\n#\\n# Private Key (save securely):\\n# -----BEGIN OPENSSH PRIVATE KEY-----\\n# ...\\n# -----END OPENSSH PRIVATE KEY----- # 2. Deploy key to server\\nssh deploy-key abc-123-def-456 # 3. Use the private key to connect\\nssh -i /path/to/private/key root@server.example.com # 4. 
Revoke when done\\nssh revoke-key abc-123-def-456\\n```plaintext ## Key Features ### Automatic Expiration All keys expire automatically after their TTL: - **Default TTL**: 1 hour\\n- **Configurable**: From 5 minutes to 24 hours\\n- **Background Cleanup**: Automatic removal from servers every 5 minutes ### Multiple Key Types Choose the right key type for your use case: | Type | Description | Use Case |\\n|------|-------------|----------|\\n| **dynamic** (default) | Generated Ed25519 keys | Quick SSH access |\\n| **ca** | Vault CA-signed certificate | Enterprise with SSH CA |\\n| **otp** | Vault one-time password | Single-use access | ### Security Benefits ✅ No static SSH keys to manage\\n✅ Short-lived credentials (1 hour default)\\n✅ Automatic cleanup on expiration\\n✅ Audit trail for all operations\\n✅ Private keys never stored on disk ## Common Usage Patterns ### Development Workflow ```bash\\n# Quick SSH for debugging\\nssh connect dev-server.local --ttl 30min # Execute commands\\nssh root@dev-server.local \\"systemctl status nginx\\" # Connection closes, key auto-revokes\\n```plaintext ### Production Deployment ```bash\\n# Generate key with longer TTL for deployment\\nssh generate-key prod-server.example.com --ttl 2hr # Deploy to server\\nssh deploy-key # Run deployment script\\nssh -i /tmp/deploy-key root@prod-server.example.com < deploy.sh # Manual revoke when done\\nssh revoke-key \\n```plaintext ### Multi-Server Access ```bash\\n# Generate one key\\nssh generate-key server01.example.com --ttl 1hr # Use the same private key for multiple servers (if you have provisioning access)\\n# Note: Currently each key is server-specific, multi-server support coming soon\\n```plaintext ## Command Reference ### ssh generate-key Generate a new temporal SSH key. **Syntax**: ```bash\\nssh generate-key [options]\\n```plaintext **Options**: - `--user `: SSH user (default: root)\\n- `--ttl `: Key lifetime (default: 1hr)\\n- `--type `: Key type (default: dynamic)\\n- `--ip
`: Allowed IP (OTP mode only)\\n- `--principal `: Principal (CA mode only) **Examples**: ```bash\\n# Basic usage\\nssh generate-key server.example.com # Custom user and TTL\\nssh generate-key server.example.com --user deploy --ttl 30min # Vault CA mode\\nssh generate-key server.example.com --type ca --principal admin\\n```plaintext ### ssh deploy-key Deploy a generated key to the target server. **Syntax**: ```bash\\nssh deploy-key \\n```plaintext **Example**: ```bash\\nssh deploy-key abc-123-def-456\\n```plaintext ### ssh list-keys List all active SSH keys. **Syntax**: ```bash\\nssh list-keys [--expired]\\n```plaintext **Examples**: ```bash\\n# List active keys\\nssh list-keys # Show only deployed keys\\nssh list-keys | where deployed == true # Include expired keys\\nssh list-keys --expired\\n```plaintext ### ssh get-key Get detailed information about a specific key. **Syntax**: ```bash\\nssh get-key \\n```plaintext **Example**: ```bash\\nssh get-key abc-123-def-456\\n```plaintext ### ssh revoke-key Immediately revoke a key (removes from server and tracking). **Syntax**: ```bash\\nssh revoke-key \\n```plaintext **Example**: ```bash\\nssh revoke-key abc-123-def-456\\n```plaintext ### ssh connect Auto-generate, deploy, connect, and revoke (all-in-one). **Syntax**: ```bash\\nssh connect [options]\\n```plaintext **Options**: - `--user `: SSH user (default: root)\\n- `--ttl `: Key lifetime (default: 1hr)\\n- `--type `: Key type (default: dynamic)\\n- `--keep`: Don\'t revoke after disconnect **Examples**: ```bash\\n# Quick connection\\nssh connect server.example.com # Custom user\\nssh connect server.example.com --user deploy # Keep key active after disconnect\\nssh connect server.example.com --keep\\n```plaintext ### ssh stats Show SSH key statistics. **Syntax**: ```bash\\nssh stats\\n```plaintext **Example Output**: ```plaintext\\nSSH Key Statistics: Total generated: 42 Active keys: 10 Expired keys: 32 Keys by type: dynamic: 35 otp: 5 certificate: 2 Last cleanup: 2024-01-01T12:00:00Z Cleaned keys: 5\\n```plaintext ### ssh cleanup Manually trigger cleanup of expired keys. **Syntax**: ```bash\\nssh cleanup\\n```plaintext ### ssh test Run a quick test of the SSH key system. **Syntax**: ```bash\\nssh test [--user ]\\n```plaintext **Example**: ```bash\\nssh test server.example.com --user root\\n```plaintext ### ssh help Show help information. 
**Syntax**: ```bash\\nssh help\\n```plaintext ## Duration Formats The `--ttl` option accepts various duration formats: | Format | Example | Meaning |\\n|--------|---------|---------|\\n| Minutes | `30min` | 30 minutes |\\n| Hours | `2hr` | 2 hours |\\n| Mixed | `1hr 30min` | 1.5 hours |\\n| Seconds | `3600sec` | 1 hour | ## Working with Private Keys ### Saving Private Keys When you generate a key, save the private key immediately: ```bash\\n# Generate and save to file\\nssh generate-key server.example.com | get private_key | save -f ~/.ssh/temp_key\\nchmod 600 ~/.ssh/temp_key # Use the key\\nssh -i ~/.ssh/temp_key root@server.example.com # Cleanup\\nrm ~/.ssh/temp_key\\n```plaintext ### Using SSH Agent Add the temporary key to your SSH agent: ```bash\\n# Generate key and extract private key\\nssh generate-key server.example.com | get private_key | save -f /tmp/temp_key\\nchmod 600 /tmp/temp_key # Add to agent\\nssh-add /tmp/temp_key # Connect (agent provides the key automatically)\\nssh root@server.example.com # Remove from agent\\nssh-add -d /tmp/temp_key\\nrm /tmp/temp_key\\n```plaintext ## Troubleshooting ### Key Deployment Fails **Problem**: `ssh deploy-key` returns error **Solutions**: 1. Check SSH connectivity to server: ```bash ssh root@server.example.com Verify provisioning key is configured: echo $PROVISIONING_SSH_KEY Check server SSH daemon: ssh root@server.example.com \\"systemctl status sshd\\"","breadcrumbs":"SSH Temporal Keys User Guide » Generate and Connect with Temporary Key","id":"1606","title":"Generate and Connect with Temporary Key"},"1607":{"body":"Problem : SSH connection fails with \\"Permission denied (publickey)\\" Solutions : Verify key was deployed: ssh list-keys | where id == \\"\\" Check key hasn\'t expired: ssh get-key | get expires_at Verify private key permissions: chmod 600 /path/to/private/key","breadcrumbs":"SSH Temporal Keys User Guide » Private Key Not Working","id":"1607","title":"Private Key Not Working"},"1608":{"body":"Problem : Expired keys not being removed Solutions : Check orchestrator is running: curl http://localhost:9090/health Trigger manual cleanup: ssh cleanup Check orchestrator logs: tail -f ./data/orchestrator.log | grep SSH","breadcrumbs":"SSH Temporal Keys User Guide » Cleanup Not Running","id":"1608","title":"Cleanup Not Running"},"1609":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Best Practices","id":"1609","title":"Best Practices"},"161":{"body":"Create a basic infrastructure configuration: # Generate infrastructure template\\nprovisioning generate infra --new my-infra # This creates: workspace/infra/my-infra/\\n# - config.toml (infrastructure settings)\\n# - settings.k (KCL configuration)","breadcrumbs":"First Deployment » Step 1: Configure Infrastructure","id":"161","title":"Step 1: Configure Infrastructure"},"1610":{"body":"Short TTLs : Use the shortest TTL that works for your task ssh connect server.example.com --ttl 30min Immediate Revocation : Revoke keys when you\'re done ssh revoke-key Private Key Handling : Never share or commit private keys # Save to temp location, delete after use\\nssh generate-key server.example.com | get private_key | save -f /tmp/key\\n# ... 
use key ...\\nrm /tmp/key","breadcrumbs":"SSH Temporal Keys User Guide » Security","id":"1610","title":"Security"},"1611":{"body":"Automated Deployments : Generate key in CI/CD #!/bin/bash\\nKEY_ID=$(ssh generate-key prod.example.com --ttl 1hr | get id)\\nssh deploy-key $KEY_ID\\n# Run deployment\\nansible-playbook deploy.yml\\nssh revoke-key $KEY_ID Interactive Use : Use ssh connect for quick access ssh connect dev.example.com Monitoring : Check statistics regularly ssh stats","breadcrumbs":"SSH Temporal Keys User Guide » Workflow Integration","id":"1611","title":"Workflow Integration"},"1612":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Advanced Usage","id":"1612","title":"Advanced Usage"},"1613":{"body":"If your organization uses HashiCorp Vault: CA Mode (Recommended) # Generate CA-signed certificate\\nssh generate-key server.example.com --type ca --principal admin --ttl 1hr # Vault signs your public key\\n# Server must trust Vault CA certificate\\n```plaintext **Setup** (one-time): ```bash\\n# On servers, add to /etc/ssh/sshd_config:\\nTrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem # Get Vault CA public key:\\nvault read -field=public_key ssh/config/ca | \\\\ sudo tee /etc/ssh/trusted-user-ca-keys.pem # Restart SSH:\\nsudo systemctl restart sshd\\n```plaintext #### OTP Mode ```bash\\n# Generate one-time password\\nssh generate-key server.example.com --type otp --ip 192.168.1.100 # Use the OTP to connect (single use only)\\n```plaintext ### Scripting Use in scripts for automated operations: ```nushell\\n# deploy.nu\\ndef deploy [target: string] { let key = (ssh generate-key $target --ttl 1hr) ssh deploy-key $key.id # Run deployment try { ssh $\\"root@($target)\\" \\"bash /path/to/deploy.sh\\" } catch { print \\"Deployment failed\\" } # Always cleanup ssh revoke-key $key.id\\n}\\n```plaintext ## API Integration For programmatic access, use the REST API: ```bash\\n# Generate key\\ncurl -X POST http://localhost:9090/api/v1/ssh/generate \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"key_type\\": \\"dynamickeypair\\", \\"user\\": \\"root\\", \\"target_server\\": \\"server.example.com\\", \\"ttl_seconds\\": 3600 }\' # Deploy key\\ncurl -X POST http://localhost:9090/api/v1/ssh/{key_id}/deploy # List keys\\ncurl http://localhost:9090/api/v1/ssh/keys # Get stats\\ncurl http://localhost:9090/api/v1/ssh/stats\\n```plaintext ## FAQ **Q: Can I use the same key for multiple servers?**\\nA: Currently, each key is tied to a specific server. Multi-server support is planned. **Q: What happens if the orchestrator crashes?**\\nA: Keys in memory are lost, but keys already deployed to servers remain until their expiration time. **Q: Can I extend the TTL of an existing key?**\\nA: No, you must generate a new key. This is by design for security. **Q: What\'s the maximum TTL?**\\nA: Configurable by admin, default maximum is 24 hours. **Q: Are private keys stored anywhere?**\\nA: Private keys exist only in memory during generation and are shown once to the user. They are never written to disk by the system. **Q: What happens if cleanup fails?**\\nA: The key remains in authorized_keys until the next cleanup run. You can trigger manual cleanup with `ssh cleanup`. **Q: Can I use this with non-root users?**\\nA: Yes, use `--user ` when generating the key. **Q: How do I know when my key will expire?**\\nA: Use `ssh get-key ` to see the exact expiration timestamp. ## Support For issues or questions: 1. Check orchestrator logs: `tail -f ./data/orchestrator.log`\\n2. 
Run diagnostics: `ssh stats`\\n3. Test connectivity: `ssh test server.example.com`\\n4. Review documentation: `SSH_KEY_MANAGEMENT.md` ## See Also - **Architecture**: `SSH_KEY_MANAGEMENT.md`\\n- **Implementation**: `SSH_IMPLEMENTATION_SUMMARY.md`\\n- **Configuration**: `config/ssh-config.toml.example`","breadcrumbs":"SSH Temporal Keys User Guide » Vault Integration","id":"1613","title":"Vault Integration"},"1614":{"body":"Version : 1.0.0 Last Updated : 2025-10-09 Target Audience : Developers, DevOps Engineers, System Administrators","breadcrumbs":"Plugin Integration Guide » Nushell Plugin Integration Guide","id":"1614","title":"Nushell Plugin Integration Guide"},"1615":{"body":"Overview Why Native Plugins? Prerequisites Installation Quick Start (5 Minutes) Authentication Plugin (nu_plugin_auth) KMS Plugin (nu_plugin_kms) Orchestrator Plugin (nu_plugin_orchestrator) Integration Examples Best Practices Troubleshooting Migration Guide Advanced Configuration Security Considerations FAQ","breadcrumbs":"Plugin Integration Guide » Table of Contents","id":"1615","title":"Table of Contents"},"1616":{"body":"The Provisioning Platform provides three native Nushell plugins that dramatically improve performance and user experience compared to traditional HTTP API calls: Plugin Purpose Performance Gain nu_plugin_auth JWT authentication, MFA, session management 20% faster nu_plugin_kms Encryption/decryption with multiple KMS backends 10x faster nu_plugin_orchestrator Orchestrator operations without HTTP overhead 50x faster","breadcrumbs":"Plugin Integration Guide » Overview","id":"1616","title":"Overview"},"1617":{"body":"Traditional HTTP Flow:\\nUser Command → HTTP Request → Network → Server Processing → Response → Parse JSON Total: ~50-100ms per operation Plugin Flow:\\nUser Command → Direct Rust Function Call → Return Nushell Data Structure Total: ~1-10ms per operation\\n```plaintext ### Key Features ✅ **Performance**: 10-50x faster than HTTP API\\n✅ **Type Safety**: Full Nushell type system integration\\n✅ **Pipeline Support**: Native Nushell data structures\\n✅ **Offline Capability**: KMS and orchestrator work without network\\n✅ **OS Integration**: Native keyring for secure token storage\\n✅ **Graceful Fallback**: HTTP still available if plugins not installed --- ## Why Native Plugins? 
### Performance Comparison Real-world benchmarks from production workload: | Operation | HTTP API | Plugin | Improvement | Speedup |\\n|-----------|----------|--------|-------------|---------|\\n| **KMS Encrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |\\n| **KMS Decrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |\\n| **KMS Encrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |\\n| **KMS Decrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |\\n| **Orchestrator Status** | ~30ms | ~1ms | -29ms | **30x** |\\n| **Orchestrator Tasks List** | ~50ms | ~5ms | -45ms | **10x** |\\n| **Orchestrator Validate** | ~100ms | ~10ms | -90ms | **10x** |\\n| **Auth Login** | ~100ms | ~80ms | -20ms | 1.25x |\\n| **Auth Verify** | ~50ms | ~10ms | -40ms | **5x** |\\n| **Auth MFA Verify** | ~80ms | ~60ms | -20ms | 1.3x | ### Use Case: Batch Processing **Scenario**: Encrypt 100 configuration files ```nushell\\n# HTTP API approach\\nls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file) }\\n} | save encrypted/\\n# Total time: ~5 seconds (50ms × 100) # Plugin approach\\nls configs/*.yaml | each { |file| kms encrypt (open $file) --backend rustyvault\\n} | save encrypted/\\n# Total time: ~0.5 seconds (5ms × 100)\\n# Result: 10x faster\\n```plaintext ### Developer Experience Benefits **1. Native Nushell Integration** ```nushell\\n# HTTP: Parse JSON, check status codes\\nlet result = http post http://localhost:9998/encrypt { data: \\"secret\\" }\\nif $result.status == \\"success\\" { $result.encrypted\\n} else { error make { msg: $result.error }\\n} # Plugin: Direct return values\\nkms encrypt \\"secret\\"\\n# Returns encrypted string directly, errors use Nushell\'s error system\\n```plaintext **2. Pipeline Friendly** ```nushell\\n# HTTP: Requires wrapping, JSON parsing\\n[\\"secret1\\", \\"secret2\\"] | each { |s| (http post http://localhost:9998/encrypt { data: $s }).encrypted\\n} # Plugin: Natural pipeline flow\\n[\\"secret1\\", \\"secret2\\"] | each { |s| kms encrypt $s }\\n```plaintext **3. 
Tab Completion** ```nushell\\n# All plugin commands have full tab completion\\nkms \\n# → encrypt, decrypt, generate-key, status, backends kms encrypt --\\n# → --backend, --key, --context\\n```plaintext --- ## Prerequisites ### Required Software | Software | Minimum Version | Purpose |\\n|----------|----------------|---------|\\n| **Nushell** | 0.107.1 | Shell and plugin runtime |\\n| **Rust** | 1.75+ | Building plugins from source |\\n| **Cargo** | (included with Rust) | Build tool | ### Optional Dependencies | Software | Purpose | Platform |\\n|----------|---------|----------|\\n| **gnome-keyring** | Secure token storage | Linux |\\n| **kwallet** | Secure token storage | Linux (KDE) |\\n| **age** | Age encryption backend | All |\\n| **RustyVault** | High-performance KMS | All | ### Platform Support | Platform | Status | Notes |\\n|----------|--------|-------|\\n| **macOS** | ✅ Full | Keychain integration |\\n| **Linux** | ✅ Full | Requires keyring service |\\n| **Windows** | ✅ Full | Credential Manager integration |\\n| **FreeBSD** | ⚠️ Partial | No keyring integration | --- ## Installation ### Step 1: Clone or Navigate to Plugin Directory ```bash\\ncd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins\\n```plaintext ### Step 2: Build All Plugins ```bash\\n# Build in release mode (optimized for performance)\\ncargo build --release --all # Or build individually\\ncargo build --release -p nu_plugin_auth\\ncargo build --release -p nu_plugin_kms\\ncargo build --release -p nu_plugin_orchestrator\\n```plaintext **Expected output:** ```plaintext Compiling nu_plugin_auth v0.1.0 Compiling nu_plugin_kms v0.1.0 Compiling nu_plugin_orchestrator v0.1.0 Finished release [optimized] target(s) in 2m 15s\\n```plaintext ### Step 3: Register Plugins with Nushell ```bash\\n# Register all three plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # On macOS, full paths:\\nplugin add $PWD/target/release/nu_plugin_auth\\nplugin add $PWD/target/release/nu_plugin_kms\\nplugin add $PWD/target/release/nu_plugin_orchestrator\\n```plaintext ### Step 4: Verify Installation ```bash\\n# List registered plugins\\nplugin list | where name =~ \\"auth|kms|orch\\" # Test each plugin\\nauth --help\\nkms --help\\norch --help\\n```plaintext **Expected output:** ```plaintext\\n╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮\\n│ # │ name │ version │ filename │\\n├───┼─────────────────────────┼─────────┼───────────────────────────────────┤\\n│ 0 │ nu_plugin_auth │ 0.1.0 │ .../nu_plugin_auth │\\n│ 1 │ nu_plugin_kms │ 0.1.0 │ .../nu_plugin_kms │\\n│ 2 │ nu_plugin_orchestrator │ 0.1.0 │ .../nu_plugin_orchestrator │\\n╰───┴─────────────────────────┴─────────┴───────────────────────────────────╯\\n```plaintext ### Step 5: Configure Environment (Optional) ```bash\\n# Add to ~/.config/nushell/env.nu\\n$env.RUSTYVAULT_ADDR = \\"http://localhost:8200\\"\\n$env.RUSTYVAULT_TOKEN = \\"your-vault-token\\"\\n$env.CONTROL_CENTER_URL = \\"http://localhost:3000\\"\\n$env.ORCHESTRATOR_DATA_DIR = \\"/opt/orchestrator/data\\"\\n```plaintext --- ## Quick Start (5 Minutes) ### 1. 
Authentication Workflow ```nushell\\n# Login (password prompted securely)\\nauth login admin\\n# ✓ Login successful\\n# User: admin\\n# Role: Admin\\n# Expires: 2025-10-09T14:30:00Z # Verify session\\nauth verify\\n# {\\n# \\"active\\": true,\\n# \\"user\\": \\"admin\\",\\n# \\"role\\": \\"Admin\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\"\\n# } # Enroll in MFA (optional but recommended)\\nauth mfa enroll totp\\n# QR code displayed, save backup codes # Verify MFA\\nauth mfa verify --code 123456\\n# ✓ MFA verification successful # Logout\\nauth logout\\n# ✓ Logged out successfully\\n```plaintext ### 2. KMS Operations ```nushell\\n# Encrypt data\\nkms encrypt \\"my secret data\\"\\n# vault:v1:8GawgGuP... # Decrypt data\\nkms decrypt \\"vault:v1:8GawgGuP...\\"\\n# my secret data # Check available backends\\nkms status\\n# {\\n# \\"backend\\": \\"rustyvault\\",\\n# \\"status\\": \\"healthy\\",\\n# \\"url\\": \\"http://localhost:8200\\"\\n# } # Encrypt with specific backend\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxx\\n```plaintext ### 3. Orchestrator Operations ```nushell\\n# Check orchestrator status (no HTTP call)\\norch status\\n# {\\n# \\"active_tasks\\": 5,\\n# \\"completed_tasks\\": 120,\\n# \\"health\\": \\"healthy\\"\\n# } # Validate workflow\\norch validate workflows/deploy.k\\n# {\\n# \\"valid\\": true,\\n# \\"workflow\\": { \\"name\\": \\"deploy_k8s\\", \\"operations\\": 5 }\\n# } # List running tasks\\norch tasks --status running\\n# [ { \\"task_id\\": \\"task_123\\", \\"name\\": \\"deploy_k8s\\", \\"progress\\": 45 } ]\\n```plaintext ### 4. Combined Workflow ```nushell\\n# Complete authenticated deployment pipeline\\nauth login admin | if $in.success { auth verify } | if $in.active { orch validate workflows/production.k | if $in.valid { kms encrypt (open secrets.yaml | to json) | save production-secrets.enc } }\\n# ✓ Pipeline completed successfully\\n```plaintext --- ## Authentication Plugin (nu_plugin_auth) The authentication plugin manages JWT-based authentication, MFA enrollment/verification, and session management with OS-native keyring integration. ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `auth login` | Login and store JWT | `auth login admin` |\\n| `auth logout` | Logout and clear tokens | `auth logout` |\\n| `auth verify` | Verify current session | `auth verify` |\\n| `auth sessions` | List active sessions | `auth sessions` |\\n| `auth mfa enroll` | Enroll in MFA | `auth mfa enroll totp` |\\n| `auth mfa verify` | Verify MFA code | `auth mfa verify --code 123456` | ### Command Reference #### `auth login [password]` Login to provisioning platform and store JWT tokens securely in OS keyring. 
**Arguments:** - `username` (required): Username for authentication\\n- `password` (optional): Password (prompted if not provided) **Flags:** - `--url `: Control center URL (default: `http://localhost:3000`)\\n- `--password `: Password (alternative to positional argument) **Examples:** ```nushell\\n# Interactive password prompt (recommended)\\nauth login admin\\n# Password: ••••••••\\n# ✓ Login successful\\n# User: admin\\n# Role: Admin\\n# Expires: 2025-10-09T14:30:00Z # Password in command (not recommended for production)\\nauth login admin mypassword # Custom control center URL\\nauth login admin --url https://control-center.example.com # Pipeline usage\\nlet creds = { username: \\"admin\\", password: (input --suppress-output \\"Password: \\") }\\nauth login $creds.username $creds.password\\n```plaintext **Token Storage Locations:** - **macOS**: Keychain Access (`login` keychain)\\n- **Linux**: Secret Service API (gnome-keyring, kwallet)\\n- **Windows**: Windows Credential Manager **Security Notes:** - Tokens encrypted at rest by OS\\n- Requires user authentication to access (macOS Touch ID, Linux password)\\n- Never stored in plain text files #### `auth logout` Logout from current session and remove stored tokens from keyring. **Examples:** ```nushell\\n# Simple logout\\nauth logout\\n# ✓ Logged out successfully # Conditional logout\\nif (auth verify | get active) { auth logout echo \\"Session terminated\\"\\n} # Logout all sessions (requires admin role)\\nauth sessions | each { |sess| auth logout --session-id $sess.session_id\\n}\\n```plaintext #### `auth verify` Verify current session status and check token validity. **Returns:** - `active` (bool): Whether session is active\\n- `user` (string): Username\\n- `role` (string): User role\\n- `expires_at` (datetime): Token expiration\\n- `mfa_verified` (bool): MFA verification status **Examples:** ```nushell\\n# Check if logged in\\nauth verify\\n# {\\n# \\"active\\": true,\\n# \\"user\\": \\"admin\\",\\n# \\"role\\": \\"Admin\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\",\\n# \\"mfa_verified\\": true\\n# } # Pipeline usage\\nif (auth verify | get active) { echo \\"✓ Authenticated\\"\\n} else { auth login admin\\n} # Check expiration\\nlet session = auth verify\\nif ($session.expires_at | into datetime) < (date now) { echo \\"Session expired, re-authenticating...\\" auth login $session.user\\n}\\n```plaintext #### `auth sessions` List all active sessions for current user. **Examples:** ```nushell\\n# List all sessions\\nauth sessions\\n# [\\n# {\\n# \\"session_id\\": \\"sess_abc123\\",\\n# \\"created_at\\": \\"2025-10-09T12:00:00Z\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\",\\n# \\"ip_address\\": \\"192.168.1.100\\",\\n# \\"user_agent\\": \\"nushell/0.107.1\\"\\n# }\\n# ] # Filter recent sessions (last hour)\\nauth sessions | where created_at > ((date now) - 1hr) # Find sessions by IP\\nauth sessions | where ip_address =~ \\"192.168\\" # Count active sessions\\nauth sessions | length\\n```plaintext #### `auth mfa enroll ` Enroll in Multi-Factor Authentication (TOTP or WebAuthn). 
**Arguments:** - `type` (required): MFA type (`totp` or `webauthn`) **TOTP Enrollment:** ```nushell\\nauth mfa enroll totp\\n# ✓ TOTP enrollment initiated\\n#\\n# Scan this QR code with your authenticator app:\\n#\\n# ████ ▄▄▄▄▄ █▀█ █▄▀▀▀▄ ▄▄▄▄▄ ████\\n# ████ █ █ █▀▀▀█▄ ▀▀█ █ █ ████\\n# ████ █▄▄▄█ █ █▀▄ ▀▄▄█ █▄▄▄█ ████\\n# (QR code continues...)\\n#\\n# Or enter manually:\\n# Secret: JBSWY3DPEHPK3PXP\\n# URL: otpauth://totp/Provisioning:admin?secret=JBSWY3DPEHPK3PXP&issuer=Provisioning\\n#\\n# Backup codes (save securely):\\n# 1. ABCD-EFGH-IJKL\\n# 2. MNOP-QRST-UVWX\\n# 3. YZAB-CDEF-GHIJ\\n# (8 more codes...)\\n```plaintext **WebAuthn Enrollment:** ```nushell\\nauth mfa enroll webauthn\\n# ✓ WebAuthn enrollment initiated\\n#\\n# Insert your security key and touch the button...\\n# (waiting for device interaction)\\n#\\n# ✓ Security key registered successfully\\n# Device: YubiKey 5 NFC\\n# Created: 2025-10-09T13:00:00Z\\n```plaintext **Supported Authenticator Apps:** - Google Authenticator\\n- Microsoft Authenticator\\n- Authy\\n- 1Password\\n- Bitwarden **Supported Hardware Keys:** - YubiKey (all models)\\n- Titan Security Key\\n- Feitian ePass\\n- macOS Touch ID\\n- Windows Hello #### `auth mfa verify --code ` Verify MFA code (TOTP or backup code). **Flags:** - `--code ` (required): 6-digit TOTP code or backup code **Examples:** ```nushell\\n# Verify TOTP code\\nauth mfa verify --code 123456\\n# ✓ MFA verification successful # Verify backup code\\nauth mfa verify --code ABCD-EFGH-IJKL\\n# ✓ MFA verification successful (backup code used)\\n# Warning: This backup code cannot be used again # Pipeline usage\\nlet code = input \\"MFA code: \\"\\nauth mfa verify --code $code\\n```plaintext **Error Cases:** ```nushell\\n# Invalid code\\nauth mfa verify --code 999999\\n# Error: Invalid MFA code\\n# → Verify time synchronization on your device # Rate limited\\nauth mfa verify --code 123456\\n# Error: Too many failed attempts\\n# → Wait 5 minutes before trying again # No MFA enrolled\\nauth mfa verify --code 123456\\n# Error: MFA not enrolled for this user\\n# → Run: auth mfa enroll totp\\n```plaintext ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `USER` | Default username | Current OS user |\\n| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:3000` |\\n| `AUTH_KEYRING_SERVICE` | Keyring service name | `provisioning-auth` | ### Troubleshooting Authentication **\\"No active session\\"** ```nushell\\n# Solution: Login first\\nauth login \\n```plaintext **\\"Keyring error\\" (macOS)** ```bash\\n# Check Keychain Access permissions\\n# System Preferences → Security & Privacy → Privacy → Full Disk Access\\n# Add: /Applications/Nushell.app (or /usr/local/bin/nu) # Or grant access manually\\nsecurity unlock-keychain ~/Library/Keychains/login.keychain-db\\n```plaintext **\\"Keyring error\\" (Linux)** ```bash\\n# Install keyring service\\nsudo apt install gnome-keyring # Ubuntu/Debian\\nsudo dnf install gnome-keyring # Fedora\\nsudo pacman -S gnome-keyring # Arch # Or use KWallet (KDE)\\nsudo apt install kwalletmanager # Start keyring daemon\\neval $(gnome-keyring-daemon --start)\\nexport $(gnome-keyring-daemon --start --components=secrets)\\n```plaintext **\\"MFA verification failed\\"** ```nushell\\n# Check time synchronization (TOTP requires accurate time)\\n# macOS:\\nsudo sntp -sS time.apple.com # Linux:\\nsudo ntpdate pool.ntp.org\\n# Or\\nsudo systemctl restart systemd-timesyncd # Use backup code if TOTP not working\\nauth mfa 
verify --code ABCD-EFGH-IJKL\\n```plaintext --- ## KMS Plugin (nu_plugin_kms) The KMS plugin provides high-performance encryption and decryption using multiple backend providers. ### Supported Backends | Backend | Performance | Use Case | Setup Complexity |\\n|---------|------------|----------|------------------|\\n| **rustyvault** | ⚡ Very Fast (~5ms) | Production KMS | Medium |\\n| **age** | ⚡ Very Fast (~3ms) | Local development | Low |\\n| **cosmian** | 🐢 Moderate (~30ms) | Cloud KMS | Medium |\\n| **aws** | 🐢 Moderate (~50ms) | AWS environments | Medium |\\n| **vault** | 🐢 Moderate (~40ms) | Enterprise KMS | High | ### Backend Selection Guide **Choose `rustyvault` when:** - ✅ Running in production with high throughput requirements\\n- ✅ Need ~5ms encryption/decryption latency\\n- ✅ Have RustyVault server deployed\\n- ✅ Require key rotation and versioning **Choose `age` when:** - ✅ Developing locally without external dependencies\\n- ✅ Need simple file encryption\\n- ✅ Want ~3ms latency\\n- ❌ Don\'t need centralized key management **Choose `cosmian` when:** - ✅ Using Cosmian KMS service\\n- ✅ Need cloud-based key management\\n- ⚠️ Can accept ~30ms latency **Choose `aws` when:** - ✅ Deployed on AWS infrastructure\\n- ✅ Using AWS IAM for access control\\n- ✅ Need AWS KMS integration\\n- ⚠️ Can accept ~50ms latency **Choose `vault` when:** - ✅ Using HashiCorp Vault enterprise\\n- ✅ Need advanced policy management\\n- ✅ Require audit trails\\n- ⚠️ Can accept ~40ms latency ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `kms encrypt` | Encrypt data | `kms encrypt \\"secret\\"` |\\n| `kms decrypt` | Decrypt data | `kms decrypt \\"vault:v1:...\\"` |\\n| `kms generate-key` | Generate DEK | `kms generate-key --spec AES256` |\\n| `kms status` | Backend status | `kms status` | ### Command Reference #### `kms encrypt <data> [--backend <backend>]` Encrypt data using specified KMS backend. **Arguments:** - `data` (required): Data to encrypt (string or binary) **Flags:** - `--backend <backend>`: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)\\n- `--key <key>`: Key ID or recipient (backend-specific)\\n- `--context <context>`: Additional authenticated data (AAD) **Examples:** ```nushell\\n# Auto-detect backend from environment\\nkms encrypt \\"secret configuration data\\"\\n# vault:v1:8GawgGuP+emDKX5q... # RustyVault backend\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main\\n# vault:v1:abc123def456... # Age backend (local encryption)\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxxxx\\n# -----BEGIN AGE ENCRYPTED FILE-----\\n# YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+...\\n# -----END AGE ENCRYPTED FILE----- # AWS KMS\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning\\n# AQICAHhwbGF0Zm9ybS1wcm92aXNpb25p... # With context (AAD for additional security)\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main --context \\"user=admin,env=production\\" # Encrypt file contents\\nkms encrypt (open config.yaml) --backend rustyvault | save config.yaml.enc # Encrypt multiple files\\nls configs/*.yaml | each { |file| kms encrypt (open $file.name) --backend age | save $\\"encrypted/($file.name).enc\\"\\n}\\n```plaintext **Output Formats:** - **RustyVault**: `vault:v1:base64_ciphertext`\\n- **Age**: `-----BEGIN AGE ENCRYPTED FILE-----...-----END AGE ENCRYPTED FILE-----`\\n- **AWS**: `base64_aws_kms_ciphertext`\\n- **Cosmian**: `cosmian:v1:base64_ciphertext` #### `kms decrypt <encrypted> [--backend <backend>]` Decrypt KMS-encrypted data. 
**Arguments:** - `encrypted` (required): Encrypted data (detects format automatically) **Flags:** - `--backend `: KMS backend (auto-detected from format if not specified)\\n- `--context `: Additional authenticated data (must match encryption context) **Examples:** ```nushell\\n# Auto-detect backend from format\\nkms decrypt \\"vault:v1:8GawgGuP...\\"\\n# secret configuration data # Explicit backend\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault # Age decryption\\nkms decrypt \\"-----BEGIN AGE ENCRYPTED FILE-----...\\"\\n# (uses AGE_IDENTITY from environment) # With context (must match encryption context)\\nkms decrypt \\"vault:v1:abc123...\\" --context \\"user=admin,env=production\\" # Decrypt file\\nkms decrypt (open config.yaml.enc) | save config.yaml # Decrypt multiple files\\nls encrypted/*.enc | each { |file| kms decrypt (open $file.name) | save $\\"configs/(($file.name | path basename) | str replace \'.enc\' \'\')\\"\\n} # Pipeline decryption\\nopen secrets.json | get database_password_enc | kms decrypt | str trim | psql --dbname mydb --password\\n```plaintext **Error Cases:** ```nushell\\n# Invalid ciphertext\\nkms decrypt \\"invalid_data\\"\\n# Error: Invalid ciphertext format\\n# → Verify data was encrypted with KMS # Context mismatch\\nkms decrypt \\"vault:v1:abc...\\" --context \\"wrong=context\\"\\n# Error: Authentication failed (AAD mismatch)\\n# → Verify encryption context matches # Backend unavailable\\nkms decrypt \\"vault:v1:abc...\\"\\n# Error: Failed to connect to RustyVault at http://localhost:8200\\n# → Check RustyVault is running: curl http://localhost:8200/v1/sys/health\\n```plaintext #### `kms generate-key [--spec ]` Generate data encryption key (DEK) using KMS envelope encryption. **Flags:** - `--spec `: Key specification (`AES128` or `AES256`, default: `AES256`)\\n- `--backend `: KMS backend **Examples:** ```nushell\\n# Generate AES-256 key\\nkms generate-key\\n# {\\n# \\"plaintext\\": \\"rKz3N8xPq...\\", # base64-encoded key\\n# \\"ciphertext\\": \\"vault:v1:...\\", # encrypted DEK\\n# \\"spec\\": \\"AES256\\"\\n# } # Generate AES-128 key\\nkms generate-key --spec AES128 # Use in envelope encryption pattern\\nlet dek = kms generate-key\\nlet encrypted_data = ($data | openssl enc -aes-256-cbc -K $dek.plaintext)\\n{ data: $encrypted_data, encrypted_key: $dek.ciphertext\\n} | save secure_data.json # Later, decrypt:\\nlet envelope = open secure_data.json\\nlet dek = kms decrypt $envelope.encrypted_key\\n$envelope.data | openssl enc -d -aes-256-cbc -K $dek\\n```plaintext **Use Cases:** - Envelope encryption (encrypt large data locally, protect DEK with KMS)\\n- Database field encryption\\n- File encryption with key wrapping #### `kms status` Show KMS backend status, configuration, and health. **Examples:** ```nushell\\n# Show current backend status\\nkms status\\n# {\\n# \\"backend\\": \\"rustyvault\\",\\n# \\"status\\": \\"healthy\\",\\n# \\"url\\": \\"http://localhost:8200\\",\\n# \\"mount_point\\": \\"transit\\",\\n# \\"version\\": \\"0.1.0\\",\\n# \\"latency_ms\\": 5\\n# } # Check all configured backends\\nkms status --all\\n# [\\n# { \\"backend\\": \\"rustyvault\\", \\"status\\": \\"healthy\\", ... },\\n# { \\"backend\\": \\"age\\", \\"status\\": \\"available\\", ... 
},\\n# { \\"backend\\": \\"aws\\", \\"status\\": \\"unavailable\\", \\"error\\": \\"...\\" }\\n# ] # Filter to specific backend\\nkms status | where backend == \\"rustyvault\\" # Health check in automation\\nif (kms status | get status) == \\"healthy\\" { echo \\"✓ KMS operational\\"\\n} else { error make { msg: \\"KMS unhealthy\\" }\\n}\\n```plaintext ### Backend Configuration #### RustyVault Backend ```bash\\n# Environment variables\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"hvs.xxxxxxxxxxxxx\\"\\nexport RUSTYVAULT_MOUNT=\\"transit\\" # Transit engine mount point\\nexport RUSTYVAULT_KEY=\\"provisioning-main\\" # Default key name\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main\\n```plaintext **Setup RustyVault:** ```bash\\n# Start RustyVault\\nrustyvault server -dev # Enable transit engine\\nrustyvault secrets enable transit # Create encryption key\\nrustyvault write -f transit/keys/provisioning-main\\n```plaintext #### Age Backend ```bash\\n# Generate Age keypair\\nage-keygen -o ~/.age/key.txt # Environment variables\\nexport AGE_IDENTITY=\\"$HOME/.age/key.txt\\" # Private key\\nexport AGE_RECIPIENT=\\"age1xxxxxxxxx\\" # Public key (from key.txt)\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend age\\nkms decrypt (open file.enc) --backend age\\n```plaintext #### AWS KMS Backend ```bash\\n# AWS credentials\\nexport AWS_REGION=\\"us-east-1\\"\\nexport AWS_ACCESS_KEY_ID=\\"AKIAXXXXX\\"\\nexport AWS_SECRET_ACCESS_KEY=\\"xxxxx\\" # KMS configuration\\nexport AWS_KMS_KEY_ID=\\"alias/provisioning\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning\\n```plaintext **Setup AWS KMS:** ```bash\\n# Create KMS key\\naws kms create-key --description \\"Provisioning Platform\\" # Create alias\\naws kms create-alias --alias-name alias/provisioning --target-key-id # Grant permissions\\naws kms create-grant --key-id --grantee-principal \\\\ --operations Encrypt Decrypt GenerateDataKey\\n```plaintext #### Cosmian Backend ```bash\\n# Cosmian KMS configuration\\nexport KMS_HTTP_URL=\\"http://localhost:9998\\"\\nexport KMS_HTTP_BACKEND=\\"cosmian\\"\\nexport COSMIAN_API_KEY=\\"your-api-key\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend cosmian\\n```plaintext #### Vault Backend (HashiCorp) ```bash\\n# Vault configuration\\nexport VAULT_ADDR=\\"https://vault.example.com:8200\\"\\nexport VAULT_TOKEN=\\"hvs.xxxxxxxxxxxxx\\"\\nexport VAULT_MOUNT=\\"transit\\"\\nexport VAULT_KEY=\\"provisioning\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend vault --key provisioning\\n```plaintext ### Performance Benchmarks **Test Setup:** - Data size: 1KB\\n- Iterations: 1000\\n- Hardware: Apple M1, 16GB RAM\\n- Network: localhost **Results:** | Backend | Encrypt (avg) | Decrypt (avg) | Throughput (ops/sec) |\\n|---------|---------------|---------------|----------------------|\\n| RustyVault | 4.8ms | 5.1ms | ~200 |\\n| Age | 2.9ms | 3.2ms | ~320 |\\n| Cosmian HTTP | 31ms | 29ms | ~33 |\\n| AWS KMS | 52ms | 48ms | ~20 |\\n| Vault | 38ms | 41ms | ~25 | **Scaling Test (1000 operations):** ```nushell\\n# RustyVault: ~5 seconds\\n0..1000 | each { |_| kms encrypt \\"data\\" --backend rustyvault } | length\\n# Age: ~3 seconds\\n0..1000 | each { |_| kms encrypt \\"data\\" --backend age } | length\\n```plaintext ### Troubleshooting KMS **\\"RustyVault connection failed\\"** ```bash\\n# Check RustyVault is running\\ncurl 
http://localhost:8200/v1/sys/health\\n# Expected: { \\"initialized\\": true, \\"sealed\\": false } # Check environment\\necho $env.RUSTYVAULT_ADDR\\necho $env.RUSTYVAULT_TOKEN # Test authentication\\ncurl -H \\"X-Vault-Token: $RUSTYVAULT_TOKEN\\" $RUSTYVAULT_ADDR/v1/sys/health\\n```plaintext **\\"Age encryption failed\\"** ```bash\\n# Check Age keys exist\\nls -la ~/.age/\\n# Expected: key.txt # Verify key format\\ncat ~/.age/key.txt | head -1\\n# Expected: # created: \\n# Line 2: # public key: age1xxxxx\\n# Line 3: AGE-SECRET-KEY-xxxxx # Extract public key\\nexport AGE_RECIPIENT=$(grep \\"public key:\\" ~/.age/key.txt | cut -d: -f2 | tr -d \' \')\\necho $AGE_RECIPIENT\\n```plaintext **\\"AWS KMS access denied\\"** ```bash\\n# Verify AWS credentials\\naws sts get-caller-identity\\n# Expected: Account, UserId, Arn # Check KMS key permissions\\naws kms describe-key --key-id alias/provisioning # Test encryption\\naws kms encrypt --key-id alias/provisioning --plaintext \\"test\\"\\n```plaintext --- ## Orchestrator Plugin (nu_plugin_orchestrator) The orchestrator plugin provides direct file-based access to orchestrator state, eliminating HTTP overhead for status queries and validation. ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `orch status` | Orchestrator status | `orch status` |\\n| `orch validate` | Validate workflow | `orch validate workflow.k` |\\n| `orch tasks` | List tasks | `orch tasks --status running` | ### Command Reference #### `orch status [--data-dir ]` Get orchestrator status from local files (no HTTP, ~1ms latency). **Flags:** - `--data-dir `: Data directory (default from `ORCHESTRATOR_DATA_DIR`) **Examples:** ```nushell\\n# Default data directory\\norch status\\n# {\\n# \\"active_tasks\\": 5,\\n# \\"completed_tasks\\": 120,\\n# \\"failed_tasks\\": 2,\\n# \\"pending_tasks\\": 3,\\n# \\"uptime\\": \\"2d 4h 15m\\",\\n# \\"health\\": \\"healthy\\"\\n# } # Custom data directory\\norch status --data-dir /opt/orchestrator/data # Monitor in loop\\nwhile true { clear orch status | table sleep 5sec\\n} # Alert on failures\\nif (orch status | get failed_tasks) > 0 { echo \\"⚠️ Failed tasks detected!\\"\\n}\\n```plaintext #### `orch validate [--strict]` Validate workflow KCL file syntax and structure. 
**Arguments:** - `workflow.k` (required): Path to KCL workflow file **Flags:** - `--strict`: Enable strict validation (warnings as errors) **Examples:** ```nushell\\n# Basic validation\\norch validate workflows/deploy.k\\n# {\\n# \\"valid\\": true,\\n# \\"workflow\\": {\\n# \\"name\\": \\"deploy_k8s_cluster\\",\\n# \\"version\\": \\"1.0.0\\",\\n# \\"operations\\": 5\\n# },\\n# \\"warnings\\": [],\\n# \\"errors\\": []\\n# } # Strict mode (warnings cause failure)\\norch validate workflows/deploy.k --strict\\n# Error: Validation failed with warnings:\\n# - Operation \'create_servers\': Missing retry_policy\\n# - Operation \'install_k8s\': Resource limits not specified # Validate all workflows\\nls workflows/*.k | each { |file| let result = orch validate $file.name if $result.valid { echo $\\"✓ ($file.name)\\" } else { echo $\\"✗ ($file.name): ($result.errors | str join \', \')\\" }\\n} # CI/CD validation\\ntry { orch validate workflow.k --strict echo \\"✓ Validation passed\\"\\n} catch { echo \\"✗ Validation failed\\" exit 1\\n}\\n```plaintext **Validation Checks:** - ✅ KCL syntax correctness\\n- ✅ Required fields present (`name`, `version`, `operations`)\\n- ✅ Dependency graph valid (no cycles)\\n- ✅ Resource limits within bounds\\n- ✅ Provider configurations valid\\n- ✅ Operation types supported\\n- ⚠️ Optional: Retry policies defined\\n- ⚠️ Optional: Resource limits specified #### `orch tasks [--status ] [--limit ]` List orchestrator tasks from local state. **Flags:** - `--status `: Filter by status (`pending`, `running`, `completed`, `failed`)\\n- `--limit `: Limit results (default: 100)\\n- `--data-dir `: Data directory **Examples:** ```nushell\\n# All tasks (last 100)\\norch tasks\\n# [\\n# {\\n# \\"task_id\\": \\"task_abc123\\",\\n# \\"name\\": \\"deploy_kubernetes\\",\\n# \\"status\\": \\"running\\",\\n# \\"priority\\": 5,\\n# \\"created_at\\": \\"2025-10-09T12:00:00Z\\",\\n# \\"progress\\": 45\\n# }\\n# ] # Running tasks only\\norch tasks --status running # Failed tasks (last 10)\\norch tasks --status failed --limit 10 # Pending high-priority tasks\\norch tasks --status pending | where priority > 7 # Monitor active tasks\\nwatch { orch tasks --status running | select name progress updated_at | table\\n} # Count tasks by status\\norch tasks | group-by status | each { |group| { status: $group.0, count: ($group.1 | length) }\\n}\\n```plaintext ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` | ### Performance Comparison | Operation | HTTP API | Plugin | Latency Reduction |\\n|-----------|----------|--------|-------------------|\\n| Status query | ~30ms | ~1ms | **97% faster** |\\n| Validate workflow | ~100ms | ~10ms | **90% faster** |\\n| List tasks | ~50ms | ~5ms | **90% faster** | **Use Case: CI/CD Pipeline** ```nushell\\n# HTTP approach (slow)\\nhttp get http://localhost:9090/tasks --status running | each { |task| http get $\\"http://localhost:9090/tasks/($task.id)\\" }\\n# Total: ~500ms for 10 tasks # Plugin approach (fast)\\norch tasks --status running\\n# Total: ~5ms for 10 tasks\\n# Result: 100x faster\\n```plaintext ### Troubleshooting Orchestrator **\\"Failed to read status\\"** ```bash\\n# Check data directory exists\\nls -la provisioning/platform/orchestrator/data/ # Create if missing\\nmkdir -p provisioning/platform/orchestrator/data # Check permissions (must be readable)\\nchmod 755 
provisioning/platform/orchestrator/data\\n```plaintext **\\"Workflow validation failed\\"** ```nushell\\n# Use strict mode for detailed errors\\norch validate workflows/deploy.k --strict # Check KCL syntax manually\\nkcl fmt workflows/deploy.k\\nkcl run workflows/deploy.k\\n```plaintext **\\"No tasks found\\"** ```bash\\n# Check orchestrator running\\nps aux | grep orchestrator # Start orchestrator if not running\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background # Check task files\\nls provisioning/platform/orchestrator/data/tasks/\\n```plaintext --- ## Integration Examples ### Example 1: Complete Authenticated Deployment Full workflow with authentication, secrets, and deployment: ```nushell\\n# Step 1: Login with MFA\\nauth login admin\\nauth mfa verify --code (input \\"MFA code: \\") # Step 2: Verify orchestrator health\\nif (orch status | get health) != \\"healthy\\" { error make { msg: \\"Orchestrator unhealthy\\" }\\n} # Step 3: Validate deployment workflow\\nlet validation = orch validate workflows/production-deploy.k --strict\\nif not $validation.valid { error make { msg: $\\"Validation failed: ($validation.errors)\\" }\\n} # Step 4: Encrypt production secrets\\nlet secrets = open secrets/production.yaml\\nkms encrypt ($secrets | to json) --backend rustyvault --key prod-main | save secrets/production.enc # Step 5: Submit deployment\\nprovisioning cluster create production --check # Step 6: Monitor progress\\nwhile (orch tasks --status running | length) > 0 { orch tasks --status running | select name progress updated_at | table sleep 10sec\\n} echo \\"✓ Deployment complete\\"\\n```plaintext ### Example 2: Batch Secret Rotation Rotate all secrets in multiple environments: ```nushell\\n# Rotate database passwords\\n[\\"dev\\", \\"staging\\", \\"production\\"] | each { |env| # Generate new password let new_password = (openssl rand -base64 32) # Encrypt with environment-specific key let encrypted = kms encrypt $new_password --backend rustyvault --key $\\"($env)-main\\" # Save encrypted password { environment: $env, password_enc: $encrypted, rotated_at: (date now | format date \\"%Y-%m-%d %H:%M:%S\\") } | save $\\"secrets/db-password-($env).json\\" echo $\\"✓ Rotated password for ($env)\\"\\n}\\n```plaintext ### Example 3: Multi-Environment Deployment Deploy to multiple environments with validation: ```nushell\\n# Define environments\\nlet environments = [ { name: \\"dev\\", validate: \\"basic\\" }, { name: \\"staging\\", validate: \\"strict\\" }, { name: \\"production\\", validate: \\"strict\\", mfa_required: true }\\n] # Deploy to each environment\\n$environments | each { |env| echo $\\"Deploying to ($env.name)...\\" # Authenticate if production if $env.mfa_required? 
{ if not (auth verify | get mfa_verified) { auth mfa verify --code (input $\\"MFA code for ($env.name): \\") } } # Validate workflow let validation = if $env.validate == \\"strict\\" { orch validate $\\"workflows/($env.name)-deploy.k\\" --strict } else { orch validate $\\"workflows/($env.name)-deploy.k\\" } if not $validation.valid { echo $\\"✗ Validation failed for ($env.name)\\" continue } # Decrypt secrets let secrets = kms decrypt (open $\\"secrets/($env.name).enc\\") # Deploy provisioning cluster create $env.name echo $\\"✓ Deployed to ($env.name)\\"\\n}\\n```plaintext ### Example 4: Automated Backup and Encryption Backup configuration files with encryption: ```nushell\\n# Backup script\\nlet backup_dir = $\\"backups/(date now | format date \\"%Y%m%d-%H%M%S\\")\\"\\nmkdir $backup_dir # Backup and encrypt configs\\nls configs/**/*.yaml | each { |file| let encrypted = kms encrypt (open $file.name) --backend age let backup_path = $\\"($backup_dir)/($file.name | path basename).enc\\" $encrypted | save $backup_path echo $\\"✓ Backed up ($file.name)\\"\\n} # Create manifest\\n{ backup_date: (date now), files: (ls $\\"($backup_dir)/*.enc\\" | length), backend: \\"age\\"\\n} | save $\\"($backup_dir)/manifest.json\\" echo $\\"✓ Backup complete: ($backup_dir)\\"\\n```plaintext ### Example 5: Health Monitoring Dashboard Real-time health monitoring: ```nushell\\n# Health dashboard\\nwhile true { clear # Header echo \\"=== Provisioning Platform Health Dashboard ===\\" echo $\\"Updated: (date now | format date \\"%Y-%m-%d %H:%M:%S\\")\\" echo \\"\\" # Authentication status let auth_status = try { auth verify } catch { { active: false } } echo $\\"Auth: (if $auth_status.active { \'✓ Active\' } else { \'✗ Inactive\' })\\" # KMS status let kms_health = kms status echo $\\"KMS: (if $kms_health.status == \'healthy\' { \'✓ Healthy\' } else { \'✗ Unhealthy\' })\\" # Orchestrator status let orch_health = orch status echo $\\"Orchestrator: (if $orch_health.health == \'healthy\' { \'✓ Healthy\' } else { \'✗ Unhealthy\' })\\" echo $\\"Active Tasks: ($orch_health.active_tasks)\\" echo $\\"Failed Tasks: ($orch_health.failed_tasks)\\" # Task summary echo \\"\\" echo \\"=== Running Tasks ===\\" orch tasks --status running | select name progress updated_at | table sleep 10sec\\n}\\n```plaintext --- ## Best Practices ### When to Use Plugins vs HTTP **✅ Use Plugins When:** - Performance is critical (high-frequency operations)\\n- Working in pipelines (Nushell data structures)\\n- Need offline capability (KMS, orchestrator local ops)\\n- Building automation scripts\\n- CI/CD pipelines **Use HTTP When:** - Calling from external systems (not Nushell)\\n- Need consistent REST API interface\\n- Cross-language integration\\n- Web UI backend ### Performance Optimization **1. Batch Operations** ```nushell\\n# ❌ Slow: Individual HTTP calls in loop\\nls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file.name) }\\n}\\n# Total: ~5 seconds (50ms × 100) # ✅ Fast: Plugin in pipeline\\nls configs/*.yaml | each { |file| kms encrypt (open $file.name)\\n}\\n# Total: ~0.5 seconds (5ms × 100)\\n```plaintext **2. Parallel Processing** ```nushell\\n# Process multiple operations in parallel\\nls configs/*.yaml | par-each { |file| kms encrypt (open $file.name) | save $\\"encrypted/($file.name).enc\\" }\\n```plaintext **3. 
Caching Session State** ```nushell\\n# Cache auth verification\\nlet $auth_cache = auth verify\\nif $auth_cache.active { # Use cached result instead of repeated calls echo $\\"Authenticated as ($auth_cache.user)\\"\\n}\\n```plaintext ### Error Handling **Graceful Degradation:** ```nushell\\n# Try plugin, fallback to HTTP if unavailable\\ndef kms_encrypt [data: string] { try { kms encrypt $data } catch { http post http://localhost:9998/encrypt { data: $data } | get encrypted }\\n}\\n```plaintext **Comprehensive Error Handling:** ```nushell\\n# Handle all error cases\\ndef safe_deployment [] { # Check authentication let auth_status = try { auth verify } catch { echo \\"✗ Authentication failed, logging in...\\" auth login admin auth verify } # Check KMS health let kms_health = try { kms status } catch { error make { msg: \\"KMS unavailable, cannot proceed\\" } } # Validate workflow let validation = try { orch validate workflow.k --strict } catch { error make { msg: \\"Workflow validation failed\\" } } # Proceed if all checks pass if $auth_status.active and $kms_health.status == \\"healthy\\" and $validation.valid { echo \\"✓ All checks passed, deploying...\\" provisioning cluster create production }\\n}\\n```plaintext ### Security Best Practices **1. Never Log Decrypted Data** ```nushell\\n# ❌ BAD: Logs plaintext password\\nlet password = kms decrypt $encrypted_password\\necho $\\"Password: ($password)\\" # Visible in logs! # ✅ GOOD: Use directly without logging\\nlet password = kms decrypt $encrypted_password\\npsql --dbname mydb --password $password # Not logged\\n```plaintext **2. Use Context (AAD) for Critical Data** ```nushell\\n# Encrypt with context\\nlet context = $\\"user=(whoami),env=production,date=(date now | format date \\"%Y-%m-%d\\")\\"\\nkms encrypt $sensitive_data --context $context # Decrypt requires same context\\nkms decrypt $encrypted --context $context\\n```plaintext **3. Rotate Backup Codes** ```nushell\\n# After using backup code, generate new set\\nauth mfa verify --code ABCD-EFGH-IJKL\\n# Warning: Backup code used\\nauth mfa regenerate-backups\\n# New backup codes generated\\n```plaintext **4. 
Limit Token Lifetime** ```nushell\\n# Check token expiration before long operations\\nlet session = auth verify\\nlet expires_in = (($session.expires_at | into datetime) - (date now))\\nif $expires_in < 5min { echo \\"⚠️ Token expiring soon, re-authenticating...\\" auth login $session.user\\n}\\n```plaintext --- ## Troubleshooting ### Common Issues Across Plugins **\\"Plugin not found\\"** ```bash\\n# Check plugin registration\\nplugin list | where name =~ \\"auth|kms|orch\\" # Re-register if missing\\ncd provisioning/core/plugins/nushell-plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Restart Nushell\\nexit\\nnu\\n```plaintext **\\"Plugin command failed\\"** ```nushell\\n# Enable debug mode\\n$env.RUST_LOG = \\"debug\\" # Run command again to see detailed errors\\nkms encrypt \\"test\\" # Check plugin version compatibility\\nplugin list | where name =~ \\"kms\\" | select name version\\n```plaintext **\\"Permission denied\\"** ```bash\\n# Check plugin executable permissions\\nls -l provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*\\n# Should show: -rwxr-xr-x # Fix if needed\\nchmod +x provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*\\n```plaintext ### Platform-Specific Issues **macOS Issues:** ```bash\\n# \\"cannot be opened because the developer cannot be verified\\"\\nxattr -d com.apple.quarantine target/release/nu_plugin_auth\\nxattr -d com.apple.quarantine target/release/nu_plugin_kms\\nxattr -d com.apple.quarantine target/release/nu_plugin_orchestrator # Keychain access denied\\n# System Preferences → Security & Privacy → Privacy → Full Disk Access\\n# Add: /usr/local/bin/nu\\n```plaintext **Linux Issues:** ```bash\\n# Keyring service not running\\nsystemctl --user status gnome-keyring-daemon\\nsystemctl --user start gnome-keyring-daemon # Missing dependencies\\nsudo apt install libssl-dev pkg-config # Ubuntu/Debian\\nsudo dnf install openssl-devel # Fedora\\n```plaintext **Windows Issues:** ```powershell\\n# Credential Manager access denied\\n# Control Panel → User Accounts → Credential Manager\\n# Ensure Windows Credential Manager service is running # Missing Visual C++ runtime\\n# Download from: https://aka.ms/vs/17/release/vc_redist.x64.exe\\n```plaintext ### Debugging Techniques **Enable Verbose Logging:** ```nushell\\n# Set log level\\n$env.RUST_LOG = \\"debug,nu_plugin_auth=trace\\" # Run command\\nauth login admin # Check logs\\n```plaintext **Test Plugin Directly:** ```bash\\n# Test plugin communication (advanced)\\necho \'{\\"Call\\": [0, {\\"name\\": \\"auth\\", \\"call\\": \\"login\\", \\"args\\": [\\"admin\\", \\"password\\"]}]}\' \\\\ | target/release/nu_plugin_auth\\n```plaintext **Check Plugin Health:** ```nushell\\n# Test each plugin\\nauth --help # Should show auth commands\\nkms --help # Should show kms commands\\norch --help # Should show orch commands # Test functionality\\nauth verify # Should return session status\\nkms status # Should return backend status\\norch status # Should return orchestrator status\\n```plaintext --- ## Migration Guide ### Migrating from HTTP to Plugin-Based **Phase 1: Install Plugins (No Breaking Changes)** ```bash\\n# Build and register plugins\\ncd provisioning/core/plugins/nushell-plugins\\ncargo build --release --all\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Verify HTTP still works\\nhttp get 
http://localhost:9090/health\\n```plaintext **Phase 2: Update Scripts Incrementally** ```nushell\\n# Before (HTTP)\\ndef encrypt_config [file: string] { let data = open $file let result = http post http://localhost:9998/encrypt { data: $data } $result.encrypted | save $\\"($file).enc\\"\\n} # After (Plugin with fallback)\\ndef encrypt_config [file: string] { let data = open $file let encrypted = try { kms encrypt $data --backend rustyvault } catch { # Fallback to HTTP if plugin unavailable (http post http://localhost:9998/encrypt { data: $data }).encrypted } $encrypted | save $\\"($file).enc\\"\\n}\\n```plaintext **Phase 3: Test Migration** ```nushell\\n# Run side-by-side comparison\\ndef test_migration [] { let test_data = \\"test secret data\\" # Plugin approach let start_plugin = date now let plugin_result = kms encrypt $test_data let plugin_time = ((date now) - $start_plugin) # HTTP approach let start_http = date now let http_result = (http post http://localhost:9998/encrypt { data: $test_data }).encrypted let http_time = ((date now) - $start_http) echo $\\"Plugin: ($plugin_time)ms\\" echo $\\"HTTP: ($http_time)ms\\" echo $\\"Speedup: (($http_time / $plugin_time))x\\"\\n}\\n```plaintext **Phase 4: Gradual Rollout** ```nushell\\n# Use feature flag for controlled rollout\\n$env.USE_PLUGINS = true def encrypt_with_flag [data: string] { if $env.USE_PLUGINS { kms encrypt $data } else { (http post http://localhost:9998/encrypt { data: $data }).encrypted }\\n}\\n```plaintext **Phase 5: Full Migration** ```nushell\\n# Replace all HTTP calls with plugin calls\\n# Remove fallback logic once stable\\ndef encrypt_config [file: string] { let data = open $file kms encrypt $data --backend rustyvault | save $\\"($file).enc\\"\\n}\\n```plaintext ### Rollback Strategy ```nushell\\n# If issues arise, quickly rollback\\ndef rollback_to_http [] { # Remove plugin registrations plugin rm nu_plugin_auth plugin rm nu_plugin_kms plugin rm nu_plugin_orchestrator # Restart Nushell exec nu\\n}\\n```plaintext --- ## Advanced Configuration ### Custom Plugin Paths ```nushell\\n# ~/.config/nushell/config.nu\\n$env.PLUGIN_PATH = \\"/opt/provisioning/plugins\\" # Register from custom location\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_auth\\"\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_kms\\"\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_orchestrator\\"\\n```plaintext ### Environment-Specific Configuration ```nushell\\n# ~/.config/nushell/env.nu # Development environment\\nif ($env.ENV? == \\"dev\\") { $env.RUSTYVAULT_ADDR = \\"http://localhost:8200\\" $env.CONTROL_CENTER_URL = \\"http://localhost:3000\\"\\n} # Staging environment\\nif ($env.ENV? == \\"staging\\") { $env.RUSTYVAULT_ADDR = \\"https://vault-staging.example.com\\" $env.CONTROL_CENTER_URL = \\"https://control-staging.example.com\\"\\n} # Production environment\\nif ($env.ENV? 
== \\"prod\\") { $env.RUSTYVAULT_ADDR = \\"https://vault.example.com\\" $env.CONTROL_CENTER_URL = \\"https://control.example.com\\"\\n}\\n```plaintext ### Plugin Aliases ```nushell\\n# ~/.config/nushell/config.nu # Auth shortcuts\\nalias login = auth login\\nalias logout = auth logout\\nalias whoami = auth verify | get user # KMS shortcuts\\nalias encrypt = kms encrypt\\nalias decrypt = kms decrypt # Orchestrator shortcuts\\nalias status = orch status\\nalias tasks = orch tasks\\nalias validate = orch validate\\n```plaintext ### Custom Commands ```nushell\\n# ~/.config/nushell/custom_commands.nu # Encrypt all files in directory\\ndef encrypt-dir [dir: string] { ls $\\"($dir)/**/*\\" | where type == file | each { |file| kms encrypt (open $file.name) | save $\\"($file.name).enc\\" echo $\\"✓ Encrypted ($file.name)\\" }\\n} # Decrypt all files in directory\\ndef decrypt-dir [dir: string] { ls $\\"($dir)/**/*.enc\\" | each { |file| kms decrypt (open $file.name) | save (echo $file.name | str replace \'.enc\' \'\') echo $\\"✓ Decrypted ($file.name)\\" }\\n} # Monitor deployments\\ndef watch-deployments [] { while true { clear echo \\"=== Active Deployments ===\\" orch tasks --status running | table sleep 5sec }\\n}\\n```plaintext --- ## Security Considerations ### Threat Model **What Plugins Protect Against:** - ✅ Network eavesdropping (no HTTP for KMS/orch)\\n- ✅ Token theft from files (keyring storage)\\n- ✅ Credential exposure in logs (prompt-based input)\\n- ✅ Man-in-the-middle attacks (local file access) **What Plugins Don\'t Protect Against:** - ❌ Memory dumping (decrypted data in RAM)\\n- ❌ Malicious plugins (trust registry only)\\n- ❌ Compromised OS keyring\\n- ❌ Physical access to machine ### Secure Deployment **1. Verify Plugin Integrity** ```bash\\n# Check plugin signatures (if available)\\nsha256sum target/release/nu_plugin_auth\\n# Compare with published checksums # Build from trusted source\\ngit clone https://github.com/provisioning-platform/plugins\\ncd plugins\\ncargo build --release --all\\n```plaintext **2. Restrict Plugin Access** ```bash\\n# Set plugin permissions (only owner can execute)\\nchmod 700 target/release/nu_plugin_* # Store in protected directory\\nsudo mkdir -p /opt/provisioning/plugins\\nsudo chown $(whoami):$(whoami) /opt/provisioning/plugins\\nsudo chmod 755 /opt/provisioning/plugins\\nmv target/release/nu_plugin_* /opt/provisioning/plugins/\\n```plaintext **3. Audit Plugin Usage** ```nushell\\n# Log plugin calls (for compliance)\\ndef logged_encrypt [data: string] { let timestamp = date now let result = kms encrypt $data { timestamp: $timestamp, action: \\"encrypt\\" } | save --append audit.log $result\\n}\\n```plaintext **4. Rotate Credentials Regularly** ```nushell\\n# Weekly credential rotation script\\ndef rotate_credentials [] { # Re-authenticate auth logout auth login admin # Rotate KMS keys (if supported) kms rotate-key --key provisioning-main # Update encrypted secrets ls secrets/*.enc | each { |file| let plain = kms decrypt (open $file.name) kms encrypt $plain | save $file.name }\\n}\\n```plaintext --- ## FAQ **Q: Can I use plugins without RustyVault/Age installed?** A: Yes, authentication and orchestrator plugins work independently. KMS plugin requires at least one backend configured (Age is easiest for local dev). **Q: Do plugins work in CI/CD pipelines?** A: Yes, plugins work great in CI/CD. For headless environments (no keyring), use environment variables for auth or file-based tokens. 
```bash\\n# CI/CD example\\nexport CONTROL_CENTER_TOKEN=\\"jwt-token-here\\"\\nkms encrypt \\"data\\" --backend age\\n```plaintext **Q: How do I update plugins?** A: Rebuild and re-register: ```bash\\ncd provisioning/core/plugins/nushell-plugins\\ngit pull\\ncargo build --release --all\\nplugin add --force target/release/nu_plugin_auth\\nplugin add --force target/release/nu_plugin_kms\\nplugin add --force target/release/nu_plugin_orchestrator\\n```plaintext **Q: Can I use multiple KMS backends simultaneously?** A: Yes, specify `--backend` for each operation: ```nushell\\nkms encrypt \\"data1\\" --backend rustyvault\\nkms encrypt \\"data2\\" --backend age\\nkms encrypt \\"data3\\" --backend aws\\n```plaintext **Q: What happens if a plugin crashes?** A: Nushell isolates plugin crashes. The command fails with an error, but Nushell continues running. Check logs with `$env.RUST_LOG = \\"debug\\"`. **Q: Are plugins compatible with older Nushell versions?** A: Plugins require Nushell 0.107.1+. For older versions, use HTTP API. **Q: How do I backup MFA enrollment?** A: Save backup codes securely (password manager, encrypted file). QR code can be re-scanned from the same secret. ```nushell\\n# Save backup codes\\nauth mfa enroll totp | save mfa-backup-codes.txt\\nkms encrypt (open mfa-backup-codes.txt) | save mfa-backup-codes.enc\\nrm mfa-backup-codes.txt\\n```plaintext **Q: Can plugins work offline?** A: Partially: - ✅ `kms` with Age backend (fully offline)\\n- ✅ `orch` status/tasks (reads local files)\\n- ❌ `auth` (requires control center)\\n- ❌ `kms` with RustyVault/AWS/Vault (requires network) **Q: How do I troubleshoot plugin performance?** A: Use Nushell\'s timing: ```nushell\\ntimeit { kms encrypt \\"data\\" }\\n# 5ms 123μs 456ns timeit { http post http://localhost:9998/encrypt { data: \\"data\\" } }\\n# 52ms 789μs 123ns\\n```plaintext --- ## Related Documentation - **Security System**: `/Users/Akasha/project-provisioning/docs/architecture/ADR-009-security-system-complete.md`\\n- **JWT Authentication**: `/Users/Akasha/project-provisioning/docs/architecture/JWT_AUTH_IMPLEMENTATION.md`\\n- **Config Encryption**: `/Users/Akasha/project-provisioning/docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **RustyVault Integration**: `/Users/Akasha/project-provisioning/RUSTYVAULT_INTEGRATION_SUMMARY.md`\\n- **MFA Implementation**: `/Users/Akasha/project-provisioning/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Nushell Plugins Reference**: `/Users/Akasha/project-provisioning/docs/user/NUSHELL_PLUGINS_GUIDE.md` --- **Version**: 1.0.0\\n**Maintained By**: Platform Team\\n**Last Updated**: 2025-10-09\\n**Feedback**: Open an issue or contact ","breadcrumbs":"Plugin Integration Guide » Architecture Benefits","id":"1617","title":"Architecture Benefits"},"1618":{"body":"Complete guide to authentication, KMS, and orchestrator plugins.","breadcrumbs":"NuShell Plugins Guide » Nushell Plugins for Provisioning Platform","id":"1618","title":"Nushell Plugins for Provisioning Platform"},"1619":{"body":"Three native Nushell plugins provide high-performance integration with the provisioning platform: nu_plugin_auth - JWT authentication and MFA operations nu_plugin_kms - Key management (RustyVault, Age, Cosmian, AWS, Vault) nu_plugin_orchestrator - Orchestrator operations (status, validate, tasks)","breadcrumbs":"NuShell Plugins Guide » Overview","id":"1619","title":"Overview"},"162":{"body":"Edit the generated configuration: # Edit with your preferred editor\\n$EDITOR workspace/infra/my-infra/settings.k Example 
configuration: import provisioning.settings as cfg # Infrastructure settings\\ninfra_settings = cfg.InfraSettings { name = \\"my-infra\\" provider = \\"local\\" # Start with local provider environment = \\"development\\"\\n} # Server configuration\\nservers = [ { hostname = \\"dev-server-01\\" cores = 2 memory = 4096 # MB disk = 50 # GB }\\n]","breadcrumbs":"First Deployment » Step 2: Edit Configuration","id":"162","title":"Step 2: Edit Configuration"},"1620":{"body":"Performance Advantages : 10x faster than HTTP API calls (KMS operations) Direct access to Rust libraries (no HTTP overhead) Native integration with Nushell pipelines Type safety with Nushell\'s type system Developer Experience : Pipeline friendly - Use Nushell pipes naturally Tab completion - All commands and flags Consistent interface - Follows Nushell conventions Error handling - Nushell-native error messages","breadcrumbs":"NuShell Plugins Guide » Why Native Plugins?","id":"1620","title":"Why Native Plugins?"},"1621":{"body":"","breadcrumbs":"NuShell Plugins Guide » Installation","id":"1621","title":"Installation"},"1622":{"body":"Nushell 0.107.1+ Rust toolchain (for building from source) Access to provisioning platform services","breadcrumbs":"NuShell Plugins Guide » Prerequisites","id":"1622","title":"Prerequisites"},"1623":{"body":"cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins # Build all plugins\\ncargo build --release -p nu_plugin_auth\\ncargo build --release -p nu_plugin_kms\\ncargo build --release -p nu_plugin_orchestrator # Or build individually\\ncargo build --release -p nu_plugin_auth\\ncargo build --release -p nu_plugin_kms\\ncargo build --release -p nu_plugin_orchestrator\\n```plaintext ### Register with Nushell ```bash\\n# Register all plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Verify registration\\nplugin list | where name =~ \\"provisioning\\"\\n```plaintext ### Verify Installation ```bash\\n# Test auth commands\\nauth --help # Test KMS commands\\nkms --help # Test orchestrator commands\\norch --help\\n```plaintext --- ## Plugin: nu_plugin_auth Authentication plugin for JWT login, MFA enrollment, and session management. ### Commands #### `auth login [password]` Login to provisioning platform and store JWT tokens securely. **Arguments**: - `username` (required): Username for authentication\\n- `password` (optional): Password (prompts interactively if not provided) **Flags**: - `--url `: Control center URL (default: `http://localhost:9080`)\\n- `--password `: Password (alternative to positional argument) **Examples**: ```nushell\\n# Interactive password prompt (recommended)\\nauth login admin # Password in command (not recommended for production)\\nauth login admin mypassword # Custom URL\\nauth login admin --url http://control-center:9080 # Pipeline usage\\n\\"admin\\" | auth login\\n```plaintext **Token Storage**:\\nTokens are stored securely in OS-native keyring: - **macOS**: Keychain Access\\n- **Linux**: Secret Service (gnome-keyring, kwallet)\\n- **Windows**: Credential Manager **Success Output**: ```plaintext\\n✓ Login successful\\nUser: admin\\nRole: Admin\\nExpires: 2025-10-09T14:30:00Z\\n```plaintext --- #### `auth logout` Logout from current session and remove stored tokens. 
**Examples**: ```nushell\\n# Simple logout\\nauth logout # Pipeline usage (conditional logout)\\nif (auth verify | get active) { auth logout }\\n```plaintext **Success Output**: ```plaintext\\n✓ Logged out successfully\\n```plaintext --- #### `auth verify` Verify current session and check token validity. **Examples**: ```nushell\\n# Check session status\\nauth verify # Pipeline usage\\nauth verify | if $in.active { echo \\"Session valid\\" } else { echo \\"Session expired\\" }\\n```plaintext **Success Output**: ```json\\n{ \\"active\\": true, \\"user\\": \\"admin\\", \\"role\\": \\"Admin\\", \\"expires_at\\": \\"2025-10-09T14:30:00Z\\", \\"mfa_verified\\": true\\n}\\n```plaintext --- #### `auth sessions` List all active sessions for current user. **Examples**: ```nushell\\n# List sessions\\nauth sessions # Filter by date\\nauth sessions | where created_at > (date now | date to-timezone UTC | into string)\\n```plaintext **Output Format**: ```json\\n[ { \\"session_id\\": \\"sess_abc123\\", \\"created_at\\": \\"2025-10-09T12:00:00Z\\", \\"expires_at\\": \\"2025-10-09T14:30:00Z\\", \\"ip_address\\": \\"192.168.1.100\\", \\"user_agent\\": \\"nushell/0.107.1\\" }\\n]\\n```plaintext --- #### `auth mfa enroll ` Enroll in MFA (TOTP or WebAuthn). **Arguments**: - `type` (required): MFA type (`totp` or `webauthn`) **Examples**: ```nushell\\n# Enroll TOTP (Google Authenticator, Authy)\\nauth mfa enroll totp # Enroll WebAuthn (YubiKey, Touch ID, Windows Hello)\\nauth mfa enroll webauthn\\n```plaintext **TOTP Enrollment Output**: ```plaintext\\n✓ TOTP enrollment initiated Scan this QR code with your authenticator app: ████ ▄▄▄▄▄ █▀█ █▄▀▀▀▄ ▄▄▄▄▄ ████ ████ █ █ █▀▀▀█▄ ▀▀█ █ █ ████ ████ █▄▄▄█ █ █▀▄ ▀▄▄█ █▄▄▄█ ████ ... Or enter manually:\\nSecret: JBSWY3DPEHPK3PXP\\nURL: otpauth://totp/Provisioning:admin?secret=JBSWY3DPEHPK3PXP&issuer=Provisioning Backup codes (save securely):\\n1. ABCD-EFGH-IJKL\\n2. MNOP-QRST-UVWX\\n...\\n```plaintext --- #### `auth mfa verify --code ` Verify MFA code (TOTP or backup code). **Flags**: - `--code ` (required): 6-digit TOTP code or backup code **Examples**: ```nushell\\n# Verify TOTP code\\nauth mfa verify --code 123456 # Verify backup code\\nauth mfa verify --code ABCD-EFGH-IJKL\\n```plaintext **Success Output**: ```plaintext\\n✓ MFA verification successful\\n```plaintext --- ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `USER` | Default username | Current OS user |\\n| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:9080` | --- ### Error Handling **Common Errors**: ```nushell\\n# \\"No active session\\"\\nError: No active session found\\n→ Run: auth login # \\"Invalid credentials\\"\\nError: Authentication failed: Invalid username or password\\n→ Check username and password # \\"Token expired\\"\\nError: Token has expired\\n→ Run: auth login # \\"MFA required\\"\\nError: MFA verification required\\n→ Run: auth mfa verify --code # \\"Keyring error\\" (macOS)\\nError: Failed to access keyring\\n→ Check Keychain Access permissions # \\"Keyring error\\" (Linux)\\nError: Failed to access keyring\\n→ Install gnome-keyring or kwallet\\n```plaintext --- ## Plugin: nu_plugin_kms Key Management Service plugin supporting multiple backends. 
### Supported Backends | Backend | Description | Use Case |\\n|---------|-------------|----------|\\n| `rustyvault` | RustyVault Transit engine | Production KMS |\\n| `age` | Age encryption (local) | Development/testing |\\n| `cosmian` | Cosmian KMS (HTTP) | Cloud KMS |\\n| `aws` | AWS KMS | AWS environments |\\n| `vault` | HashiCorp Vault | Enterprise KMS | ### Commands #### `kms encrypt [--backend ]` Encrypt data using KMS. **Arguments**: - `data` (required): Data to encrypt (string or binary) **Flags**: - `--backend `: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)\\n- `--key `: Key ID or recipient (backend-specific)\\n- `--context `: Additional authenticated data (AAD) **Examples**: ```nushell\\n# Auto-detect backend from environment\\nkms encrypt \\"secret data\\" # RustyVault\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main # Age (local encryption)\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxxxx # AWS KMS\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning # With context (AAD)\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main --context \\"user=admin\\"\\n```plaintext **Output Format**: ```plaintext\\nvault:v1:abc123def456...\\n```plaintext --- #### `kms decrypt [--backend ]` Decrypt KMS-encrypted data. **Arguments**: - `encrypted` (required): Encrypted data (base64 or KMS format) **Flags**: - `--backend `: KMS backend (auto-detected if not specified)\\n- `--context `: Additional authenticated data (AAD, must match encryption) **Examples**: ```nushell\\n# Auto-detect backend\\nkms decrypt \\"vault:v1:abc123def456...\\" # RustyVault explicit\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault # Age\\nkms decrypt \\"-----BEGIN AGE ENCRYPTED FILE-----...\\" --backend age # With context\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault --context \\"user=admin\\"\\n```plaintext **Output**: ```plaintext\\nsecret data\\n```plaintext --- #### `kms generate-key [--spec ]` Generate data encryption key (DEK) using KMS. **Flags**: - `--spec `: Key specification (`AES128` or `AES256`, default: `AES256`)\\n- `--backend `: KMS backend **Examples**: ```nushell\\n# Generate AES-256 key\\nkms generate-key # Generate AES-128 key\\nkms generate-key --spec AES128 # Specific backend\\nkms generate-key --backend rustyvault\\n```plaintext **Output Format**: ```json\\n{ \\"plaintext\\": \\"base64-encoded-key\\", \\"ciphertext\\": \\"vault:v1:encrypted-key\\", \\"spec\\": \\"AES256\\"\\n}\\n```plaintext --- #### `kms status` Show KMS backend status and configuration. 
**Examples**: ```nushell\\n# Show status\\nkms status # Filter to specific backend\\nkms status | where backend == \\"rustyvault\\"\\n```plaintext **Output Format**: ```json\\n{ \\"backend\\": \\"rustyvault\\", \\"status\\": \\"healthy\\", \\"url\\": \\"http://localhost:8200\\", \\"mount_point\\": \\"transit\\", \\"version\\": \\"0.1.0\\"\\n}\\n```plaintext --- ### Environment Variables **RustyVault Backend**: ```bash\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"your-token-here\\"\\nexport RUSTYVAULT_MOUNT=\\"transit\\"\\n```plaintext **Age Backend**: ```bash\\nexport AGE_RECIPIENT=\\"age1xxxxxxxxx\\"\\nexport AGE_IDENTITY=\\"/path/to/key.txt\\"\\n```plaintext **HTTP Backend (Cosmian)**: ```bash\\nexport KMS_HTTP_URL=\\"http://localhost:9998\\"\\nexport KMS_HTTP_BACKEND=\\"cosmian\\"\\n```plaintext **AWS KMS**: ```bash\\nexport AWS_REGION=\\"us-east-1\\"\\nexport AWS_ACCESS_KEY_ID=\\"...\\"\\nexport AWS_SECRET_ACCESS_KEY=\\"...\\"\\n```plaintext --- ### Performance Comparison | Operation | HTTP API | Plugin | Improvement |\\n|-----------|----------|--------|-------------|\\n| Encrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |\\n| Decrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |\\n| Encrypt (Age) | ~30ms | ~3ms | **10x faster** |\\n| Decrypt (Age) | ~30ms | ~3ms | **10x faster** |\\n| Generate Key | ~60ms | ~8ms | **7.5x faster** | --- ## Plugin: nu_plugin_orchestrator Orchestrator operations plugin for status, validation, and task management. ### Commands #### `orch status [--data-dir ]` Get orchestrator status from local files (no HTTP). **Flags**: - `--data-dir `: Data directory (default: `provisioning/platform/orchestrator/data`) **Examples**: ```nushell\\n# Default data dir\\norch status # Custom dir\\norch status --data-dir ./custom/data # Pipeline usage\\norch status | if $in.active_tasks > 0 { echo \\"Tasks running\\" }\\n```plaintext **Output Format**: ```json\\n{ \\"active_tasks\\": 5, \\"completed_tasks\\": 120, \\"failed_tasks\\": 2, \\"pending_tasks\\": 3, \\"uptime\\": \\"2d 4h 15m\\", \\"health\\": \\"healthy\\"\\n}\\n```plaintext --- #### `orch validate [--strict]` Validate workflow KCL file. **Arguments**: - `workflow.k` (required): Path to KCL workflow file **Flags**: - `--strict`: Enable strict validation (all checks, warnings as errors) **Examples**: ```nushell\\n# Basic validation\\norch validate workflows/deploy.k # Strict mode\\norch validate workflows/deploy.k --strict # Pipeline usage\\nls workflows/*.k | each { |file| orch validate $file.name }\\n```plaintext **Output Format**: ```json\\n{ \\"valid\\": true, \\"workflow\\": { \\"name\\": \\"deploy_k8s_cluster\\", \\"version\\": \\"1.0.0\\", \\"operations\\": 5 }, \\"warnings\\": [], \\"errors\\": []\\n}\\n```plaintext **Validation Checks**: - KCL syntax errors\\n- Required fields present\\n- Dependency graph valid (no cycles)\\n- Resource limits within bounds\\n- Provider configurations valid --- #### `orch tasks [--status ] [--limit ]` List orchestrator tasks. 
**Flags**: - `--status `: Filter by status (`pending`, `running`, `completed`, `failed`)\\n- `--limit `: Limit number of results (default: 100)\\n- `--data-dir `: Data directory (default from `ORCHESTRATOR_DATA_DIR`) **Examples**: ```nushell\\n# All tasks\\norch tasks # Pending tasks only\\norch tasks --status pending # Running tasks (limit to 10)\\norch tasks --status running --limit 10 # Pipeline usage\\norch tasks --status failed | each { |task| echo $\\"Failed: ($task.name)\\" }\\n```plaintext **Output Format**: ```json\\n[ { \\"task_id\\": \\"task_abc123\\", \\"name\\": \\"deploy_kubernetes\\", \\"status\\": \\"running\\", \\"priority\\": 5, \\"created_at\\": \\"2025-10-09T12:00:00Z\\", \\"updated_at\\": \\"2025-10-09T12:05:00Z\\", \\"progress\\": 45 }\\n]\\n```plaintext --- ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` | --- ### Performance Comparison | Operation | HTTP API | Plugin | Improvement |\\n|-----------|----------|--------|-------------|\\n| Status | ~30ms | ~3ms | **10x faster** |\\n| Validate | ~100ms | ~10ms | **10x faster** |\\n| Tasks List | ~50ms | ~5ms | **10x faster** | --- ## Pipeline Examples ### Authentication Flow ```nushell\\n# Login and verify in one pipeline\\nauth login admin | if $in.success { auth verify } | if $in.mfa_required { auth mfa verify --code (input \\"MFA code: \\") }\\n```plaintext ### KMS Operations ```nushell\\n# Encrypt multiple secrets\\n[\\"secret1\\", \\"secret2\\", \\"secret3\\"] | each { |data| kms encrypt $data --backend rustyvault } | save encrypted_secrets.json # Decrypt and process\\nopen encrypted_secrets.json | each { |enc| kms decrypt $enc } | each { |plain| echo $\\"Decrypted: ($plain)\\" }\\n```plaintext ### Orchestrator Monitoring ```nushell\\n# Monitor running tasks\\nwhile true { orch tasks --status running | each { |task| echo $\\"($task.name): ($task.progress)%\\" } sleep 5sec\\n}\\n```plaintext ### Combined Workflow ```nushell\\n# Complete deployment workflow\\nauth login admin | auth mfa verify --code (input \\"MFA: \\") | orch validate workflows/deploy.k | if $in.valid { orch tasks --status pending | where priority > 5 | each { |task| echo $\\"High priority: ($task.name)\\" } }\\n```plaintext --- ## Troubleshooting ### Auth Plugin **\\"No active session\\"**: ```nushell\\nauth login \\n```plaintext **\\"Keyring error\\" (macOS)**: - Check Keychain Access permissions\\n- Security & Privacy → Privacy → Full Disk Access → Add Nushell **\\"Keyring error\\" (Linux)**: ```bash\\n# Install keyring service\\nsudo apt install gnome-keyring # Ubuntu/Debian\\nsudo dnf install gnome-keyring # Fedora # Or use KWallet\\nsudo apt install kwalletmanager\\n```plaintext **\\"MFA verification failed\\"**: - Check time synchronization (TOTP requires accurate clocks)\\n- Use backup codes if TOTP not working\\n- Re-enroll MFA if device lost --- ### KMS Plugin **\\"RustyVault connection failed\\"**: ```bash\\n# Check RustyVault running\\ncurl http://localhost:8200/v1/sys/health # Set environment\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"your-token\\"\\n```plaintext **\\"Age encryption failed\\"**: ```bash\\n# Check Age keys\\nls -la ~/.age/ # Generate new key if needed\\nage-keygen -o ~/.age/key.txt # Set environment\\nexport AGE_RECIPIENT=\\"age1xxxxxxxxx\\"\\nexport AGE_IDENTITY=\\"$HOME/.age/key.txt\\"\\n```plaintext **\\"AWS KMS access denied\\"**: 
```bash\\n# Check AWS credentials\\naws sts get-caller-identity # Check KMS key policy\\naws kms describe-key --key-id alias/provisioning\\n```plaintext --- ### Orchestrator Plugin **\\"Failed to read status\\"**: ```bash\\n# Check data directory exists\\nls provisioning/platform/orchestrator/data/ # Create if missing\\nmkdir -p provisioning/platform/orchestrator/data\\n```plaintext **\\"Workflow validation failed\\"**: ```nushell\\n# Use strict mode for detailed errors\\norch validate workflows/deploy.k --strict\\n```plaintext **\\"No tasks found\\"**: ```bash\\n# Check orchestrator running\\nps aux | grep orchestrator # Start orchestrator\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background\\n```plaintext --- ## Development ### Building from Source ```bash\\ncd provisioning/core/plugins/nushell-plugins # Clean build\\ncargo clean # Build with debug info\\ncargo build -p nu_plugin_auth\\ncargo build -p nu_plugin_kms\\ncargo build -p nu_plugin_orchestrator # Run tests\\ncargo test -p nu_plugin_auth\\ncargo test -p nu_plugin_kms\\ncargo test -p nu_plugin_orchestrator # Run all tests\\ncargo test --all\\n```plaintext ### Adding to CI/CD ```yaml\\nname: Build Nushell Plugins on: [push, pull_request] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Install Rust uses: actions-rs/toolchain@v1 with: toolchain: stable - name: Build Plugins run: | cd provisioning/core/plugins/nushell-plugins cargo build --release --all - name: Test Plugins run: | cd provisioning/core/plugins/nushell-plugins cargo test --all - name: Upload Artifacts uses: actions/upload-artifact@v3 with: name: plugins path: provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*\\n```plaintext --- ## Advanced Usage ### Custom Plugin Configuration Create `~/.config/nushell/plugin_config.nu`: ```nushell\\n# Auth plugin defaults\\n$env.CONTROL_CENTER_URL = \\"https://control-center.example.com\\" # KMS plugin defaults\\n$env.RUSTYVAULT_ADDR = \\"https://vault.example.com:8200\\"\\n$env.RUSTYVAULT_MOUNT = \\"transit\\" # Orchestrator plugin defaults\\n$env.ORCHESTRATOR_DATA_DIR = \\"/opt/orchestrator/data\\"\\n```plaintext ### Plugin Aliases Add to `~/.config/nushell/config.nu`: ```nushell\\n# Auth shortcuts\\nalias login = auth login\\nalias logout = auth logout # KMS shortcuts\\nalias encrypt = kms encrypt\\nalias decrypt = kms decrypt # Orchestrator shortcuts\\nalias status = orch status\\nalias validate = orch validate\\nalias tasks = orch tasks\\n```plaintext --- ## Security Best Practices ### Authentication ✅ **DO**: Use interactive password prompts\\n✅ **DO**: Enable MFA for production environments\\n✅ **DO**: Verify session before sensitive operations\\n❌ **DON\'T**: Pass passwords in command line (visible in history)\\n❌ **DON\'T**: Store tokens in plain text files ### KMS Operations ✅ **DO**: Use context (AAD) for encryption when available\\n✅ **DO**: Rotate KMS keys regularly\\n✅ **DO**: Use hardware-backed keys (WebAuthn, YubiKey) when possible\\n❌ **DON\'T**: Share Age private keys\\n❌ **DON\'T**: Log decrypted data ### Orchestrator ✅ **DO**: Validate workflows in strict mode before production\\n✅ **DO**: Monitor task status regularly\\n✅ **DO**: Use appropriate data directory permissions (700)\\n❌ **DON\'T**: Run orchestrator as root\\n❌ **DON\'T**: Expose data directory over network shares --- ## FAQ **Q: Why use plugins instead of HTTP API?**\\nA: Plugins are 10x faster, have better Nushell integration, and eliminate HTTP overhead. 
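For a quick local sanity check of those latency figures, Nushell's built-in `timeit` can time individual plugin calls. A minimal sketch, assuming the plugins are registered and an Age backend is configured (the payload string is only illustrative):

```nushell
# Time single plugin calls (rough durations, not rigorous benchmarks)
timeit { orch status }
timeit { kms encrypt "timing-test" --backend age }

# Average several runs for a steadier estimate
1..10 | each { timeit { orch status } } | math avg
```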
**Q: Can I use plugins without orchestrator running?**\\nA: `auth` and `kms` work independently. `orch` requires access to orchestrator data directory. **Q: How do I update plugins?**\\nA: Rebuild and re-register: `cargo build --release --all && plugin add target/release/nu_plugin_*` **Q: Are plugins cross-platform?**\\nA: Yes, plugins work on macOS, Linux, and Windows (with appropriate keyring services). **Q: Can I use multiple KMS backends simultaneously?**\\nA: Yes, specify `--backend` flag for each operation. **Q: How do I backup MFA enrollment?**\\nA: Save backup codes securely (password manager, encrypted file). QR code can be re-scanned. --- ## Related Documentation - **Security System**: `docs/architecture/ADR-009-security-system-complete.md`\\n- **JWT Auth**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`\\n- **Config Encryption**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **RustyVault Integration**: `RUSTYVAULT_INTEGRATION_SUMMARY.md`\\n- **MFA Implementation**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md` --- **Version**: 1.0.0\\n**Last Updated**: 2025-10-09\\n**Maintained By**: Platform Team","breadcrumbs":"NuShell Plugins Guide » Build from Source","id":"1623","title":"Build from Source"},"1624":{"body":"For complete documentation on Nushell plugins including installation, configuration, and advanced usage, see: Complete Guide : Plugin Integration Guide (1500+ lines) Quick Reference : Nushell Plugins Guide","breadcrumbs":"NuShell Plugins System » Nushell Plugins Integration (v1.0.0) - See detailed guide for complete reference","id":"1624","title":"Nushell Plugins Integration (v1.0.0) - See detailed guide for complete reference"},"1625":{"body":"Native Nushell plugins eliminate HTTP overhead and provide direct Rust-to-Nushell integration for critical platform operations.","breadcrumbs":"NuShell Plugins System » Overview","id":"1625","title":"Overview"},"1626":{"body":"Plugin Operation HTTP Latency Plugin Latency Speedup nu_plugin_kms Encrypt (RustyVault) ~50ms ~5ms 10x nu_plugin_kms Decrypt (RustyVault) ~50ms ~5ms 10x nu_plugin_orchestrator Status query ~30ms ~1ms 30x nu_plugin_auth Verify session ~50ms ~10ms 5x","breadcrumbs":"NuShell Plugins System » Performance Improvements","id":"1626","title":"Performance Improvements"},"1627":{"body":"Authentication Plugin (nu_plugin_auth) JWT login/logout with password prompts MFA enrollment (TOTP, WebAuthn) Session management OS-native keyring integration KMS Plugin (nu_plugin_kms) Multiple backend support (RustyVault, Age, Cosmian, AWS KMS, Vault) 10x faster encryption/decryption Context-based encryption (AAD support) Orchestrator Plugin (nu_plugin_orchestrator) Direct file-based operations (no HTTP) 30-50x faster status queries KCL workflow validation","breadcrumbs":"NuShell Plugins System » Three Native Plugins","id":"1627","title":"Three Native Plugins"},"1628":{"body":"# Authentication\\nauth login admin\\nauth verify\\nauth mfa enroll totp # KMS Operations\\nkms encrypt \\"data\\"\\nkms decrypt \\"vault:v1:abc123...\\" # Orchestrator\\norch status\\norch validate workflows/deploy.k\\norch tasks --status running","breadcrumbs":"NuShell Plugins System » Quick Commands","id":"1628","title":"Quick Commands"},"1629":{"body":"cd provisioning/core/plugins/nushell-plugins\\ncargo build --release --all # Register with Nushell\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator","breadcrumbs":"NuShell Plugins System » 
Installation","id":"1629","title":"Installation"},"163":{"body":"First, run in check mode to see what would happen: # Check mode - no actual changes\\nprovisioning server create --infra my-infra --check # Expected output:\\n# ✓ Validation passed\\n# ⚠ Check mode: No changes will be made\\n# # Would create:\\n# - Server: dev-server-01 (2 cores, 4GB RAM, 50GB disk)","breadcrumbs":"First Deployment » Step 3: Create Server (Check Mode)","id":"163","title":"Step 3: Create Server (Check Mode)"},"1630":{"body":"✅ 10x faster KMS operations (5ms vs 50ms) ✅ 30-50x faster orchestrator queries (1ms vs 30-50ms) ✅ Native Nushell integration with data structures and pipelines ✅ Offline capability (KMS with Age, orchestrator local ops) ✅ OS-native keyring for secure token storage See Plugin Integration Guide for complete information.","breadcrumbs":"NuShell Plugins System » Benefits","id":"1630","title":"Benefits"},"1631":{"body":"","breadcrumbs":"Plugin Usage Guide » Provisioning Plugins Usage Guide","id":"1631","title":"Provisioning Plugins Usage Guide"},"1632":{"body":"Three high-performance Nushell plugins have been integrated into the provisioning system to provide 10-50x performance improvements over HTTP-based operations: nu_plugin_auth - JWT authentication with system keyring integration nu_plugin_kms - Multi-backend KMS encryption nu_plugin_orchestrator - Local orchestrator operations","breadcrumbs":"Plugin Usage Guide » Overview","id":"1632","title":"Overview"},"1633":{"body":"","breadcrumbs":"Plugin Usage Guide » Installation","id":"1633","title":"Installation"},"1634":{"body":"Nushell 0.107.1 or later All plugins are pre-compiled in provisioning/core/plugins/nushell-plugins/","breadcrumbs":"Plugin Usage Guide » Prerequisites","id":"1634","title":"Prerequisites"},"1635":{"body":"Run the installation script in a new Nushell session: nu provisioning/core/plugins/install-and-register.nu This will: Copy plugins to ~/.local/share/nushell/plugins/ Register plugins with Nushell Verify installation","breadcrumbs":"Plugin Usage Guide » Quick Install","id":"1635","title":"Quick Install"},"1636":{"body":"If the script doesn\'t work, run these commands: # Copy plugins\\ncp provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/nu_plugin_auth ~/.local/share/nushell/plugins/\\ncp provisioning/core/plugins/nushell-plugins/nu_plugin_kms/target/release/nu_plugin_kms ~/.local/share/nushell/plugins/\\ncp provisioning/core/plugins/nushell-plugins/nu_plugin_orchestrator/target/release/nu_plugin_orchestrator ~/.local/share/nushell/plugins/ chmod +x ~/.local/share/nushell/plugins/nu_plugin_* # Register with Nushell (run in a fresh session)\\nplugin add ~/.local/share/nushell/plugins/nu_plugin_auth\\nplugin add ~/.local/share/nushell/plugins/nu_plugin_kms\\nplugin add ~/.local/share/nushell/plugins/nu_plugin_orchestrator","breadcrumbs":"Plugin Usage Guide » Manual Installation","id":"1636","title":"Manual Installation"},"1637":{"body":"","breadcrumbs":"Plugin Usage Guide » Usage","id":"1637","title":"Usage"},"1638":{"body":"10x faster than HTTP fallback Login provisioning auth login [password] # Examples\\nprovisioning auth login admin\\nprovisioning auth login admin mypassword\\nprovisioning auth login --url http://localhost:8081 admin Verify Token provisioning auth verify [--local] # Examples\\nprovisioning auth verify\\nprovisioning auth verify --local Logout provisioning auth logout # Example\\nprovisioning auth logout List Sessions provisioning auth sessions [--active] # Examples\\nprovisioning 
auth sessions\\nprovisioning auth sessions --active","breadcrumbs":"Plugin Usage Guide » Authentication Plugin","id":"1638","title":"Authentication Plugin"},"1639":{"body":"10x faster than HTTP fallback Supports multiple backends: RustyVault, Age, AWS KMS, HashiCorp Vault, Cosmian Encrypt Data provisioning kms encrypt [--backend ] [--key ] # Examples\\nprovisioning kms encrypt \\"secret-data\\"\\nprovisioning kms encrypt \\"secret\\" --backend age\\nprovisioning kms encrypt \\"secret\\" --backend rustyvault --key my-key Decrypt Data provisioning kms decrypt [--backend ] [--key ] # Examples\\nprovisioning kms decrypt $encrypted_data\\nprovisioning kms decrypt $encrypted --backend age KMS Status provisioning kms status # Output shows current backend and availability List Backends provisioning kms list-backends # Shows all available KMS backends","breadcrumbs":"Plugin Usage Guide » KMS Plugin","id":"1639","title":"KMS Plugin"},"164":{"body":"If check mode looks good, create the server: # Create server\\nprovisioning server create --infra my-infra # Expected output:\\n# ✓ Creating server: dev-server-01\\n# ✓ Server created successfully\\n# ✓ IP Address: 192.168.1.100\\n# ✓ SSH access: ssh user@192.168.1.100","breadcrumbs":"First Deployment » Step 4: Create Server (Real)","id":"164","title":"Step 4: Create Server (Real)"},"1640":{"body":"30x faster than HTTP fallback Local file-based orchestration without network overhead. Check Status provisioning orch status [--data-dir ] # Examples\\nprovisioning orch status\\nprovisioning orch status --data-dir /custom/data List Tasks provisioning orch tasks [--status ] [--limit ] [--data-dir ] # Examples\\nprovisioning orch tasks\\nprovisioning orch tasks --status pending\\nprovisioning orch tasks --status running --limit 10 Validate Workflow provisioning orch validate [--strict] # Examples\\nprovisioning orch validate workflows/deployment.k\\nprovisioning orch validate workflows/deployment.k --strict Submit Workflow provisioning orch submit [--priority <0-100>] [--check] # Examples\\nprovisioning orch submit workflows/deployment.k\\nprovisioning orch submit workflows/critical.k --priority 90\\nprovisioning orch submit workflows/test.k --check Monitor Task provisioning orch monitor [--once] [--interval ] [--timeout ] # Examples\\nprovisioning orch monitor task-123\\nprovisioning orch monitor task-123 --once\\nprovisioning orch monitor task-456 --interval 5000 --timeout 600","breadcrumbs":"Plugin Usage Guide » Orchestrator Plugin","id":"1640","title":"Orchestrator Plugin"},"1641":{"body":"Check which plugins are installed: provisioning plugin status # Output:\\n# Provisioning Plugins Status\\n# ============================\\n# [OK] nu_plugin_auth - JWT authentication with keyring\\n# [OK] nu_plugin_kms - Multi-backend encryption\\n# [OK] nu_plugin_orchestrator - Local orchestrator (30x faster)\\n#\\n# All plugins loaded - using native high-performance mode","breadcrumbs":"Plugin Usage Guide » Plugin Status","id":"1641","title":"Plugin Status"},"1642":{"body":"provisioning plugin test # Runs quick tests on all installed plugins\\n# Output shows which plugins are responding","breadcrumbs":"Plugin Usage Guide » Testing Plugins","id":"1642","title":"Testing Plugins"},"1643":{"body":"provisioning plugin list # Shows all provisioning plugins registered with Nushell","breadcrumbs":"Plugin Usage Guide » List Registered Plugins","id":"1643","title":"List Registered Plugins"},"1644":{"body":"Operation With Plugin HTTP Fallback Speedup Auth verify ~10ms ~50ms 5x Auth 
login ~15ms ~100ms 7x KMS encrypt ~5-8ms ~50ms 10x KMS decrypt ~5-8ms ~50ms 10x Orch status ~1-5ms ~30ms 30x Orch tasks list ~2-10ms ~50ms 25x","breadcrumbs":"Plugin Usage Guide » Performance Comparison","id":"1644","title":"Performance Comparison"},"1645":{"body":"If plugins are not installed or fail to load, all commands automatically fall back to HTTP-based operations: # With plugins installed (fast)\\n$ provisioning auth verify\\nToken is valid # Without plugins (slower, but functional)\\n$ provisioning auth verify\\n[HTTP fallback mode]\\nToken is valid (slower) This ensures the system remains functional even if plugins aren\'t available.","breadcrumbs":"Plugin Usage Guide » Graceful Fallback","id":"1645","title":"Graceful Fallback"},"1646":{"body":"","breadcrumbs":"Plugin Usage Guide » Troubleshooting","id":"1646","title":"Troubleshooting"},"1647":{"body":"Make sure you: Have a fresh Nushell session Ran plugin add for all three plugins The plugin files are executable: chmod +x ~/.local/share/nushell/plugins/nu_plugin_*","breadcrumbs":"Plugin Usage Guide » Plugins not found after installation","id":"1647","title":"Plugins not found after installation"},"1648":{"body":"If you see \\"command not found\\" when running provisioning auth login, the auth plugin is not loaded. Run: plugin list | grep nu_plugin If you don\'t see the plugins, register them: plugin add ~/.local/share/nushell/plugins/nu_plugin_auth\\nplugin add ~/.local/share/nushell/plugins/nu_plugin_kms\\nplugin add ~/.local/share/nushell/plugins/nu_plugin_orchestrator","breadcrumbs":"Plugin Usage Guide » \\"Command not found\\" errors","id":"1648","title":"\\"Command not found\\" errors"},"1649":{"body":"Check the plugin logs: provisioning plugin test If a plugin fails, the system will automatically fall back to HTTP mode.","breadcrumbs":"Plugin Usage Guide » Plugins crash or are unresponsive","id":"1649","title":"Plugins crash or are unresponsive"},"165":{"body":"Check server status: # List all servers\\nprovisioning server list # Get detailed server info\\nprovisioning server info dev-server-01 # SSH to server (optional)\\nprovisioning server ssh dev-server-01","breadcrumbs":"First Deployment » Step 5: Verify Server","id":"165","title":"Step 5: Verify Server"},"1650":{"body":"All plugin commands are integrated into the main provisioning CLI: # Shortcuts available\\nprovisioning auth login admin # Full command\\nprovisioning login admin # Alias provisioning kms encrypt secret # Full command\\nprovisioning encrypt secret # Alias provisioning orch status # Full command\\nprovisioning orch-status # Alias","breadcrumbs":"Plugin Usage Guide » Integration with Provisioning CLI","id":"1650","title":"Integration with Provisioning CLI"},"1651":{"body":"","breadcrumbs":"Plugin Usage Guide » Advanced Configuration","id":"1651","title":"Advanced Configuration"},"1652":{"body":"For orchestrator operations, specify custom data directory: provisioning orch status --data-dir /custom/orchestrator/data\\nprovisioning orch tasks --data-dir /custom/orchestrator/data","breadcrumbs":"Plugin Usage Guide » Custom Data Directory","id":"1652","title":"Custom Data Directory"},"1653":{"body":"For auth operations with custom endpoint: provisioning auth login admin --url http://custom-auth-server:8081\\nprovisioning auth verify --url http://custom-auth-server:8081","breadcrumbs":"Plugin Usage Guide » Custom Auth URL","id":"1653","title":"Custom Auth URL"},"1654":{"body":"Specify which KMS backend to use: # Use Age encryption\\nprovisioning kms encrypt 
\\"data\\" --backend age # Use RustyVault\\nprovisioning kms encrypt \\"data\\" --backend rustyvault # Use AWS KMS\\nprovisioning kms encrypt \\"data\\" --backend aws # Decrypt with same backend\\nprovisioning kms decrypt $encrypted --backend age","breadcrumbs":"Plugin Usage Guide » KMS Backend Selection","id":"1654","title":"KMS Backend Selection"},"1655":{"body":"If you need to rebuild plugins: cd provisioning/core/plugins/nushell-plugins # Build auth plugin\\ncd nu_plugin_auth && cargo build --release && cd .. # Build KMS plugin\\ncd nu_plugin_kms && cargo build --release && cd .. # Build orchestrator plugin\\ncd nu_plugin_orchestrator && cargo build --release && cd .. # Run install script\\ncd ../..\\nnu install-and-register.nu","breadcrumbs":"Plugin Usage Guide » Building Plugins from Source","id":"1655","title":"Building Plugins from Source"},"1656":{"body":"The plugins follow Nushell\'s plugin protocol: Plugin Binary : Compiled Rust binary in target/release/ Registration : Via plugin add command IPC : Communication via Nushell\'s JSON protocol Fallback : HTTP API fallback if plugins unavailable","breadcrumbs":"Plugin Usage Guide » Architecture","id":"1656","title":"Architecture"},"1657":{"body":"Auth tokens are stored in system keyring (Keychain/Credential Manager/Secret Service) KMS keys are protected by the selected backend\'s security Orchestrator operations are local file-based (no network exposure) All operations are logged in provisioning audit logs","breadcrumbs":"Plugin Usage Guide » Security Notes","id":"1657","title":"Security Notes"},"1658":{"body":"For issues or questions: Check plugin status: provisioning plugin test Review logs: provisioning logs or /var/log/provisioning/ Test HTTP fallback by temporarily unregistering plugins Contact the provisioning team with plugin test output","breadcrumbs":"Plugin Usage Guide » Support","id":"1658","title":"Support"},"1659":{"body":"Status : Production Ready Date : 2025-11-19 Version : 1.0.0","breadcrumbs":"Secrets Management Guide » Secrets Management System - Configuration Guide","id":"1659","title":"Secrets Management System - Configuration Guide"},"166":{"body":"Install a task service on the server: # Check mode first\\nprovisioning taskserv create kubernetes --infra my-infra --check # Expected output:\\n# ✓ Validation passed\\n# ⚠ Check mode: No changes will be made\\n#\\n# Would install:\\n# - Kubernetes v1.28.0\\n# - Required dependencies: containerd, etcd\\n# - On servers: dev-server-01","breadcrumbs":"First Deployment » Step 6: Install Kubernetes (Check Mode)","id":"166","title":"Step 6: Install Kubernetes (Check Mode)"},"1660":{"body":"The provisioning system supports secure SSH key retrieval from multiple secret sources, eliminating hardcoded filesystem dependencies and enabling enterprise-grade security. SSH keys are retrieved from configured secret sources (SOPS, KMS, RustyVault) with automatic fallback to local-dev mode for development environments.","breadcrumbs":"Secrets Management Guide » Overview","id":"1660","title":"Overview"},"1661":{"body":"","breadcrumbs":"Secrets Management Guide » Secret Sources","id":"1661","title":"Secret Sources"},"1662":{"body":"Age-based encrypted secrets file with YAML structure. 
Pros : ✅ Age encryption (modern, performant) ✅ Easy to version in Git (encrypted) ✅ No external services required ✅ Simple YAML structure Cons : ❌ Requires Age key management ❌ No key rotation automation Environment Variables : PROVISIONING_SECRET_SOURCE=sops\\nPROVISIONING_SOPS_ENABLED=true\\nPROVISIONING_SOPS_SECRETS_FILE=/path/to/secrets.enc.yaml\\nPROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning\\n```plaintext **Secrets File Structure** (provisioning/secrets.enc.yaml): ```yaml\\n# Encrypted with sops\\nssh: web-01: ubuntu: /path/to/id_rsa root: /path/to/root_id_rsa db-01: postgres: /path/to/postgres_id_rsa\\n```plaintext **Setup Instructions**: ```bash\\n# 1. Install sops and age\\nbrew install sops age # 2. Generate Age key (store securely!)\\nage-keygen -o $HOME/.age/provisioning # 3. Create encrypted secrets file\\ncat > secrets.yaml << \'EOF\'\\nssh: web-01: ubuntu: ~/.ssh/provisioning_web01 db-01: postgres: ~/.ssh/provisioning_db01\\nEOF # 4. Encrypt with sops\\nsops -e -i secrets.yaml # 5. Rename to enc version\\nmv secrets.yaml provisioning/secrets.enc.yaml # 6. Configure environment\\nexport PROVISIONING_SECRET_SOURCE=sops\\nexport PROVISIONING_SOPS_SECRETS_FILE=$(pwd)/provisioning/secrets.enc.yaml\\nexport PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning\\n```plaintext ### 2. KMS (Key Management Service) AWS KMS or compatible key management service. **Pros**: - ✅ Cloud-native security\\n- ✅ Automatic key rotation\\n- ✅ Audit logging built-in\\n- ✅ High availability **Cons**: - ❌ Requires AWS account/credentials\\n- ❌ API calls add latency (~50ms)\\n- ❌ Cost per API call **Environment Variables**: ```bash\\nPROVISIONING_SECRET_SOURCE=kms\\nPROVISIONING_KMS_ENABLED=true\\nPROVISIONING_KMS_REGION=us-east-1\\n```plaintext **Secret Storage Pattern**: ```plaintext\\nprovisioning/ssh-keys/{hostname}/{username}\\n```plaintext **Setup Instructions**: ```bash\\n# 1. Create KMS key (one-time)\\naws kms create-key \\\\ --description \\"Provisioning SSH Keys\\" \\\\ --region us-east-1 # 2. Store SSH keys in Secrets Manager\\naws secretsmanager create-secret \\\\ --name provisioning/ssh-keys/web-01/ubuntu \\\\ --secret-string \\"$(cat ~/.ssh/provisioning_web01)\\" \\\\ --region us-east-1 # 3. Configure environment\\nexport PROVISIONING_SECRET_SOURCE=kms\\nexport PROVISIONING_KMS_REGION=us-east-1 # 4. Ensure AWS credentials available\\nexport AWS_PROFILE=provisioning\\n# or\\nexport AWS_ACCESS_KEY_ID=...\\nexport AWS_SECRET_ACCESS_KEY=...\\n```plaintext ### 3. RustyVault (Hashicorp Vault-Compatible) Self-hosted or managed Vault instance for secrets. **Pros**: - ✅ Self-hosted option\\n- ✅ Fine-grained access control\\n- ✅ Multiple authentication methods\\n- ✅ Easy key rotation **Cons**: - ❌ Requires Vault instance\\n- ❌ More operational overhead\\n- ❌ Network latency **Environment Variables**: ```bash\\nPROVISIONING_SECRET_SOURCE=vault\\nPROVISIONING_VAULT_ENABLED=true\\nPROVISIONING_VAULT_ADDRESS=http://localhost:8200\\nPROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...\\n```plaintext **Secret Storage Pattern**: ```plaintext\\nGET /v1/secret/ssh-keys/{hostname}/{username}\\n# Returns: {\\"key_content\\": \\"-----BEGIN OPENSSH PRIVATE KEY-----...\\"}\\n```plaintext **Setup Instructions**: ```bash\\n# 1. Start Vault (if not already running)\\ndocker run -p 8200:8200 \\\\ -e VAULT_DEV_ROOT_TOKEN_ID=provisioning \\\\ vault server -dev # 2. Create KV v2 mount (if not exists)\\nvault secrets enable -version=2 -path=secret kv # 3. 
Store SSH key\\nvault kv put secret/ssh-keys/web-01/ubuntu \\\\ key_content=@~/.ssh/provisioning_web01 # 4. Configure environment\\nexport PROVISIONING_SECRET_SOURCE=vault\\nexport PROVISIONING_VAULT_ADDRESS=http://localhost:8200\\nexport PROVISIONING_VAULT_TOKEN=provisioning # 5. Create AppRole for production\\nvault auth enable approle\\nvault write auth/approle/role/provisioning \\\\ token_ttl=1h \\\\ token_max_ttl=4h\\nvault read auth/approle/role/provisioning/role-id\\nvault write -f auth/approle/role/provisioning/secret-id\\n```plaintext ### 4. Local-Dev (Fallback) Local filesystem SSH keys (development only). **Pros**: - ✅ No setup required\\n- ✅ Fast (local filesystem)\\n- ✅ Works offline **Cons**: - ❌ NOT for production\\n- ❌ Hardcoded filesystem dependency\\n- ❌ No key rotation **Environment Variables**: ```bash\\nPROVISIONING_ENVIRONMENT=local-dev\\n```plaintext **Behavior**: Standard paths checked (in order): 1. `$HOME/.ssh/id_rsa`\\n2. `$HOME/.ssh/id_ed25519`\\n3. `$HOME/.ssh/provisioning`\\n4. `$HOME/.ssh/provisioning_rsa` ## Auto-Detection Logic When `PROVISIONING_SECRET_SOURCE` is not explicitly set, the system auto-detects in this order: ```plaintext\\n1. PROVISIONING_SOPS_ENABLED=true or PROVISIONING_SOPS_SECRETS_FILE set? → Use SOPS\\n2. PROVISIONING_KMS_ENABLED=true or PROVISIONING_KMS_REGION set? → Use KMS\\n3. PROVISIONING_VAULT_ENABLED=true or both VAULT_ADDRESS and VAULT_TOKEN set? → Use Vault\\n4. Otherwise → Use local-dev (with warnings in production environments)\\n```plaintext ## Configuration Matrix | Secret Source | Env Variables | Enabled in |\\n|---|---|---|\\n| **SOPS** | `PROVISIONING_SOPS_*` | Development, Staging, Production |\\n| **KMS** | `PROVISIONING_KMS_*` | Staging, Production (with AWS) |\\n| **Vault** | `PROVISIONING_VAULT_*` | Development, Staging, Production |\\n| **Local-dev** | `PROVISIONING_ENVIRONMENT=local-dev` | Development only | ## Production Recommended Setup ### Minimal Setup (Single Source) ```bash\\n# Using Vault (recommended for self-hosted)\\nexport PROVISIONING_SECRET_SOURCE=vault\\nexport PROVISIONING_VAULT_ADDRESS=https://vault.example.com:8200\\nexport PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...\\nexport PROVISIONING_ENVIRONMENT=production\\n```plaintext ### Enhanced Setup (Fallback Chain) ```bash\\n# Primary: Vault\\nexport PROVISIONING_VAULT_ADDRESS=https://vault.primary.com:8200\\nexport PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ... 
# Fallback: SOPS\\nexport PROVISIONING_SOPS_SECRETS_FILE=/etc/provisioning/secrets.enc.yaml\\nexport PROVISIONING_SOPS_AGE_KEY_FILE=/etc/provisioning/.age/key # Environment\\nexport PROVISIONING_ENVIRONMENT=production\\nexport PROVISIONING_SECRET_SOURCE=vault # Explicit: use Vault first\\n```plaintext ### High-Availability Setup ```bash\\n# Use KMS (managed service)\\nexport PROVISIONING_SECRET_SOURCE=kms\\nexport PROVISIONING_KMS_REGION=us-east-1\\nexport AWS_PROFILE=provisioning-admin # Or use Vault with HA\\nexport PROVISIONING_VAULT_ADDRESS=https://vault-ha.example.com:8200\\nexport PROVISIONING_VAULT_NAMESPACE=provisioning\\nexport PROVISIONING_ENVIRONMENT=production\\n```plaintext ## Validation & Testing ### Check Configuration ```bash\\n# Nushell\\nprovisioning secrets status # Show secret source and configuration\\nprovisioning secrets validate # Detailed diagnostics\\nprovisioning secrets diagnose\\n```plaintext ### Test SSH Key Retrieval ```bash\\n# Test specific host/user\\nprovisioning secrets get-key web-01 ubuntu # Test all configured hosts\\nprovisioning secrets validate-all # Dry-run SSH with retrieved key\\nprovisioning ssh --test-key web-01 ubuntu\\n```plaintext ## Migration Path ### From Local-Dev to SOPS ```bash\\n# 1. Create SOPS secrets file with existing keys\\ncat > secrets.yaml << \'EOF\'\\nssh: web-01: ubuntu: ~/.ssh/provisioning_web01 db-01: postgres: ~/.ssh/provisioning_db01\\nEOF # 2. Encrypt with Age\\nsops -e -i secrets.yaml # 3. Move to repo\\nmv secrets.yaml provisioning/secrets.enc.yaml # 4. Update environment\\nexport PROVISIONING_SECRET_SOURCE=sops\\nexport PROVISIONING_SOPS_SECRETS_FILE=$(pwd)/provisioning/secrets.enc.yaml\\nexport PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning\\n```plaintext ### From SOPS to Vault ```bash\\n# 1. Decrypt SOPS file\\nsops -d provisioning/secrets.enc.yaml > /tmp/secrets.yaml # 2. Import to Vault\\nvault kv put secret/ssh-keys/web-01/ubuntu key_content=@~/.ssh/provisioning_web01 # 3. Update environment\\nexport PROVISIONING_SECRET_SOURCE=vault\\nexport PROVISIONING_VAULT_ADDRESS=http://vault.example.com:8200\\nexport PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ... # 4. Validate retrieval works\\nprovisioning secrets validate-all\\n```plaintext ## Security Best Practices ### 1. Never Commit Secrets ```bash\\n# Add to .gitignore\\necho \\"provisioning/secrets.enc.yaml\\" >> .gitignore\\necho \\".age/provisioning\\" >> .gitignore\\necho \\".vault-token\\" >> .gitignore\\n```plaintext ### 2. Rotate Keys Regularly ```bash\\n# SOPS: Rotate Age key\\nage-keygen -o ~/.age/provisioning.new\\n# Update all secrets with new key # KMS: Enable automatic rotation\\naws kms enable-key-rotation --key-id alias/provisioning # Vault: Set TTL on secrets\\nvault write -f secret/metadata/ssh-keys/web-01/ubuntu \\\\ delete_version_after=2160h # 90 days\\n```plaintext ### 3. Restrict Access ```bash\\n# SOPS: Protect Age key\\nchmod 600 ~/.age/provisioning # KMS: Restrict IAM permissions\\naws iam put-user-policy --user-name provisioning \\\\ --policy-name ProvisioningSecretsAccess \\\\ --policy-document file://kms-policy.json # Vault: Use AppRole for applications\\nvault write auth/approle/role/provisioning \\\\ token_ttl=1h \\\\ secret_id_ttl=30m\\n```plaintext ### 4. 
Audit Logging ```bash\\n# KMS: Enable CloudTrail\\naws cloudtrail put-event-selectors \\\\ --trail-name provisioning-trail \\\\ --event-selectors ReadWriteType=All # Vault: Check audit logs\\nvault audit list # SOPS: Version control (encrypted)\\ngit log -p provisioning/secrets.enc.yaml\\n```plaintext ## Troubleshooting ### SOPS Issues ```bash\\n# Test Age decryption\\nsops -d provisioning/secrets.enc.yaml # Verify Age key\\nage-keygen -l ~/.age/provisioning # Regenerate if needed\\nrm ~/.age/provisioning\\nage-keygen -o ~/.age/provisioning\\n```plaintext ### KMS Issues ```bash\\n# Test AWS credentials\\naws sts get-caller-identity # Check KMS key permissions\\naws kms describe-key --key-id alias/provisioning # List secrets\\naws secretsmanager list-secrets --filters Name=name,Values=provisioning\\n```plaintext ### Vault Issues ```bash\\n# Check Vault status\\nvault status # Test authentication\\nvault token lookup # List secrets\\nvault kv list secret/ssh-keys/ # Check audit logs\\nvault audit list\\nvault read sys/audit\\n```plaintext ## FAQ **Q: Can I use multiple secret sources simultaneously?**\\nA: Yes, configure multiple sources and set `PROVISIONING_SECRET_SOURCE` to specify primary. If primary fails, manual fallback to secondary is supported. **Q: What happens if secret retrieval fails?**\\nA: System logs the error and fails fast. No automatic fallback to local filesystem (for security). **Q: Can I cache SSH keys?**\\nA: Currently not, keys are retrieved fresh for each operation. Use local caching at OS level (ssh-agent) if needed. **Q: How do I rotate keys?**\\nA: Update the secret in your configured source (SOPS/KMS/Vault) and retrieve fresh on next operation. **Q: Is local-dev mode secure?**\\nA: No - it\'s development only. Production requires SOPS/KMS/Vault. ## Architecture ```plaintext\\nSSH Operation ↓\\nSecretsManager (Nushell/Rust) ↓\\n[Detect Source] ↓\\n┌─────────────────────────────────────┐\\n│ SOPS KMS Vault LocalDev\\n│ (Encrypted (AWS KMS (Self- (Filesystem\\n│ Secrets) Service) Hosted) Dev Only)\\n│\\n└─────────────────────────────────────┘ ↓\\nReturn SSH Key Path/Content ↓\\nSSH Operation Completes\\n```plaintext ## Integration with SSH Utilities SSH operations automatically use secrets manager: ```nushell\\n# Automatic secret retrieval\\nssh-cmd-smart $settings $server false \\"command\\" $ip\\n# Internally:\\n# 1. Determine secret source\\n# 2. Retrieve SSH key for server.installer_user@ip\\n# 3. Execute SSH with retrieved key\\n# 4. Cleanup sensitive data # Batch operations also integrate\\nssh-batch-execute $servers $settings \\"command\\"\\n# Per-host: Retrieves key → executes → cleans up\\n```plaintext --- **For Support**: See `docs/user/TROUBLESHOOTING_GUIDE.md`\\n**For Integration**: See `provisioning/core/nulib/lib_provisioning/platform/secrets.nu`","breadcrumbs":"Secrets Management Guide » 1. SOPS (Secrets Operations)","id":"1662","title":"1. SOPS (Secrets Operations)"},"1663":{"body":"","breadcrumbs":"Auth Quick Reference » Auth Quick Reference","id":"1663","title":"Auth Quick Reference"},"1664":{"body":"","breadcrumbs":"Config Encryption Quick Reference » Config Encryption Quick Reference","id":"1664","title":"Config Encryption Quick Reference"},"1665":{"body":"A unified Key Management Service for the Provisioning platform with support for multiple backends. 
Source : provisioning/platform/kms-service/","breadcrumbs":"KMS Service » KMS Service - Key Management Service","id":"1665","title":"KMS Service - Key Management Service"},"1666":{"body":"Age : Fast, offline encryption (development) RustyVault : Self-hosted Vault-compatible API Cosmian KMS : Enterprise-grade with confidential computing AWS KMS : Cloud-native key management HashiCorp Vault : Enterprise secrets management","breadcrumbs":"KMS Service » Supported Backends","id":"1666","title":"Supported Backends"},"1667":{"body":"┌─────────────────────────────────────────────────────────┐\\n│ KMS Service │\\n├─────────────────────────────────────────────────────────┤\\n│ REST API (Axum) │\\n│ ├─ /api/v1/kms/encrypt POST │\\n│ ├─ /api/v1/kms/decrypt POST │\\n│ ├─ /api/v1/kms/generate-key POST │\\n│ ├─ /api/v1/kms/status GET │\\n│ └─ /api/v1/kms/health GET │\\n├─────────────────────────────────────────────────────────┤\\n│ Unified KMS Service Interface │\\n├─────────────────────────────────────────────────────────┤\\n│ Backend Implementations │\\n│ ├─ Age Client (local files) │\\n│ ├─ RustyVault Client (self-hosted) │\\n│ └─ Cosmian KMS Client (enterprise) │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ## Quick Start ### Development Setup (Age) ```bash\\n# 1. Generate Age keys\\nmkdir -p ~/.config/provisioning/age\\nage-keygen -o ~/.config/provisioning/age/private_key.txt\\nage-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt # 2. Set environment\\nexport PROVISIONING_ENV=dev # 3. Start KMS service\\ncd provisioning/platform/kms-service\\ncargo run --bin kms-service\\n```plaintext ### Production Setup (Cosmian) ```bash\\n# Set environment variables\\nexport PROVISIONING_ENV=prod\\nexport COSMIAN_KMS_URL=https://your-kms.example.com\\nexport COSMIAN_API_KEY=your-api-key-here # Start KMS service\\ncargo run --bin kms-service\\n```plaintext ## REST API Examples ### Encrypt Data ```bash\\ncurl -X POST http://localhost:8082/api/v1/kms/encrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"plaintext\\": \\"SGVsbG8sIFdvcmxkIQ==\\", \\"context\\": \\"env=prod,service=api\\" }\'\\n```plaintext ### Decrypt Data ```bash\\ncurl -X POST http://localhost:8082/api/v1/kms/decrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"ciphertext\\": \\"...\\", \\"context\\": \\"env=prod,service=api\\" }\'\\n```plaintext ## Nushell CLI Integration ```bash\\n# Encrypt data\\n\\"secret-data\\" | kms encrypt\\n\\"api-key\\" | kms encrypt --context \\"env=prod,service=api\\" # Decrypt data\\n$ciphertext | kms decrypt # Generate data key (Cosmian only)\\nkms generate-key # Check service status\\nkms status\\nkms health # Encrypt/decrypt files\\nkms encrypt-file config.yaml\\nkms decrypt-file config.yaml.enc\\n```plaintext ## Backend Comparison | Feature | Age | RustyVault | Cosmian KMS | AWS KMS | Vault |\\n|---------|-----|------------|-------------|---------|-------|\\n| **Setup** | Simple | Self-hosted | Server setup | AWS account | Enterprise |\\n| **Speed** | Very fast | Fast | Fast | Fast | Fast |\\n| **Network** | No | Yes | Yes | Yes | Yes |\\n| **Key Rotation** | Manual | Automatic | Automatic | Automatic | Automatic |\\n| **Data Keys** | No | Yes | Yes | Yes | Yes |\\n| **Audit Logging** | No | Yes | Full | Full | Full |\\n| **Confidential** | No | No | Yes (SGX/SEV) | No | No |\\n| **License** | MIT | Apache 2.0 | Proprietary | Proprietary | BSL/Enterprise |\\n| **Cost** | Free | Free | Paid | Paid | Paid |\\n| 
**Use Case** | Dev/Test | Self-hosted | Privacy | AWS Cloud | Enterprise | ## Integration Points 1. **Config Encryption** (SOPS Integration)\\n2. **Dynamic Secrets** (Provider API Keys)\\n3. **SSH Key Management**\\n4. **Orchestrator** (Workflow Data)\\n5. **Control Center** (Audit Logs) ## Deployment ### Docker ```dockerfile\\nFROM rust:1.70 as builder\\nWORKDIR /app\\nCOPY . .\\nRUN cargo build --release FROM debian:bookworm-slim\\nRUN apt-get update && \\\\ apt-get install -y ca-certificates && \\\\ rm -rf /var/lib/apt/lists/*\\nCOPY --from=builder /app/target/release/kms-service /usr/local/bin/\\nENTRYPOINT [\\"kms-service\\"]\\n```plaintext ### Kubernetes ```yaml\\napiVersion: apps/v1\\nkind: Deployment\\nmetadata: name: kms-service\\nspec: replicas: 2 template: spec: containers: - name: kms-service image: provisioning/kms-service:latest env: - name: PROVISIONING_ENV value: \\"prod\\" - name: COSMIAN_KMS_URL value: \\"https://kms.example.com\\" ports: - containerPort: 8082\\n```plaintext ## Security Best Practices 1. **Development**: Use Age for dev/test only, never for production secrets\\n2. **Production**: Always use Cosmian KMS with TLS verification enabled\\n3. **API Keys**: Never hardcode, use environment variables\\n4. **Key Rotation**: Enable automatic rotation (90 days recommended)\\n5. **Context Encryption**: Always use encryption context (AAD)\\n6. **Network Access**: Restrict KMS service access with firewall rules\\n7. **Monitoring**: Enable health checks and monitor operation metrics ## Related Documentation - **User Guide**: [KMS Guide](../user/RUSTYVAULT_KMS_GUIDE.md)\\n- **Migration**: [KMS Simplification](../migration/KMS_SIMPLIFICATION.md)","breadcrumbs":"KMS Service » Architecture","id":"1667","title":"Architecture"},"1668":{"body":"Complete guide to using Gitea integration for workspace management, extension distribution, and collaboration. Version: 1.0.0 Last Updated: 2025-10-06","breadcrumbs":"Gitea Integration Guide » Gitea Integration Guide","id":"1668","title":"Gitea Integration Guide"},"1669":{"body":"Overview Setup Workspace Git Integration Workspace Locking Extension Publishing Service Management API Reference Troubleshooting","breadcrumbs":"Gitea Integration Guide » Table of Contents","id":"1669","title":"Table of Contents"},"167":{"body":"Proceed with installation: # Install Kubernetes\\nprovisioning taskserv create kubernetes --infra my-infra --wait # This will:\\n# 1. Check dependencies\\n# 2. Install containerd\\n# 3. Install etcd\\n# 4. Install Kubernetes\\n# 5. 
Configure and start services # Monitor progress\\nprovisioning workflow monitor ","breadcrumbs":"First Deployment » Step 7: Install Kubernetes (Real)","id":"167","title":"Step 7: Install Kubernetes (Real)"},"1670":{"body":"The Gitea integration provides: Workspace Git Integration : Version control for workspaces Distributed Locking : Prevent concurrent workspace modifications Extension Distribution : Publish and download extensions via releases Collaboration : Share workspaces and extensions across teams Service Management : Deploy and manage local Gitea instance","breadcrumbs":"Gitea Integration Guide » Overview","id":"1670","title":"Overview"},"1671":{"body":"┌─────────────────────────────────────────────────────────┐\\n│ Provisioning System │\\n├─────────────────────────────────────────────────────────┤\\n│ │\\n│ ┌────────────┐ ┌──────────────┐ ┌─────────────────┐ │\\n│ │ Workspace │ │ Extension │ │ Locking │ │\\n│ │ Git │ │ Publishing │ │ (Issues) │ │\\n│ └─────┬──────┘ └──────┬───────┘ └────────┬────────┘ │\\n│ │ │ │ │\\n│ └────────────────┼─────────────────────┘ │\\n│ │ │\\n│ ┌──────▼──────┐ │\\n│ │ Gitea API │ │\\n│ │ Client │ │\\n│ └──────┬──────┘ │\\n│ │ │\\n└─────────────────────────┼────────────────────────────────┘ │ ┌───────▼────────┐ │ Gitea Service │ │ (Local/Remote)│ └────────────────┘\\n```plaintext --- ## Setup ### Prerequisites - **Nushell 0.107.1+**\\n- **Git** installed and configured\\n- **Docker** (for local Gitea deployment) or access to remote Gitea instance\\n- **SOPS** (for encrypted token storage) ### Configuration #### 1. Add Gitea Configuration to KCL Edit your `provisioning/kcl/modes.k` or workspace config: ```kcl\\nimport provisioning.gitea as gitea # Local Docker deployment\\n_gitea_config = gitea.GiteaConfig { mode = \\"local\\" local = gitea.LocalGitea { enabled = True deployment = \\"docker\\" port = 3000 auto_start = True docker = gitea.DockerGitea { image = \\"gitea/gitea:1.21\\" container_name = \\"provisioning-gitea\\" } } auth = gitea.GiteaAuth { token_path = \\"~/.provisioning/secrets/gitea-token.enc\\" username = \\"provisioning\\" }\\n} # Or remote Gitea instance\\n_gitea_remote = gitea.GiteaConfig { mode = \\"remote\\" remote = gitea.RemoteGitea { enabled = True url = \\"https://gitea.example.com\\" api_url = \\"https://gitea.example.com/api/v1\\" } auth = gitea.GiteaAuth { token_path = \\"~/.provisioning/secrets/gitea-token.enc\\" username = \\"myuser\\" }\\n}\\n```plaintext #### 2. Create Gitea Access Token For local Gitea: 1. Start Gitea: `provisioning gitea start`\\n2. Open \\n3. Register admin account\\n4. Go to Settings → Applications → Generate New Token\\n5. Save token to encrypted file: ```bash\\n# Create encrypted token file\\necho \\"your-gitea-token\\" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-token.enc\\n```plaintext For remote Gitea: 1. Login to your Gitea instance\\n2. Generate personal access token\\n3. Save encrypted as above #### 3. Verify Setup ```bash\\n# Check Gitea status\\nprovisioning gitea status # Validate token\\nprovisioning gitea auth validate # Show current user\\nprovisioning gitea user\\n```plaintext --- ## Workspace Git Integration ### Initialize Workspace with Git When creating a new workspace, enable git integration: ```bash\\n# Initialize new workspace with Gitea\\nprovisioning workspace init my-workspace --git --remote gitea # Or initialize existing workspace\\ncd workspace_my-workspace\\nprovisioning gitea workspace init . my-workspace --remote gitea\\n```plaintext This will: 1. 
Initialize git repository in workspace\\n2. Create repository on Gitea (`workspaces/my-workspace`)\\n3. Add remote origin\\n4. Push initial commit ### Clone Existing Workspace ```bash\\n# Clone from Gitea\\nprovisioning workspace clone workspaces/my-workspace ./workspace_my-workspace # Or using full identifier\\nprovisioning workspace clone my-workspace ./workspace_my-workspace\\n```plaintext ### Push/Pull Changes ```bash\\n# Push workspace changes\\ncd workspace_my-workspace\\nprovisioning workspace push --message \\"Updated infrastructure configs\\" # Pull latest changes\\nprovisioning workspace pull # Sync (pull + push)\\nprovisioning workspace sync\\n```plaintext ### Branch Management ```bash\\n# Create branch\\nprovisioning workspace branch create feature-new-cluster # Switch branch\\nprovisioning workspace branch switch feature-new-cluster # List branches\\nprovisioning workspace branch list # Delete branch\\nprovisioning workspace branch delete feature-new-cluster\\n```plaintext ### Git Status ```bash\\n# Get workspace git status\\nprovisioning workspace git status # Show uncommitted changes\\nprovisioning workspace git diff # Show staged changes\\nprovisioning workspace git diff --staged\\n```plaintext --- ## Workspace Locking Distributed locking prevents concurrent modifications to workspaces using Gitea issues. ### Lock Types - **read**: Multiple readers allowed, blocks writers\\n- **write**: Exclusive access, blocks all other locks\\n- **deploy**: Exclusive access for deployments ### Acquire Lock ```bash\\n# Acquire write lock\\nprovisioning gitea lock acquire my-workspace write \\\\ --operation \\"Deploying servers\\" \\\\ --expiry \\"2025-10-06T14:00:00Z\\" # Output:\\n# ✓ Lock acquired for workspace: my-workspace\\n# Lock ID: 42\\n# Type: write\\n# User: provisioning\\n```plaintext ### Check Lock Status ```bash\\n# List locks for workspace\\nprovisioning gitea lock list my-workspace # List all active locks\\nprovisioning gitea lock list # Get lock details\\nprovisioning gitea lock info my-workspace 42\\n```plaintext ### Release Lock ```bash\\n# Release lock\\nprovisioning gitea lock release my-workspace 42\\n```plaintext ### Force Release Lock (Admin) ```bash\\n# Force release stuck lock\\nprovisioning gitea lock force-release my-workspace 42 \\\\ --reason \\"Deployment failed, releasing lock\\"\\n```plaintext ### Automatic Locking Use `with-workspace-lock` for automatic lock management: ```nushell\\nuse lib_provisioning/gitea/locking.nu * with-workspace-lock \\"my-workspace\\" \\"deploy\\" \\"Server deployment\\" { # Your deployment code here # Lock automatically released on completion or error\\n}\\n```plaintext ### Lock Cleanup ```bash\\n# Cleanup expired locks\\nprovisioning gitea lock cleanup\\n```plaintext --- ## Extension Publishing Publish taskservs, providers, and clusters as versioned releases on Gitea. ### Publish Extension ```bash\\n# Publish taskserv\\nprovisioning gitea extension publish \\\\ ./extensions/taskservs/database/postgres \\\\ 1.2.0 \\\\ --release-notes \\"Added connection pooling support\\" # Publish provider\\nprovisioning gitea extension publish \\\\ ./extensions/providers/aws_prov \\\\ 2.0.0 \\\\ --prerelease # Publish cluster\\nprovisioning gitea extension publish \\\\ ./extensions/clusters/buildkit \\\\ 1.0.0\\n```plaintext This will: 1. Validate extension structure\\n2. Create git tag (if workspace is git repo)\\n3. Package extension as `.tar.gz`\\n4. Create Gitea release\\n5. 
Upload package as release asset ### List Published Extensions ```bash\\n# List all extensions\\nprovisioning gitea extension list # Filter by type\\nprovisioning gitea extension list --type taskserv\\nprovisioning gitea extension list --type provider\\nprovisioning gitea extension list --type cluster\\n```plaintext ### Download Extension ```bash\\n# Download specific version\\nprovisioning gitea extension download postgres 1.2.0 \\\\ --destination ./extensions/taskservs/database # Extension is downloaded and extracted automatically\\n```plaintext ### Extension Metadata ```bash\\n# Get extension information\\nprovisioning gitea extension info postgres 1.2.0\\n```plaintext ### Publishing Workflow ```bash\\n# 1. Make changes to extension\\ncd extensions/taskservs/database/postgres # 2. Update version in kcl/kcl.mod\\n# 3. Update CHANGELOG.md # 4. Commit changes\\ngit add .\\ngit commit -m \\"Release v1.2.0\\" # 5. Publish to Gitea\\nprovisioning gitea extension publish . 1.2.0\\n```plaintext --- ## Service Management ### Start/Stop Gitea ```bash\\n# Start Gitea (local mode)\\nprovisioning gitea start # Stop Gitea\\nprovisioning gitea stop # Restart Gitea\\nprovisioning gitea restart\\n```plaintext ### Check Status ```bash\\n# Get service status\\nprovisioning gitea status # Output:\\n# Gitea Status:\\n# Mode: local\\n# Deployment: docker\\n# Running: true\\n# Port: 3000\\n# URL: http://localhost:3000\\n# Container: provisioning-gitea\\n# Health: ✓ OK\\n```plaintext ### View Logs ```bash\\n# View recent logs\\nprovisioning gitea logs # Follow logs\\nprovisioning gitea logs --follow # Show specific number of lines\\nprovisioning gitea logs --lines 200\\n```plaintext ### Install Gitea Binary ```bash\\n# Install latest version\\nprovisioning gitea install # Install specific version\\nprovisioning gitea install 1.21.0 # Custom install directory\\nprovisioning gitea install --install-dir ~/bin\\n```plaintext --- ## API Reference ### Repository Operations ```nushell\\nuse lib_provisioning/gitea/api_client.nu * # Create repository\\ncreate-repository \\"my-org\\" \\"my-repo\\" \\"Description\\" true # Get repository\\nget-repository \\"my-org\\" \\"my-repo\\" # Delete repository\\ndelete-repository \\"my-org\\" \\"my-repo\\" --force # List repositories\\nlist-repositories \\"my-org\\"\\n```plaintext ### Release Operations ```nushell\\n# Create release\\ncreate-release \\"my-org\\" \\"my-repo\\" \\"v1.0.0\\" \\"Release Name\\" \\"Notes\\" # Upload asset\\nupload-release-asset \\"my-org\\" \\"my-repo\\" 123 \\"./file.tar.gz\\" # Get release\\nget-release-by-tag \\"my-org\\" \\"my-repo\\" \\"v1.0.0\\" # List releases\\nlist-releases \\"my-org\\" \\"my-repo\\"\\n```plaintext ### Workspace Operations ```nushell\\nuse lib_provisioning/gitea/workspace_git.nu * # Initialize workspace git\\ninit-workspace-git \\"./workspace_test\\" \\"test\\" --remote \\"gitea\\" # Clone workspace\\nclone-workspace \\"workspaces/my-workspace\\" \\"./workspace_my-workspace\\" # Push changes\\npush-workspace \\"./workspace_my-workspace\\" \\"Updated configs\\" # Pull changes\\npull-workspace \\"./workspace_my-workspace\\"\\n```plaintext ### Locking Operations ```nushell\\nuse lib_provisioning/gitea/locking.nu * # Acquire lock\\nlet lock = acquire-workspace-lock \\"my-workspace\\" \\"write\\" \\"Deployment\\" # Release lock\\nrelease-workspace-lock \\"my-workspace\\" $lock.lock_id # Check if locked\\nis-workspace-locked \\"my-workspace\\" \\"write\\" # List locks\\nlist-workspace-locks \\"my-workspace\\"\\n```plaintext --- ## 
Troubleshooting ### Gitea Not Starting **Problem**: `provisioning gitea start` fails **Solutions**: ```bash\\n# Check Docker status\\ndocker ps # Check if port is in use\\nlsof -i :3000 # Check Gitea logs\\nprovisioning gitea logs # Remove old container\\ndocker rm -f provisioning-gitea\\nprovisioning gitea start\\n```plaintext ### Token Authentication Failed **Problem**: `provisioning gitea auth validate` returns false **Solutions**: ```bash\\n# Verify token file exists\\nls ~/.provisioning/secrets/gitea-token.enc # Test decryption\\nsops --decrypt ~/.provisioning/secrets/gitea-token.enc # Regenerate token in Gitea UI\\n# Save new token\\necho \\"new-token\\" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-token.enc\\n```plaintext ### Cannot Push to Repository **Problem**: Git push fails with authentication error **Solutions**: ```bash\\n# Check remote URL\\ncd workspace_my-workspace\\ngit remote -v # Reconfigure remote with token\\ngit remote set-url origin http://username:token@localhost:3000/org/repo.git # Or use SSH\\ngit remote set-url origin git@localhost:workspaces/my-workspace.git\\n```plaintext ### Lock Already Exists **Problem**: Cannot acquire lock, workspace already locked **Solutions**: ```bash\\n# Check active locks\\nprovisioning gitea lock list my-workspace # Get lock details\\nprovisioning gitea lock info my-workspace 42 # If lock is stale, force release\\nprovisioning gitea lock force-release my-workspace 42 --reason \\"Stale lock\\"\\n```plaintext ### Extension Validation Failed **Problem**: Extension publishing fails validation **Solutions**: ```bash\\n# Check extension structure\\nls -la extensions/taskservs/myservice/\\n# Required:\\n# - kcl/kcl.mod\\n# - kcl/*.k (main schema file) # Verify kcl.mod format\\ncat extensions/taskservs/myservice/kcl/kcl.mod # Should have:\\n# [package]\\n# name = \\"myservice\\"\\n# version = \\"1.0.0\\"\\n```plaintext ### Docker Volume Permissions **Problem**: Gitea Docker container has permission errors **Solutions**: ```bash\\n# Fix data directory permissions\\nsudo chown -R 1000:1000 ~/.provisioning/gitea # Or recreate with correct permissions\\nprovisioning gitea stop --remove\\nrm -rf ~/.provisioning/gitea\\nprovisioning gitea start\\n```plaintext --- ## Best Practices ### Workspace Management 1. **Always use locking** for concurrent operations\\n2. **Commit frequently** with descriptive messages\\n3. **Use branches** for experimental changes\\n4. **Sync before operations** to get latest changes ### Extension Publishing 1. **Follow semantic versioning** (MAJOR.MINOR.PATCH)\\n2. **Update CHANGELOG.md** for each release\\n3. **Test extensions** before publishing\\n4. **Use prerelease flag** for beta versions ### Security 1. **Encrypt tokens** with SOPS\\n2. **Use private repositories** for sensitive workspaces\\n3. **Rotate tokens** regularly\\n4. **Audit lock history** via Gitea issues ### Performance 1. **Cleanup expired locks** periodically\\n2. **Use shallow clones** for large workspaces\\n3. **Archive old releases** to reduce storage\\n4. 
**Monitor Gitea resources** for local deployments --- ## Advanced Usage ### Custom Gitea Deployment Edit `docker-compose.yml`: ```yaml\\nservices: gitea: image: gitea/gitea:1.21 environment: - GITEA__server__DOMAIN=gitea.example.com - GITEA__server__ROOT_URL=https://gitea.example.com # Add custom settings volumes: - /custom/path/gitea:/data\\n```plaintext ### Webhooks Integration Configure webhooks for automated workflows: ```kcl\\nimport provisioning.gitea as gitea _webhook = gitea.GiteaWebhook { url = \\"https://provisioning.example.com/api/webhooks/gitea\\" events = [\\"push\\", \\"pull_request\\", \\"release\\"] secret = \\"webhook-secret\\"\\n}\\n```plaintext ### Batch Extension Publishing ```bash\\n# Publish all taskservs with same version\\nprovisioning gitea extension publish-batch \\\\ ./extensions/taskservs \\\\ 1.0.0 \\\\ --extension-type taskserv\\n```plaintext --- ## References - **Gitea API Documentation**: \\n- **KCL Schema**: `/Users/Akasha/project-provisioning/provisioning/kcl/gitea.k`\\n- **API Client**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/api_client.nu`\\n- **Workspace Git**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/workspace_git.nu`\\n- **Locking**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/locking.nu` --- **Version:** 1.0.0\\n**Maintained By:** Provisioning Team\\n**Last Updated:** 2025-10-06","breadcrumbs":"Gitea Integration Guide » Architecture","id":"1671","title":"Architecture"},"1672":{"body":"","breadcrumbs":"Service Mesh Ingress Guide » Service Mesh & Ingress Guide","id":"1672","title":"Service Mesh & Ingress Guide"},"1673":{"body":"This guide helps you choose between different service mesh and ingress controller options for your Kubernetes deployments.","breadcrumbs":"Service Mesh Ingress Guide » Comparison","id":"1673","title":"Comparison"},"1674":{"body":"Service Mesh Handles East-West traffic (service-to-service communication): Automatic mTLS encryption between services Traffic management and routing Observability and monitoring Service discovery Fault tolerance and resilience Ingress Controller Handles North-South traffic (external to internal): Route external traffic into the cluster TLS/HTTPS termination Virtual hosts and path routing Load balancing Can work with or without a service mesh","breadcrumbs":"Service Mesh Ingress Guide » Understanding the Difference","id":"1674","title":"Understanding the Difference"},"1675":{"body":"Istio Version : 1.24.0 Best for : Full-featured service mesh deployments with comprehensive observability Key Features : ✅ Comprehensive feature set ✅ Built-in Istio Gateway ingress controller ✅ Advanced traffic management ✅ Excellent observability (Kiali, Grafana, Jaeger) ✅ Virtual services, destination rules, traffic policies ✅ Mutual TLS (mTLS) with automatic certificate rotation ✅ Canary deployments and traffic mirroring Resource Requirements : CPU: 500m (Pilot) + 100m per gateway Memory: 2048Mi (Pilot) + 128Mi per gateway Relatively high overhead Pros : Industry-standard solution with large community Rich feature set for complex requirements Built-in ingress gateway (don\'t need external ingress) Strong observability capabilities Enterprise support available Cons : Significant resource overhead Complex configuration learning curve Can be overkill for simple applications Sidecar injection required for all services Use when : You need comprehensive traffic management Complex microservice patterns 
(canary deployments, traffic mirroring) Enterprise requirements You already understand service meshes Your team has Istio expertise Installation : provisioning taskserv create istio\\n```plaintext --- #### Linkerd **Version**: 2.16.0 **Best for**: Lightweight, high-performance service mesh with minimal complexity **Key Features**: - ✅ Ultra-lightweight (minimal resource footprint)\\n- ✅ Simple configuration\\n- ✅ Automatic mTLS with certificate rotation\\n- ✅ Fast sidecar startup (built in Rust)\\n- ✅ Live traffic visualization\\n- ✅ Service topology and dependency discovery\\n- ✅ Golden metrics out of the box (latency, success rate, throughput) **Resource Requirements**: - CPU proxy: 100m request, 1000m limit\\n- Memory proxy: 20Mi request, 250Mi limit\\n- Very lightweight compared to Istio **Pros**: - Minimal resource overhead\\n- Simple, intuitive configuration\\n- Fast startup and deployment\\n- Built in Rust for performance\\n- Excellent golden metrics\\n- Good for resource-constrained environments\\n- Can run alongside Istio **Cons**: - Fewer advanced features than Istio\\n- Requires external ingress controller\\n- Smaller ecosystem and fewer integrations\\n- Less feature-rich traffic management\\n- Requires cert-manager for mTLS **Use when**: - You want simplicity and minimal overhead\\n- Running on resource-constrained clusters\\n- You prefer straightforward configuration\\n- You don\'t need advanced traffic management\\n- You\'re using Kubernetes 1.21+ **Installation**: ```bash\\n# Linkerd requires cert-manager\\nprovisioning taskserv create cert-manager\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create nginx-ingress # Or traefik/contour\\n```plaintext --- #### Cilium **Version**: See existing Cilium taskserv **Best for**: CNI-based networking with integrated service mesh **Key Features**: - ✅ CNI and service mesh in one solution\\n- ✅ eBPF-based for high performance\\n- ✅ Network policy enforcement\\n- ✅ Service mesh mode (optional)\\n- ✅ Hubble for observability\\n- ✅ Cluster mesh for multi-cluster **Pros**: - Replaces CNI plugin entirely\\n- High-performance eBPF kernel networking\\n- Can serve as both CNI and service mesh\\n- No sidecar needed (uses eBPF)\\n- Network policy support **Cons**: - Requires Linux kernel with eBPF support\\n- Service mesh mode is secondary feature\\n- More complex than Linkerd\\n- Not as mature in service mesh role **Use when**: - You need both CNI and service mesh\\n- You\'re on modern Linux kernels with eBPF\\n- You want kernel-level networking --- ### Ingress Controller Options #### Nginx Ingress **Version**: 1.12.0 **Best for**: Most Kubernetes deployments - proven, reliable, widely supported **Key Features**: - ✅ Battle-tested and production-proven\\n- ✅ Most popular ingress controller\\n- ✅ Extensive documentation and community\\n- ✅ Rich configuration options\\n- ✅ SSL/TLS termination\\n- ✅ URL rewriting and routing\\n- ✅ Rate limiting and DDoS protection **Pros**: - Proven stability in production\\n- Widest community and ecosystem\\n- Extensive documentation\\n- Multiple commercial support options\\n- Works with any service mesh\\n- Moderate resource footprint **Cons**: - Configuration can be verbose\\n- Limited middleware ecosystem (compared to Traefik)\\n- No automatic TLS with Let\'s Encrypt\\n- Configuration via annotations **Use when**: - You want proven stability\\n- Wide community support is important\\n- You need traditional ingress controller\\n- You\'re building production systems\\n- You want abundant 
documentation **Installation**: ```bash\\nprovisioning taskserv create nginx-ingress\\n```plaintext **With Linkerd**: ```bash\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create nginx-ingress\\n```plaintext --- #### Traefik **Version**: 3.3.0 **Best for**: Modern cloud-native applications with dynamic service discovery **Key Features**: - ✅ Automatic service discovery\\n- ✅ Native Let\'s Encrypt support\\n- ✅ Middleware system for advanced routing\\n- ✅ Built-in dashboard and metrics\\n- ✅ API-driven configuration\\n- ✅ Dynamic configuration updates\\n- ✅ Support for multiple protocols (HTTP, TCP, gRPC) **Pros**: - Modern, cloud-native design\\n- Automatic TLS with Let\'s Encrypt\\n- Middleware ecosystem for extensibility\\n- Built-in dashboard for monitoring\\n- Dynamic configuration without restart\\n- API-driven approach\\n- Growing community **Cons**: - Different configuration paradigm (IngressRoute CRD)\\n- Smaller community than Nginx\\n- Learning curve for traditional ops\\n- Less mature than Nginx **Use when**: - You want modern cloud-native features\\n- Automatic TLS is important\\n- You like middleware-based routing\\n- You want dynamic configuration\\n- You\'re building microservices platforms **Installation**: ```bash\\nprovisioning taskserv create traefik\\n```plaintext **With Linkerd**: ```bash\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create traefik\\n```plaintext --- #### Contour **Version**: 1.31.0 **Best for**: Envoy-based ingress with simple CRD configuration **Key Features**: - ✅ Envoy proxy backend (same as Istio)\\n- ✅ Simple CRD-based configuration\\n- ✅ HTTPProxy CRD for advanced routing\\n- ✅ Service delegation and composition\\n- ✅ External authorization\\n- ✅ Rate limiting support **Pros**: - Uses same Envoy proxy as Istio\\n- Simple but powerful configuration\\n- Good for multi-tenant clusters\\n- CRD-based (declarative)\\n- Good documentation **Cons**: - Smaller community than Nginx/Traefik\\n- Fewer integrations and plugins\\n- Less feature-rich than Traefik\\n- Fewer real-world examples **Use when**: - You want Envoy proxy for consistency with Istio\\n- You prefer simple configuration\\n- You like CRD-based approach\\n- You need multi-tenant support **Installation**: ```bash\\nprovisioning taskserv create contour\\n```plaintext --- #### HAProxy Ingress **Version**: 0.15.0 **Best for**: High-performance environments requiring advanced load balancing **Key Features**: - ✅ HAProxy backend for performance\\n- ✅ Advanced load balancing algorithms\\n- ✅ High throughput\\n- ✅ Flexible configuration\\n- ✅ Proven performance **Pros**: - Excellent performance\\n- Advanced load balancing options\\n- Battle-tested HAProxy backend\\n- Good for high-traffic scenarios **Cons**: - Less Kubernetes-native than others\\n- Smaller community\\n- Configuration complexity\\n- Fewer modern features **Use when**: - Performance is critical\\n- High traffic is expected\\n- You need advanced load balancing --- ## Recommended Combinations ### 1. Linkerd + Nginx Ingress (Recommended for most users) **Why**: Lightweight mesh + proven ingress = great balance ```bash\\nprovisioning taskserv create cert-manager\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create nginx-ingress\\n```plaintext **Pros**: - Minimal overhead\\n- Simple to manage\\n- Proven stability\\n- Good observability **Cons**: - Less advanced features than Istio --- ### 2. 
Istio (Standalone) **Why**: All-in-one service mesh with built-in gateway ```bash\\nprovisioning taskserv create istio\\n```plaintext **Pros**: - Unified traffic management\\n- Powerful observability\\n- No external ingress needed\\n- Rich features **Cons**: - Higher resource usage\\n- More complex --- ### 3. Linkerd + Traefik **Why**: Lightweight mesh + modern ingress ```bash\\nprovisioning taskserv create cert-manager\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create traefik\\n```plaintext **Pros**: - Minimal overhead\\n- Modern features\\n- Automatic TLS --- ### 4. No Mesh + Nginx Ingress (Simple deployments) **Why**: Just get traffic in without service mesh ```bash\\nprovisioning taskserv create nginx-ingress\\n```plaintext **Pros**: - Simplest setup\\n- Minimal overhead\\n- Proven stability --- ## Decision Matrix | Requirement | Istio | Linkerd | Cilium | Nginx | Traefik | Contour | HAProxy |\\n|-----------|-------|---------|--------|-------|---------|---------|---------|\\n| Lightweight | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\\n| Simple Config | ❌ | ✅ | ⚠️ | ⚠️ | ✅ | ✅ | ❌ |\\n| Full Features | ✅ | ⚠️ | ✅ | ⚠️ | ✅ | ⚠️ | ✅ |\\n| Auto TLS | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |\\n| Service Mesh | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |\\n| Performance | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\\n| Community | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️ | ## Migration Paths ### From Istio to Linkerd 1. Install Linkerd alongside Istio\\n2. Gradually migrate services (add Linkerd annotations)\\n3. Verify Linkerd handles traffic correctly\\n4. Install external ingress controller (Nginx/Traefik)\\n5. Update Istio Virtual Services to use new ingress\\n6. Remove Istio once migration complete ### Between Ingress Controllers 1. Install new ingress controller\\n2. Create duplicate Ingress resources pointing to new controller\\n3. Test with new ingress (use IngressClassName)\\n4. Update DNS/load balancer to point to new ingress\\n5. Drain connections from old ingress\\n6. Remove old ingress controller --- ## Examples Complete examples of how to configure service meshes and ingress controllers in your workspace. ### Example 1: Linkerd + Nginx Ingress Deployment This is the recommended configuration for most deployments - lightweight and proven. 
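Before Step 1, an optional pre-flight sketch can confirm the cluster is reachable and ready for Linkerd (this assumes `kubectl` and the `linkerd` CLI are already installed locally and the current context points at the target cluster): ```bash\n# Optional pre-flight checks before installing cert-manager, Linkerd and Nginx\nkubectl get nodes\nlinkerd check --pre # pre-install checks\n```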
#### Step 1: Create Taskserv Configurations **File**: `workspace/infra/my-cluster/taskservs/cert-manager.k` ```kcl\\nimport provisioning.extensions.taskservs.infrastructure.cert_manager as cm # Cert-manager is required for Linkerd\'s mTLS certificates\\n_taskserv = cm.CertManager { version = \\"v1.15.0\\" namespace = \\"cert-manager\\"\\n}\\n```plaintext **File**: `workspace/infra/my-cluster/taskservs/linkerd.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.linkerd as linkerd # Lightweight service mesh with minimal overhead\\n_taskserv = linkerd.Linkerd { version = \\"2.16.0\\" namespace = \\"linkerd\\" # Enable observability ha_mode = False # Use True for production HA viz_enabled = True prometheus = True grafana = True # Use cert-manager for mTLS certificates cert_manager = True trust_domain = \\"cluster.local\\" # Resource configuration (very lightweight) resources = { proxy_cpu_request = \\"100m\\" proxy_cpu_limit = \\"1000m\\" proxy_memory_request = \\"20Mi\\" proxy_memory_limit = \\"250Mi\\" }\\n}\\n```plaintext **File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.nginx_ingress as nginx # Battle-tested ingress controller\\n_taskserv = nginx.NginxIngress { version = \\"1.12.0\\" namespace = \\"ingress-nginx\\" # Deployment configuration deployment_type = \\"Deployment\\" # Or \\"DaemonSet\\" for node-local ingress replicas = 2 # Enable metrics for observability prometheus_metrics = True # Resource allocation resources = { cpu_request = \\"100m\\" cpu_limit = \\"1000m\\" memory_request = \\"90Mi\\" memory_limit = \\"500Mi\\" }\\n}\\n```plaintext #### Step 2: Deploy Service Mesh Components ```bash\\n# Install cert-manager (prerequisite for Linkerd)\\nprovisioning taskserv create cert-manager # Install Linkerd service mesh\\nprovisioning taskserv create linkerd # Install Nginx ingress controller\\nprovisioning taskserv create nginx-ingress # Verify installation\\nlinkerd check\\nkubectl get deploy -n ingress-nginx\\n```plaintext #### Step 3: Configure Application Deployment **File**: `workspace/infra/my-cluster/clusters/web-api.k` ```kcl\\nimport provisioning.kcl.k8s_deploy as k8s\\nimport provisioning.extensions.taskservs.networking.nginx_ingress as nginx # Define the web API service with Linkerd service mesh and Nginx ingress\\nservice = k8s.K8sDeploy { # Basic information name = \\"web-api\\" namespace = \\"production\\" create_ns = True # Service mesh configuration - use Linkerd service_mesh = \\"linkerd\\" service_mesh_ns = \\"linkerd\\" service_mesh_config = { mtls_enabled = True tracing_enabled = False } # Ingress configuration - use Nginx ingress_controller = \\"nginx\\" ingress_ns = \\"ingress-nginx\\" ingress_config = { tls_enabled = True default_backend = \\"web-api:8080\\" } # Deployment spec spec = { replicas = 3 containers = [ { name = \\"api\\" image = \\"myregistry.azurecr.io/web-api:v1.0.0\\" imagePull = \\"Always\\" ports = [ { name = \\"http\\" typ = \\"TCP\\" container = 8080 } ] } ] } # Kubernetes service service = { name = \\"web-api\\" typ = \\"ClusterIP\\" ports = [ { name = \\"http\\" typ = \\"TCP\\" target = 8080 } ] }\\n}\\n```plaintext #### Step 4: Create Ingress Resource **File**: `workspace/infra/my-cluster/ingress/web-api-ingress.yaml` ```yaml\\napiVersion: networking.k8s.io/v1\\nkind: Ingress\\nmetadata: name: web-api namespace: production annotations: cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: /\\nspec: 
ingressClassName: nginx tls: - hosts: - api.example.com secretName: web-api-tls rules: - host: api.example.com http: paths: - path: / pathType: Prefix backend: service: name: web-api port: number: 8080\\n```plaintext --- ### Example 2: Istio (Standalone) Deployment Complete service mesh with built-in ingress gateway. #### Step 1: Install Istio **File**: `workspace/infra/my-cluster/taskservs/istio.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.istio as istio # Full-featured service mesh\\n_taskserv = istio.Istio { version = \\"1.24.0\\" profile = \\"default\\" # Options: default, demo, minimal, remote namespace = \\"istio-system\\" # Core features mtls_enabled = True mtls_mode = \\"PERMISSIVE\\" # Start with PERMISSIVE, switch to STRICT when ready # Traffic management ingress_gateway = True egress_gateway = False # Observability tracing = { enabled = True provider = \\"jaeger\\" sampling_rate = 0.1 # Sample 10% for production } prometheus = True grafana = True kiali = True # Resource configuration resources = { pilot_cpu = \\"500m\\" pilot_memory = \\"2048Mi\\" gateway_cpu = \\"100m\\" gateway_memory = \\"128Mi\\" }\\n}\\n```plaintext #### Step 2: Deploy Istio ```bash\\n# Install Istio\\nprovisioning taskserv create istio # Verify installation\\nistioctl verify-install\\n```plaintext #### Step 3: Configure Application with Istio **File**: `workspace/infra/my-cluster/clusters/api-service.k` ```kcl\\nimport provisioning.kcl.k8s_deploy as k8s service = k8s.K8sDeploy { name = \\"api-service\\" namespace = \\"production\\" create_ns = True # Use Istio for both service mesh AND ingress service_mesh = \\"istio\\" service_mesh_ns = \\"istio-system\\" ingress_controller = \\"istio-gateway\\" # Istio\'s built-in gateway spec = { replicas = 3 containers = [ { name = \\"api\\" image = \\"myregistry.azurecr.io/api:v1.0.0\\" ports = [ { name = \\"http\\", typ = \\"TCP\\", container = 8080 } ] } ] } service = { name = \\"api-service\\" typ = \\"ClusterIP\\" ports = [ { name = \\"http\\", typ = \\"TCP\\", target = 8080 } ] } # Istio-specific proxy configuration prxyGatewayServers = [ { port = { number = 80, protocol = \\"HTTP\\", name = \\"http\\" } hosts = [\\"api.example.com\\"] }, { port = { number = 443, protocol = \\"HTTPS\\", name = \\"https\\" } hosts = [\\"api.example.com\\"] tls = { mode = \\"SIMPLE\\" credentialName = \\"api-tls-cert\\" } } ] # Virtual service routing configuration prxyVirtualService = { hosts = [\\"api.example.com\\"] gateways = [\\"api-gateway\\"] matches = [ { typ = \\"http\\" location = [ { port = 80 } ] route_destination = [ { port_number = 8080, host = \\"api-service\\" } ] } ] }\\n}\\n```plaintext --- ### Example 3: Linkerd + Traefik (Modern Cloud-Native) Lightweight mesh with modern ingress controller and automatic TLS. 
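Automatic TLS in this combination relies on Traefik\'s Let\'s Encrypt integration (enabled in Step 1 and referenced by the certResolver in Step 3), so the hostname must resolve publicly to the Traefik service before certificates can be issued. A minimal verification sketch, using the same `api.example.com` placeholder as the rest of this guide: ```bash\n# Confirm the Traefik service has an external address and that DNS points at it\nkubectl get svc -n traefik\ndig +short api.example.com\n```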
#### Step 1: Create Configurations **File**: `workspace/infra/my-cluster/taskservs/linkerd.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.linkerd as linkerd _taskserv = linkerd.Linkerd { version = \\"2.16.0\\" namespace = \\"linkerd\\" viz_enabled = True prometheus = True\\n}\\n```plaintext **File**: `workspace/infra/my-cluster/taskservs/traefik.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.traefik as traefik # Modern ingress with middleware and auto-TLS\\n_taskserv = traefik.Traefik { version = \\"3.3.0\\" namespace = \\"traefik\\" replicas = 2 dashboard = True metrics = True access_logs = True # Enable Let\'s Encrypt for automatic TLS lets_encrypt = True lets_encrypt_email = \\"admin@example.com\\" resources = { cpu_request = \\"100m\\" cpu_limit = \\"1000m\\" memory_request = \\"128Mi\\" memory_limit = \\"512Mi\\" }\\n}\\n```plaintext #### Step 2: Deploy ```bash\\nprovisioning taskserv create cert-manager\\nprovisioning taskserv create linkerd\\nprovisioning taskserv create traefik\\n```plaintext #### Step 3: Create Traefik IngressRoute **File**: `workspace/infra/my-cluster/ingress/api-route.yaml` ```yaml\\napiVersion: traefik.io/v1alpha1\\nkind: IngressRoute\\nmetadata: name: api namespace: production\\nspec: entryPoints: - websecure routes: - match: Host(`api.example.com`) kind: Rule services: - name: api-service port: 8080 tls: certResolver: letsencrypt domains: - main: api.example.com\\n```plaintext --- ### Example 4: Minimal Setup (Just Nginx, No Service Mesh) For simple deployments that don\'t need service mesh. #### Step 1: Install Nginx **File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.k` ```kcl\\nimport provisioning.extensions.taskservs.networking.nginx_ingress as nginx _taskserv = nginx.NginxIngress { version = \\"1.12.0\\" replicas = 2 prometheus_metrics = True\\n}\\n```plaintext #### Step 2: Deploy ```bash\\nprovisioning taskserv create nginx-ingress\\n```plaintext #### Step 3: Application Configuration **File**: `workspace/infra/my-cluster/clusters/simple-app.k` ```kcl\\nimport provisioning.kcl.k8s_deploy as k8s service = k8s.K8sDeploy { name = \\"simple-app\\" namespace = \\"default\\" # No service mesh - just ingress ingress_controller = \\"nginx\\" ingress_ns = \\"ingress-nginx\\" spec = { replicas = 2 containers = [ { name = \\"app\\" image = \\"nginx:latest\\" ports = [{ name = \\"http\\", typ = \\"TCP\\", container = 80 }] } ] } service = { name = \\"simple-app\\" typ = \\"ClusterIP\\" ports = [{ name = \\"http\\", typ = \\"TCP\\", target = 80 }] }\\n}\\n```plaintext #### Step 4: Create Ingress **File**: `workspace/infra/my-cluster/ingress/simple-app-ingress.yaml` ```yaml\\napiVersion: networking.k8s.io/v1\\nkind: Ingress\\nmetadata: name: simple-app namespace: default\\nspec: ingressClassName: nginx rules: - host: app.example.com http: paths: - path: / pathType: Prefix backend: service: name: simple-app port: number: 80\\n```plaintext --- ## Enable Sidecar Injection for Services ### For Linkerd ```bash\\n# Label namespace for automatic sidecar injection\\nkubectl annotate namespace production linkerd.io/inject=enabled # Or add annotation to specific deployment\\nkubectl annotate pod my-pod linkerd.io/inject=enabled\\n```plaintext ### For Istio ```bash\\n# Label namespace for automatic sidecar injection\\nkubectl label namespace production istio-injection=enabled # Verify injection\\nkubectl describe pod -n production | grep istio-proxy\\n```plaintext --- ## Monitoring and Observability ### Linkerd Dashboard ```bash\\n# 
Open Linkerd Viz dashboard\\nlinkerd viz dashboard # View service topology\\nlinkerd viz stat ns\\nlinkerd viz tap -n production\\n```plaintext ### Istio Dashboards ```bash\\n# Kiali (service mesh visualization)\\nkubectl port-forward -n istio-system svc/kiali 20000:20000\\n# http://localhost:20000 # Grafana (metrics)\\nkubectl port-forward -n istio-system svc/grafana 3000:3000\\n# http://localhost:3000 (default: admin/admin) # Jaeger (distributed tracing)\\nkubectl port-forward -n istio-system svc/jaeger-query 16686:16686\\n# http://localhost:16686\\n```plaintext ### Traefik Dashboard ```bash\\n# Forward Traefik dashboard\\nkubectl port-forward -n traefik svc/traefik 8080:8080\\n# http://localhost:8080/dashboard/\\n```plaintext --- ## Quick Reference ### Installation Commands #### Service Mesh - Istio ```bash\\n# Install Istio (includes built-in ingress gateway)\\nprovisioning taskserv create istio # Verify installation\\nistioctl verify-install # Enable sidecar injection on namespace\\nkubectl label namespace default istio-injection=enabled # View Kiali dashboard\\nkubectl port-forward -n istio-system svc/kiali 20000:20000\\n# Open: http://localhost:20000\\n```plaintext #### Service Mesh - Linkerd ```bash\\n# Install cert-manager first (Linkerd requirement)\\nprovisioning taskserv create cert-manager # Install Linkerd\\nprovisioning taskserv create linkerd # Verify installation\\nlinkerd check # Enable automatic sidecar injection\\nkubectl annotate namespace default linkerd.io/inject=enabled # View live dashboard\\nlinkerd viz dashboard\\n```plaintext #### Ingress Controllers ```bash\\n# Install Nginx Ingress (most popular)\\nprovisioning taskserv create nginx-ingress # Install Traefik (modern cloud-native)\\nprovisioning taskserv create traefik # Install Contour (Envoy-based)\\nprovisioning taskserv create contour # Install HAProxy Ingress (high-performance)\\nprovisioning taskserv create haproxy-ingress\\n```plaintext ### Common Installation Combinations #### Option 1: Linkerd + Nginx Ingress (Recommended) **Lightweight mesh + proven ingress** ```bash\\n# Step 1: Install cert-manager\\nprovisioning taskserv create cert-manager # Step 2: Install Linkerd\\nprovisioning taskserv create linkerd # Step 3: Install Nginx Ingress\\nprovisioning taskserv create nginx-ingress # Step 4: Verify installation\\nlinkerd check\\nkubectl get deploy -n ingress-nginx # Step 5: Create sample application with Linkerd\\nkubectl annotate namespace default linkerd.io/inject=enabled\\nkubectl apply -f my-app.yaml\\n```plaintext #### Option 2: Istio (Standalone) **Full-featured service mesh with built-in gateway** ```bash\\n# Install Istio\\nprovisioning taskserv create istio # Verify\\nistioctl verify-install # Enable sidecar injection\\nkubectl label namespace default istio-injection=enabled # Deploy applications\\nkubectl apply -f my-app.yaml\\n```plaintext #### Option 3: Linkerd + Traefik **Lightweight mesh + modern ingress with auto TLS** ```bash\\n# Install prerequisites\\nprovisioning taskserv create cert-manager # Install service mesh\\nprovisioning taskserv create linkerd # Install modern ingress with Let\'s Encrypt\\nprovisioning taskserv create traefik # Enable sidecar injection\\nkubectl annotate namespace default linkerd.io/inject=enabled\\n```plaintext #### Option 4: Just Nginx Ingress (No Mesh) **Simple deployments without service mesh** ```bash\\n# Install ingress controller\\nprovisioning taskserv create nginx-ingress # Deploy applications\\nkubectl apply -f ingress.yaml\\n```plaintext ### 
Verification Commands #### Check Linkerd ```bash\\n# Full system check\\nlinkerd check # Specific component checks\\nlinkerd check --pre # Pre-install checks\\nlinkerd check -n linkerd # Linkerd namespace\\nlinkerd check -n default # Custom namespace # View version\\nlinkerd version --client\\nlinkerd version --server\\n```plaintext #### Check Istio ```bash\\n# Full system analysis\\nistioctl analyze # By namespace\\nistioctl analyze -n default # Verify configuration\\nistioctl verify-install # Check version\\nistioctl version\\n```plaintext #### Check Ingress Controllers ```bash\\n# List ingress resources\\nkubectl get ingress -A # Get ingress details\\nkubectl describe ingress -n default # Nginx specific\\nkubectl get deploy -n ingress-nginx\\nkubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx # Traefik specific\\nkubectl get deploy -n traefik\\nkubectl logs -n traefik deployment/traefik\\n```plaintext ### Troubleshooting #### Service Mesh Issues ```bash\\n# Linkerd - Check proxy status\\nlinkerd check -n # Linkerd - View service topology\\nlinkerd tap -n deployment/ # Istio - Check sidecar injection\\nkubectl describe pod -n # Look for istio-proxy container # Istio - View traffic policies\\nistioctl analyze\\n```plaintext #### Ingress Controller Issues ```bash\\n# Check ingress controller logs\\nkubectl logs -n ingress-nginx deployment/ingress-nginx-controller\\nkubectl logs -n traefik deployment/traefik # Describe ingress resource\\nkubectl describe ingress -n # Check ingress controller service\\nkubectl get svc -n ingress-nginx\\nkubectl get svc -n traefik\\n```plaintext ### Uninstallation #### Remove Linkerd ```bash\\n# Remove annotations from namespaces\\nkubectl annotate namespace linkerd.io/inject- --all # Uninstall Linkerd\\nlinkerd uninstall | kubectl delete -f - # Remove Linkerd namespace\\nkubectl delete namespace linkerd\\n```plaintext #### Remove Istio ```bash\\n# Remove labels from namespaces\\nkubectl label namespace istio-injection- --all # Uninstall Istio\\nistioctl uninstall --purge # Remove Istio namespace\\nkubectl delete namespace istio-system\\n```plaintext #### Remove Ingress Controllers ```bash\\n# Nginx\\nhelm uninstall ingress-nginx -n ingress-nginx\\nkubectl delete namespace ingress-nginx # Traefik\\nhelm uninstall traefik -n traefik\\nkubectl delete namespace traefik\\n```plaintext ### Performance Tuning #### Linkerd Resource Limits ```bash\\n# Adjust proxy resource limits in linkerd.k\\n_taskserv = linkerd.Linkerd { resources: { proxy_cpu_limit = \\"2000m\\" # Increase if needed proxy_memory_limit = \\"512Mi\\" # Increase if needed }\\n}\\n```plaintext #### Istio Profile Selection ```bash\\n# Different resource profiles available\\nprofile = \\"default\\" # Full features (default)\\nprofile = \\"demo\\" # Demo mode (more resources)\\nprofile = \\"minimal\\" # Minimal (lower resources)\\nprofile = \\"remote\\" # Control plane only (advanced)\\n```plaintext --- ## Complete Workspace Directory Structure After implementing these examples, your workspace should look like: ```plaintext\\nworkspace/infra/my-cluster/\\n├── taskservs/\\n│ ├── cert-manager.k # For Linkerd mTLS\\n│ ├── linkerd.k # Service mesh option\\n│ ├── istio.k # OR Istio option\\n│ ├── nginx-ingress.k # Ingress controller\\n│ └── traefik.k # Alternative ingress\\n├── clusters/\\n│ ├── web-api.k # Application with Linkerd + Nginx\\n│ ├── api-service.k # Application with Istio\\n│ └── simple-app.k # App without service mesh\\n├── ingress/\\n│ ├── web-api-ingress.yaml # Nginx 
Ingress resource\\n│ ├── api-route.yaml # Traefik IngressRoute\\n│ └── simple-app-ingress.yaml # Simple Ingress\\n└── config.toml # Infrastructure-specific config\\n```plaintext --- ## Next Steps 1. **Choose your deployment model** (Linkerd+Nginx, Istio, or plain Nginx)\\n2. **Create taskserv KCL files** in `workspace/infra//taskservs/`\\n3. **Install components** using `provisioning taskserv create`\\n4. **Create application deployments** with appropriate mesh/ingress configuration\\n5. **Monitor and observe** using the appropriate dashboard --- ## Additional Resources - **Linkerd Documentation**: \\n- **Istio Documentation**: \\n- **Nginx Ingress**: \\n- **Traefik Documentation**: \\n- **Contour Documentation**: \\n- **Cilium Documentation**: ","breadcrumbs":"Service Mesh Ingress Guide » Service Mesh Options","id":"1675","title":"Service Mesh Options"},"1676":{"body":"Version : 1.0.0 Date : 2025-10-06 Audience : Users and Developers","breadcrumbs":"OCI Registry Guide » OCI Registry User Guide","id":"1676","title":"OCI Registry User Guide"},"1677":{"body":"Overview Quick Start OCI Commands Reference Dependency Management Extension Development Registry Setup Troubleshooting","breadcrumbs":"OCI Registry Guide » Table of Contents","id":"1677","title":"Table of Contents"},"1678":{"body":"The OCI registry integration enables distribution and management of provisioning extensions as OCI artifacts. This provides: Standard Distribution : Use industry-standard OCI registries Version Management : Proper semantic versioning for all extensions Dependency Resolution : Automatic dependency management Caching : Efficient caching to reduce downloads Security : TLS, authentication, and vulnerability scanning support","breadcrumbs":"OCI Registry Guide » Overview","id":"1678","title":"Overview"},"1679":{"body":"OCI (Open Container Initiative) artifacts are packaged files distributed through container registries. Unlike Docker images which contain applications, OCI artifacts can contain any type of content - in our case, provisioning extensions (KCL schemas, Nushell scripts, templates, etc.).","breadcrumbs":"OCI Registry Guide » What are OCI Artifacts?","id":"1679","title":"What are OCI Artifacts?"},"168":{"body":"Check that Kubernetes is running: # List installed task services\\nprovisioning taskserv list --infra my-infra # Check Kubernetes status\\nprovisioning server ssh dev-server-01\\nkubectl get nodes # On the server\\nexit # Or remotely\\nprovisioning server exec dev-server-01 -- kubectl get nodes","breadcrumbs":"First Deployment » Step 8: Verify Installation","id":"168","title":"Step 8: Verify Installation"},"1680":{"body":"","breadcrumbs":"OCI Registry Guide » Quick Start","id":"1680","title":"Quick Start"},"1681":{"body":"Install one of the following OCI tools: # ORAS (recommended)\\nbrew install oras # Crane (Google\'s tool)\\ngo install github.com/google/go-containerregistry/cmd/crane@latest # Skopeo (RedHat\'s tool)\\nbrew install skopeo\\n```plaintext ### 1. Start Local OCI Registry (Development) ```bash\\n# Start lightweight OCI registry (Zot)\\nprovisioning oci-registry start # Verify registry is running\\ncurl http://localhost:5000/v2/_catalog\\n```plaintext ### 2. Pull an Extension ```bash\\n# Pull Kubernetes extension from registry\\nprovisioning oci pull kubernetes:1.28.0 # Pull with specific registry\\nprovisioning oci pull kubernetes:1.28.0 \\\\ --registry harbor.company.com \\\\ --namespace provisioning-extensions\\n```plaintext ### 3. 
List Available Extensions ```bash\\n# List all extensions\\nprovisioning oci list # Search for specific extension\\nprovisioning oci search kubernetes # Show available versions\\nprovisioning oci tags kubernetes\\n```plaintext ### 4. Configure Workspace to Use OCI Edit `workspace/config/provisioning.yaml`: ```yaml\\ndependencies: extensions: source_type: \\"oci\\" oci: registry: \\"localhost:5000\\" namespace: \\"provisioning-extensions\\" tls_enabled: false modules: taskservs: - \\"oci://localhost:5000/provisioning-extensions/kubernetes:1.28.0\\" - \\"oci://localhost:5000/provisioning-extensions/containerd:1.7.0\\"\\n```plaintext ### 5. Resolve Dependencies ```bash\\n# Resolve and install all dependencies\\nprovisioning dep resolve # Check what will be installed\\nprovisioning dep resolve --dry-run # Show dependency tree\\nprovisioning dep tree kubernetes\\n```plaintext --- ## OCI Commands Reference ### Pull Extension **Download extension from OCI registry** ```bash\\nprovisioning oci pull : [OPTIONS] # Examples:\\nprovisioning oci pull kubernetes:1.28.0\\nprovisioning oci pull redis:7.0.0 --registry harbor.company.com\\nprovisioning oci pull postgres:15.0 --insecure # Skip TLS verification\\n```plaintext **Options**: - `--registry `: Override registry (default: from config)\\n- `--namespace `: Override namespace (default: provisioning-extensions)\\n- `--destination `: Local installation path\\n- `--insecure`: Skip TLS certificate verification --- ### Push Extension **Publish extension to OCI registry** ```bash\\nprovisioning oci push [OPTIONS] # Examples:\\nprovisioning oci push ./extensions/taskservs/redis redis 1.0.0\\nprovisioning oci push ./my-provider aws 2.1.0 --registry localhost:5000\\n```plaintext **Options**: - `--registry `: Target registry\\n- `--namespace `: Target namespace\\n- `--insecure`: Skip TLS verification **Prerequisites**: - Extension must have valid `manifest.yaml`\\n- Must be logged in to registry (see `oci login`) --- ### List Extensions **Show available extensions in registry** ```bash\\nprovisioning oci list [OPTIONS] # Examples:\\nprovisioning oci list\\nprovisioning oci list --namespace provisioning-platform\\nprovisioning oci list --registry harbor.company.com\\n```plaintext **Output**: ```plaintext\\n┬───────────────┬──────────────────┬─────────────────────────┬─────────────────────────────────────────────┐\\n│ name │ registry │ namespace │ reference │\\n├───────────────┼──────────────────┼─────────────────────────┼─────────────────────────────────────────────┤\\n│ kubernetes │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... │\\n│ containerd │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... │\\n│ cilium │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... 
│\\n└───────────────┴──────────────────┴─────────────────────────┴─────────────────────────────────────────────┘\\n```plaintext --- ### Search Extensions **Search for extensions matching query** ```bash\\nprovisioning oci search [OPTIONS] # Examples:\\nprovisioning oci search kube\\nprovisioning oci search postgres\\nprovisioning oci search \\"container-*\\"\\n```plaintext --- ### Show Tags (Versions) **Display all available versions of an extension** ```bash\\nprovisioning oci tags [OPTIONS] # Examples:\\nprovisioning oci tags kubernetes\\nprovisioning oci tags redis --registry harbor.company.com\\n```plaintext **Output**: ```plaintext\\n┬────────────┬─────────┬──────────────────────────────────────────────────────┐\\n│ artifact │ version │ reference │\\n├────────────┼─────────┼──────────────────────────────────────────────────────┤\\n│ kubernetes │ 1.29.0 │ localhost:5000/provisioning-extensions/kubernetes... │\\n│ kubernetes │ 1.28.0 │ localhost:5000/provisioning-extensions/kubernetes... │\\n│ kubernetes │ 1.27.0 │ localhost:5000/provisioning-extensions/kubernetes... │\\n└────────────┴─────────┴──────────────────────────────────────────────────────┘\\n```plaintext --- ### Inspect Extension **Show detailed manifest and metadata** ```bash\\nprovisioning oci inspect : [OPTIONS] # Examples:\\nprovisioning oci inspect kubernetes:1.28.0\\nprovisioning oci inspect redis:7.0.0 --format json\\n```plaintext **Output**: ```yaml\\nname: kubernetes\\ntype: taskserv\\nversion: 1.28.0\\ndescription: Kubernetes container orchestration platform\\nauthor: Provisioning Team\\nlicense: MIT\\ndependencies: containerd: \\">=1.7.0\\" etcd: \\">=3.5.0\\"\\nplatforms: - linux/amd64 - linux/arm64\\n```plaintext --- ### Login to Registry **Authenticate with OCI registry** ```bash\\nprovisioning oci login [OPTIONS] # Examples:\\nprovisioning oci login localhost:5000\\nprovisioning oci login harbor.company.com --username admin\\nprovisioning oci login registry.io --password-stdin < token.txt\\nprovisioning oci login registry.io --token-file ~/.provisioning/tokens/registry\\n```plaintext **Options**: - `--username `: Username (default: `_token`)\\n- `--password-stdin`: Read password from stdin\\n- `--token-file `: Read token from file **Note**: Credentials are stored in Docker config (`~/.docker/config.json`) --- ### Logout from Registry **Remove stored credentials** ```bash\\nprovisioning oci logout # Example:\\nprovisioning oci logout harbor.company.com\\n```plaintext --- ### Delete Extension **Remove extension from registry** ```bash\\nprovisioning oci delete : [OPTIONS] # Examples:\\nprovisioning oci delete kubernetes:1.27.0\\nprovisioning oci delete redis:6.0.0 --force # Skip confirmation\\n```plaintext **Options**: - `--force`: Skip confirmation prompt\\n- `--registry `: Target registry\\n- `--namespace `: Target namespace **Warning**: This operation is irreversible. Use with caution. 
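Before deleting a version, it can help to confirm which tags remain and whether anything in the current workspace still depends on it. A short sketch composed from commands in this guide: ```bash\n# Check remaining versions and local dependents before deleting\nprovisioning oci tags kubernetes\nprovisioning dep tree kubernetes\nprovisioning oci delete kubernetes:1.27.0 # prompts for confirmation unless --force is passed\n```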
--- ### Copy Extension **Copy extension between registries** ```bash\\nprovisioning oci copy [OPTIONS] # Examples:\\n# Copy between namespaces in same registry\\nprovisioning oci copy \\\\ localhost:5000/test/kubernetes:1.28.0 \\\\ localhost:5000/production/kubernetes:1.28.0 # Copy between different registries\\nprovisioning oci copy \\\\ localhost:5000/provisioning-extensions/kubernetes:1.28.0 \\\\ harbor.company.com/provisioning/kubernetes:1.28.0\\n```plaintext --- ### Show OCI Configuration **Display current OCI settings** ```bash\\nprovisioning oci config # Output:\\n{ tool: \\"oras\\" registry: \\"localhost:5000\\" namespace: { extensions: \\"provisioning-extensions\\" platform: \\"provisioning-platform\\" } cache_dir: \\"~/.provisioning/oci-cache\\" tls_enabled: false\\n}\\n```plaintext --- ## Dependency Management ### Dependency Configuration Dependencies are configured in `workspace/config/provisioning.yaml`: ```yaml\\ndependencies: # Core provisioning system core: source: \\"oci://harbor.company.com/provisioning-core:v3.5.0\\" # Extensions (providers, taskservs, clusters) extensions: source_type: \\"oci\\" oci: registry: \\"localhost:5000\\" namespace: \\"provisioning-extensions\\" tls_enabled: false auth_token_path: \\"~/.provisioning/tokens/oci\\" modules: providers: - \\"oci://localhost:5000/provisioning-extensions/aws:2.0.0\\" - \\"oci://localhost:5000/provisioning-extensions/upcloud:1.5.0\\" taskservs: - \\"oci://localhost:5000/provisioning-extensions/kubernetes:1.28.0\\" - \\"oci://localhost:5000/provisioning-extensions/containerd:1.7.0\\" - \\"oci://localhost:5000/provisioning-extensions/etcd:3.5.0\\" clusters: - \\"oci://localhost:5000/provisioning-extensions/buildkit:0.12.0\\" # Platform services platform: source_type: \\"oci\\" oci: registry: \\"harbor.company.com\\" namespace: \\"provisioning-platform\\"\\n```plaintext ### Resolve Dependencies ```bash\\n# Resolve and install all configured dependencies\\nprovisioning dep resolve # Dry-run (show what would be installed)\\nprovisioning dep resolve --dry-run # Resolve with specific version constraints\\nprovisioning dep resolve --update # Update to latest versions\\n```plaintext ### Check for Updates ```bash\\n# Check all dependencies for updates\\nprovisioning dep check-updates # Output:\\n┬─────────────┬─────────┬────────┬──────────────────┐\\n│ name │ current │ latest │ update_available │\\n├─────────────┼─────────┼────────┼──────────────────┤\\n│ kubernetes │ 1.28.0 │ 1.29.0 │ true │\\n│ containerd │ 1.7.0 │ 1.7.0 │ false │\\n│ etcd │ 3.5.0 │ 3.5.1 │ true │\\n└─────────────┴─────────┴────────┴──────────────────┘\\n```plaintext ### Update Dependency ```bash\\n# Update specific extension to latest version\\nprovisioning dep update kubernetes # Update to specific version\\nprovisioning dep update kubernetes --version 1.29.0\\n```plaintext ### Dependency Tree ```bash\\n# Show dependency tree for extension\\nprovisioning dep tree kubernetes # Output:\\nkubernetes:1.28.0\\n├── containerd:1.7.0\\n│ └── runc:1.1.0\\n├── etcd:3.5.0\\n└── kubectl:1.28.0\\n```plaintext ### Validate Dependencies ```bash\\n# Validate dependency graph (check for cycles, conflicts)\\nprovisioning dep validate # Validate specific extension\\nprovisioning dep validate kubernetes\\n```plaintext --- ## Extension Development ### Create New Extension ```bash\\n# Generate extension from template\\nprovisioning generate extension taskserv redis # Directory structure created:\\n# extensions/taskservs/redis/\\n# ├── kcl/\\n# │ ├── kcl.mod\\n# │ ├── redis.k\\n# 
│ ├── version.k\\n# │ └── dependencies.k\\n# ├── scripts/\\n# │ ├── install.nu\\n# │ ├── check.nu\\n# │ └── uninstall.nu\\n# ├── templates/\\n# ├── docs/\\n# │ └── README.md\\n# ├── tests/\\n# └── manifest.yaml\\n```plaintext ### Extension Manifest Edit `manifest.yaml`: ```yaml\\nname: redis\\ntype: taskserv\\nversion: 1.0.0\\ndescription: Redis in-memory data structure store\\nauthor: Your Name\\nlicense: MIT\\nhomepage: https://redis.io\\nrepository: https://gitea.example.com/provisioning-extensions/redis dependencies: os: \\">=1.0.0\\" # Required OS taskserv tags: - database - cache - key-value platforms: - linux/amd64 - linux/arm64 min_provisioning_version: \\"3.0.0\\"\\n```plaintext ### Test Extension Locally ```bash\\n# Load extension from local path\\nprovisioning module load taskserv workspace_dev redis --source local # Test installation\\nprovisioning taskserv create redis --infra test-env --check # Run tests\\nprovisioning test extension redis\\n```plaintext ### Validate Extension ```bash\\n# Validate extension structure\\nprovisioning oci package validate ./extensions/taskservs/redis # Output:\\n✓ Extension structure valid\\nWarnings: - Missing docs/README.md (recommended)\\n```plaintext ### Package Extension ```bash\\n# Package as OCI artifact\\nprovisioning oci package ./extensions/taskservs/redis # Output: redis-1.0.0.tar.gz # Inspect package\\nprovisioning oci inspect-artifact redis-1.0.0.tar.gz\\n```plaintext ### Publish Extension ```bash\\n# Login to registry (one-time)\\nprovisioning oci login localhost:5000 # Publish extension\\nprovisioning oci push ./extensions/taskservs/redis redis 1.0.0 # Verify publication\\nprovisioning oci tags redis # Share with team\\necho \\"Published: oci://localhost:5000/provisioning-extensions/redis:1.0.0\\"\\n```plaintext --- ## Registry Setup ### Local Registry (Development) **Using Zot (lightweight)**: ```bash\\n# Start Zot registry\\nprovisioning oci-registry start # Configuration:\\n# - Endpoint: localhost:5000\\n# - Storage: ~/.provisioning/oci-registry/\\n# - No authentication\\n# - TLS disabled # Stop registry\\nprovisioning oci-registry stop # Check status\\nprovisioning oci-registry status\\n```plaintext **Manual Zot Setup**: ```bash\\n# Install Zot\\nbrew install project-zot/tap/zot # Create config\\ncat > zot-config.json <=1.7.0\\" etcd: \\"^3.5.0\\" # 3.5.x compatible\\n```plaintext ❌ **DON\'T**: Leave dependencies unversioned ```yaml\\ndependencies: containerd: \\"*\\" # Too permissive\\n```plaintext --- ### Security ✅ **DO**: - Use TLS for remote registries\\n- Rotate authentication tokens regularly\\n- Scan images for vulnerabilities (Harbor)\\n- Sign artifacts (cosign) ❌ **DON\'T**: - Use `--insecure` in production\\n- Store passwords in config files\\n- Skip certificate verification --- ## Related Documentation - [Multi-Repository Architecture](../architecture/MULTI_REPO_ARCHITECTURE.md) - Overall architecture\\n- [Extension Development Guide](extension-development.md) - Create extensions\\n- [Dependency Resolution](dependency-resolution.md) - How dependencies work\\n- OCI Client Library - Low-level API --- **Maintained By**: Documentation Team\\n**Last Updated**: 2025-10-06\\n**Next Review**: 2026-01-06","breadcrumbs":"OCI Registry Guide » Dependency Resolution Failed","id":"1684","title":"Dependency Resolution Failed"},"1685":{"body":"Date : 2025-11-23 Version : 1.0.0 For : provisioning v3.6.0+ Access powerful functionality from prov-ecosystem and provctl directly through provisioning CLI.","breadcrumbs":"Integrations 
Quick Start » Prov-Ecosystem & Provctl Integrations - Quick Start Guide","id":"1685","title":"Prov-Ecosystem & Provctl Integrations - Quick Start Guide"},"1686":{"body":"Four integrated feature sets: Feature Purpose Best For Runtime Abstraction Unified Docker/Podman/OrbStack/Colima/nerdctl Multi-platform deployments SSH Advanced Pooling, circuit breaker, retry strategies Large-scale distributed operations Backup System Multi-backend backups (Restic, Borg, Tar, Rsync) Data protection & disaster recovery GitOps Events Event-driven deployments from Git Continuous deployment automation Service Management Cross-platform services (systemd, launchd, runit) Infrastructure service orchestration","breadcrumbs":"Integrations Quick Start » Overview","id":"1686","title":"Overview"},"1687":{"body":"","breadcrumbs":"Integrations Quick Start » Quick Start Commands","id":"1687","title":"Quick Start Commands"},"1688":{"body":"# 1. Check what runtimes you have available\\nprovisioning runtime list # 2. Detect which runtime provisioning will use\\nprovisioning runtime detect # 3. Verify runtime works\\nprovisioning runtime info\\n```plaintext **Expected Output**: ```plaintext\\nAvailable runtimes: • docker • podman\\n```plaintext --- ## 1️⃣ Runtime Abstraction ### What It Does Automatically detects and uses Docker, Podman, OrbStack, Colima, or nerdctl - whichever is available on your system. Eliminates hardcoding \\"docker\\" commands. ### Commands ```bash\\n# Detect available runtime\\nprovisioning runtime detect\\n# Output: \\"Detected runtime: docker\\" # Execute command in runtime\\nprovisioning runtime exec \\"docker images\\"\\n# Runs: docker images # Get runtime info\\nprovisioning runtime info\\n# Shows: name, command, version # List all available runtimes\\nprovisioning runtime list\\n# Shows: docker, podman, orbstack... # Adapt docker-compose for detected runtime\\nprovisioning runtime compose ./docker-compose.yml\\n# Output: docker compose -f ./docker-compose.yml\\n```plaintext ### Examples **Use Case 1: Works on macOS with OrbStack, Linux with Docker** ```bash\\n# User on macOS with OrbStack\\n$ provisioning runtime exec \\"docker run -it ubuntu bash\\"\\n# Automatically uses orbctl (OrbStack) # User on Linux with Docker\\n$ provisioning runtime exec \\"docker run -it ubuntu bash\\"\\n# Automatically uses docker\\n```plaintext **Use Case 2: Run docker-compose with detected runtime** ```bash\\n# Detect and run compose\\n$ compose_cmd=$(provisioning runtime compose ./docker-compose.yml)\\n$ eval $compose_cmd up -d\\n# Works with docker, podman, nerdctl automatically\\n```plaintext ### Configuration No configuration needed! Runtime is auto-detected in order: 1. Docker (macOS: OrbStack first; Linux: Docker first)\\n2. Podman\\n3. OrbStack (macOS)\\n4. Colima (macOS)\\n5. nerdctl --- ## 2️⃣ SSH Advanced Operations ### What It Does Advanced SSH with connection pooling (90% faster), circuit breaker for fault isolation, and deployment strategies (rolling, blue-green, canary). 
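As a quick end-to-end sketch (hostnames are placeholders; each command is documented below), a pooled rolling restart with the circuit breaker checked afterwards looks like this: ```bash\n# Pool two hosts, restart a service with a rolling strategy, then inspect fault-isolation state\nprovisioning ssh pool connect srv01.example.com root\nprovisioning ssh pool connect srv02.example.com root\nprovisioning ssh pool exec [srv01, srv02] \"systemctl restart myapp\" --strategy rolling\nprovisioning ssh circuit-breaker # state=closed means no host has tripped the breaker\n```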
### Commands ```bash\\n# Create SSH pool connection to host\\nprovisioning ssh pool connect server.example.com root --port 22 --timeout 30 # Check pool status\\nprovisioning ssh pool status # List available deployment strategies\\nprovisioning ssh strategies\\n# Output: rolling, blue-green, canary # Configure retry strategy\\nprovisioning ssh retry-config exponential --max-retries 3 # Check circuit breaker status\\nprovisioning ssh circuit-breaker\\n# Output: state=closed, failures=0/5\\n```plaintext ### Deployment Strategies | Strategy | Use Case | Risk |\\n|----------|----------|------|\\n| **Rolling** | Gradual rollout across hosts | Low (but slower) |\\n| **Blue-Green** | Zero-downtime, instant rollback | Very low |\\n| **Canary** | Test on small % before full rollout | Very low (5% at risk) | ### Example: Multi-Host Deployment ```bash\\n# Set up SSH pool\\nprovisioning ssh pool connect srv01.example.com root\\nprovisioning ssh pool connect srv02.example.com root\\nprovisioning ssh pool connect srv03.example.com root # Execute on pool (all 3 hosts in parallel)\\nprovisioning ssh pool exec [srv01, srv02, srv03] \\"systemctl restart myapp\\" --strategy rolling # Check status\\nprovisioning ssh pool status\\n# Output: connections=3, active=0, idle=3, circuit_breaker=green\\n```plaintext ### Retry Strategies ```bash\\n# Exponential backoff: 100ms, 200ms, 400ms, 800ms...\\nprovisioning ssh retry-config exponential --max-retries 5 # Linear backoff: 100ms, 200ms, 300ms, 400ms...\\nprovisioning ssh retry-config linear --max-retries 3 # Fibonacci backoff: 100ms, 100ms, 200ms, 300ms, 500ms...\\nprovisioning ssh retry-config fibonacci --max-retries 4\\n```plaintext --- ## 3️⃣ Backup System ### What It Does Multi-backend backup management with Restic, BorgBackup, Tar, or Rsync. Supports local, S3, SFTP, REST API, and Backblaze B2 repositories. 
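The backup target is selected through the `--repository` value. Only the S3 form appears in the examples below, so the local and SFTP forms in this sketch are assumptions based on restic\'s own repository URI syntax: ```bash\n# S3 form is from this guide; local and SFTP forms assume the value is passed through to restic\nprovisioning backup create s3-backup /data --backend restic --repository s3://my-bucket/backups\nprovisioning backup create local-backup /data --backend restic --repository /srv/backups/restic # assumed local form\nprovisioning backup create sftp-backup /data --backend restic --repository sftp:backup@backup-host:/srv/restic # assumed SFTP form\n```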
### Commands ```bash\\n# Create backup job\\nprovisioning backup create daily-backup /data /var/lib \\\\ --backend restic \\\\ --repository s3://my-bucket/backups # Restore from snapshot\\nprovisioning backup restore snapshot-001 --restore_path /data # List available snapshots\\nprovisioning backup list # Schedule regular backups\\nprovisioning backup schedule daily-backup \\"0 2 * * *\\" \\\\ --paths [\\"/data\\" \\"/var/lib\\"] \\\\ --backend restic # Show retention policy\\nprovisioning backup retention\\n# Output: daily=7, weekly=4, monthly=12, yearly=5 # Check backup job status\\nprovisioning backup status backup-job-001\\n```plaintext ### Backend Comparison | Backend | Speed | Compression | Best For |\\n|---------|-------|-------------|----------|\\n| Restic | ⚡⚡⚡ | Excellent | Cloud backups |\\n| BorgBackup | ⚡⚡ | Excellent | Large archives |\\n| Tar | ⚡⚡⚡ | Good | Simple backups |\\n| Rsync | ⚡⚡⚡ | None | Incremental syncs | ### Example: Automated Daily Backups to S3 ```bash\\n# Create backup configuration\\nprovisioning backup create app-backup /opt/myapp /var/lib/myapp \\\\ --backend restic \\\\ --repository s3://prod-backups/myapp # Schedule daily at 2 AM\\nprovisioning backup schedule app-backup \\"0 2 * * *\\" # Set retention: keep 7 days, 4 weeks, 12 months, 5 years\\nprovisioning backup retention \\\\ --daily 7 \\\\ --weekly 4 \\\\ --monthly 12 \\\\ --yearly 5 # Verify backup was created\\nprovisioning backup list\\n```plaintext ### Dry-Run (Test First) ```bash\\n# Test backup without actually creating it\\nprovisioning backup create test-backup /data --check # Test restore without actually restoring\\nprovisioning backup restore snapshot-001 --check\\n```plaintext --- ## 4️⃣ GitOps Event-Driven Deployments ### What It Does Automatically trigger deployments from Git events (push, PR, webhook, scheduled). Supports GitHub, GitLab, Gitea. ### Commands ```bash\\n# Load GitOps rules from configuration file\\nprovisioning gitops rules ./gitops-rules.yaml # Watch for Git events (starts webhook listener)\\nprovisioning gitops watch --provider github --webhook-port 8080 # List supported events\\nprovisioning gitops events\\n# Output: push, pull-request, webhook, scheduled, health-check, manual # Manually trigger deployment\\nprovisioning gitops trigger deploy-prod --environment prod # List active deployments\\nprovisioning gitops deployments --status running # Show GitOps status\\nprovisioning gitops status\\n# Output: active_rules=5, total=42, successful=40, failed=2\\n```plaintext ### Example: GitOps Configuration **File: `gitops-rules.yaml`** ```yaml\\nrules: - name: deploy-prod provider: github repository: https://github.com/myorg/myrepo branch: main events: - push targets: - prod command: \\"provisioning deploy\\" require_approval: true - name: deploy-staging provider: github repository: https://github.com/myorg/myrepo branch: develop events: - push - pull-request targets: - staging command: \\"provisioning deploy\\" require_approval: false\\n```plaintext **Then:** ```bash\\n# Load rules\\nprovisioning gitops rules ./gitops-rules.yaml # Watch for events\\nprovisioning gitops watch --provider github # When you push to main, deployment auto-triggers!\\n# git push origin main → provisioning deploy runs automatically\\n```plaintext --- ## 5️⃣ Service Management ### What It Does Install, start, stop, and manage services across systemd (Linux), launchd (macOS), runit, and OpenRC. 
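A compact lifecycle sketch composed from the commands listed below (binary path, user and working directory are placeholders); the same invocations work whether the detected init system is systemd, launchd, runit or OpenRC: ```bash\n# Detect the init system, then install and manage a service the same way on any platform\nprovisioning service detect-init\nprovisioning service install myapp /usr/local/bin/myapp --user myapp --working-dir /opt/myapp\nprovisioning service start myapp\nprovisioning service status myapp\n```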
### Commands ```bash\\n# Install service\\nprovisioning service install myapp /usr/local/bin/myapp \\\\ --user myapp \\\\ --working-dir /opt/myapp # Start service\\nprovisioning service start myapp # Stop service\\nprovisioning service stop myapp # Restart service\\nprovisioning service restart myapp # Check service status\\nprovisioning service status myapp\\n# Output: running=true, uptime=86400s, restarts=2 # List all services\\nprovisioning service list # Detect init system\\nprovisioning service detect-init\\n# Output: systemd (Linux), launchd (macOS), etc.\\n```plaintext ### Example: Install Custom Service ```bash\\n# On Linux (systemd)\\nprovisioning service install provisioning-worker \\\\ /usr/local/bin/provisioning-worker \\\\ --user provisioning \\\\ --working-dir /opt/provisioning # On macOS (launchd) - works the same!\\nprovisioning service install provisioning-worker \\\\ /usr/local/bin/provisioning-worker \\\\ --user provisioning \\\\ --working-dir /opt/provisioning # Service file is generated automatically for your platform\\nprovisioning service start provisioning-worker\\nprovisioning service status provisioning-worker\\n```plaintext --- ## 🎯 Common Workflows ### Workflow 1: Multi-Platform Deployment ```bash\\n# Works on macOS with OrbStack, Linux with Docker, etc.\\nprovisioning runtime detect # Detects your platform\\nprovisioning runtime exec \\"docker ps\\" # Uses your runtime\\n```plaintext ### Workflow 2: Large-Scale SSH Operations ```bash\\n# Connect to multiple servers\\nfor host in srv01 srv02 srv03; do provisioning ssh pool connect $host.example.com root\\ndone # Execute in parallel with 3x retry\\nprovisioning ssh pool exec [srv01, srv02, srv03] \\\\ \\"systemctl restart app\\" \\\\ --strategy rolling \\\\ --retry exponential\\n```plaintext ### Workflow 3: Automated Backups ```bash\\n# Create backup job\\nprovisioning backup create daily /opt/app /data \\\\ --backend restic \\\\ --repository s3://backups # Schedule for 2 AM every day\\nprovisioning backup schedule daily \\"0 2 * * *\\" # Verify it works\\nprovisioning backup list\\n```plaintext ### Workflow 4: Continuous Deployment from Git ```bash\\n# Define rules in YAML\\ncat > gitops-rules.yaml << \'EOF\'\\nrules: - name: deploy-prod provider: github repository: https://github.com/myorg/repo branch: main events: [push] targets: [prod] command: \\"provisioning deploy\\"\\nEOF # Load and activate\\nprovisioning gitops rules ./gitops-rules.yaml\\nprovisioning gitops watch --provider github # Now pushing to main auto-deploys!\\n```plaintext --- ## 🔧 Advanced Configuration ### Using with KCL Configuration All integrations support KCL schemas for advanced configuration: ```kcl\\nimport provisioning.integrations as integ # Runtime configuration\\nintegrations: integ.IntegrationConfig = { runtime = { preferred = \\"podman\\" check_order = [\\"podman\\", \\"docker\\", \\"nerdctl\\"] timeout_secs = 5 enable_cache = True } # Backup with retention policy backup = { default_backend = \\"restic\\" default_repository = { type = \\"s3\\" bucket = \\"prod-backups\\" prefix = \\"daily\\" } jobs = [] verify_after_backup = True } # GitOps rules with approval gitops = { rules = [] default_strategy = \\"blue-green\\" dry_run_by_default = False enable_audit_log = True }\\n}\\n```plaintext --- ## 💡 Tips & Tricks ### Tip 1: Dry-Run Mode All major operations support `--check` for testing: ```bash\\nprovisioning runtime exec \\"systemctl restart app\\" --check\\n# Output: Would execute: [docker exec ...] 
provisioning backup create test /data --check\\n# Output: Backup would be created: [test] provisioning gitops trigger deploy-test --check\\n# Output: Deployment would trigger\\n```plaintext ### Tip 2: Output Formats Some commands support JSON output: ```bash\\nprovisioning runtime list --out json\\nprovisioning backup list --out json\\nprovisioning gitops deployments --out json\\n```plaintext ### Tip 3: Integration with Scripts Chain commands in shell scripts: ```bash\\n#!/bin/bash # Detect runtime and use it\\nRUNTIME=$(provisioning runtime detect | grep -oP \'docker|podman|nerdctl\') # Execute using detected runtime\\nprovisioning runtime exec \\"docker ps\\" # Create backup before deploy\\nprovisioning backup create pre-deploy-$(date +%s) /opt/app # Deploy\\nprovisioning deploy # Verify with GitOps\\nprovisioning gitops status\\n```plaintext --- ## 🐛 Troubleshooting ### Problem: \\"No container runtime detected\\" **Solution**: Install Docker, Podman, or OrbStack: ```bash\\n# macOS\\nbrew install orbstack # Linux\\nsudo apt-get install docker.io # Then verify\\nprovisioning runtime detect\\n```plaintext ### Problem: SSH connection timeout **Solution**: Check port and timeout settings: ```bash\\n# Use different port\\nprovisioning ssh pool connect server.example.com root --port 2222 # Increase timeout\\nprovisioning ssh pool connect server.example.com root --timeout 60\\n```plaintext ### Problem: Backup fails with \\"Permission denied\\" **Solution**: Check permissions on backup path: ```bash\\n# Check if user can read target paths\\nls -l /data # Should be readable # Run with elevated privileges if needed\\nsudo provisioning backup create mybak /data --backend restic\\n```plaintext --- ## 📚 Learn More | Topic | Location |\\n|-------|----------|\\n| Architecture | `docs/architecture/ECOSYSTEM_INTEGRATION.md` |\\n| CLI Help | `provisioning help integrations` |\\n| Rust Bridge | `provisioning/platform/integrations/provisioning-bridge/` |\\n| Nushell Modules | `provisioning/core/nulib/lib_provisioning/integrations/` |\\n| KCL Schemas | `provisioning/kcl/integrations/` | --- ## 🆘 Need Help? 
```bash\\n# General help\\nprovisioning help integrations # Specific command help\\nprovisioning runtime --help\\nprovisioning backup --help\\nprovisioning gitops --help # System diagnostics\\nprovisioning status\\nprovisioning health\\n```plaintext --- **Last Updated**: 2025-11-23\\n**Version**: 1.0.0","breadcrumbs":"Integrations Quick Start » 🏃 30-Second Test","id":"1688","title":"🏃 30-Second Test"},"1689":{"body":"Status : ✅ COMPLETED - All phases (1-6) implemented and tested Date : December 2025 Tests : 25/25 passing (100%)","breadcrumbs":"Secrets Service Layer Complete » Secrets Service Layer (SST) - Complete User Guide","id":"1689","title":"Secrets Service Layer (SST) - Complete User Guide"},"169":{"body":"","breadcrumbs":"First Deployment » Common Deployment Patterns","id":"169","title":"Common Deployment Patterns"},"1690":{"body":"The Secrets Service Layer (SST) is an enterprise-grade unified solution for managing all types of secrets (database credentials, SSH keys, API tokens, provider credentials) through a REST API controlled by Cedar policies with workspace isolation and real-time monitoring.","breadcrumbs":"Secrets Service Layer Complete » 📋 Executive Summary","id":"1690","title":"📋 Executive Summary"},"1691":{"body":"Feature Description Status Centralized Management Unified API for all secrets ✅ Complete Cedar Authorization Mandatory configurable policies ✅ Complete Workspace Isolation Secrets isolated by workspace and domain ✅ Complete Auto Rotation Automatic scheduling and rotation ✅ Complete Secret Sharing Cross-workspace sharing with access control ✅ Complete Real-time Monitoring Dashboard, expiration alerts ✅ Complete Complete Audit Full operation logging ✅ Complete KMS Encryption Envelope-based key encryption ✅ Complete Temporal + Permanent Support for SSH and provider credentials ✅ Complete","breadcrumbs":"Secrets Service Layer Complete » ✨ Key Features","id":"1691","title":"✨ Key Features"},"1692":{"body":"","breadcrumbs":"Secrets Service Layer Complete » 🚀 Quick Start (5 minutes)","id":"1692","title":"🚀 Quick Start (5 minutes)"},"1693":{"body":"# Register workspace\\nprovisioning workspace register librecloud /Users/Akasha/project-provisioning/workspace_librecloud # Verify\\nprovisioning workspace list\\nprovisioning workspace active\\n```plaintext ### 2. Create your first database secret ```bash\\n# Create PostgreSQL credential\\nprovisioning secrets create database postgres \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --user admin \\\\ --password \\"secure_password\\" \\\\ --host db.local \\\\ --port 5432 \\\\ --database myapp\\n```plaintext ### 3. Retrieve the secret ```bash\\n# Get credential (requires Cedar authorization)\\nprovisioning secrets get librecloud/wuji/postgres/admin_password\\n```plaintext ### 4. 
List secrets by domain ```bash\\n# List all PostgreSQL secrets\\nprovisioning secrets list --workspace librecloud --domain postgres # List all infrastructure secrets\\nprovisioning secrets list --workspace librecloud --infra wuji\\n```plaintext --- ## 📚 Complete Guide by Phases ### Phase 1: Database and Application Secrets #### 1.1 Create Database Credentials **REST Endpoint**: ```bash\\nPOST /api/v1/secrets/database\\nContent-Type: application/json { \\"workspace_id\\": \\"librecloud\\", \\"infra_id\\": \\"wuji\\", \\"db_type\\": \\"postgresql\\", \\"host\\": \\"db.librecloud.internal\\", \\"port\\": 5432, \\"database\\": \\"production_db\\", \\"username\\": \\"admin\\", \\"password\\": \\"encrypted_password\\"\\n}\\n```plaintext **CLI Command**: ```bash\\nprovisioning secrets create database postgres \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --user admin \\\\ --password \\"password\\" \\\\ --host db.librecloud.internal \\\\ --port 5432 \\\\ --database production_db\\n```plaintext **Result**: Secret stored in SurrealDB with KMS encryption ```plaintext\\n✓ Secret created: librecloud/wuji/postgres/admin_password Workspace: librecloud Infrastructure: wuji Domain: postgres Type: Database Encrypted: Yes (KMS)\\n```plaintext #### 1.2 Create Application Secrets **REST API**: ```bash\\nPOST /api/v1/secrets/application\\n{ \\"workspace_id\\": \\"librecloud\\", \\"app_name\\": \\"myapp-web\\", \\"key_type\\": \\"api_token\\", \\"value\\": \\"sk_live_abc123xyz\\"\\n}\\n```plaintext **CLI**: ```bash\\nprovisioning secrets create app myapp-web \\\\ --workspace librecloud \\\\ --domain web \\\\ --type api_token \\\\ --value \\"sk_live_abc123xyz\\"\\n```plaintext #### 1.3 List Secrets **REST API**: ```bash\\nGET /api/v1/secrets/list?workspace=librecloud&domain=postgres Response:\\n{ \\"secrets\\": [ { \\"path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"workspace_id\\": \\"librecloud\\", \\"domain\\": \\"postgres\\", \\"secret_type\\": \\"Database\\", \\"created_at\\": \\"2025-12-06T10:00:00Z\\", \\"created_by\\": \\"admin\\" } ]\\n}\\n```plaintext **CLI**: ```bash\\n# All workspace secrets\\nprovisioning secrets list --workspace librecloud # Filter by domain\\nprovisioning secrets list --workspace librecloud --domain postgres # Filter by infrastructure\\nprovisioning secrets list --workspace librecloud --infra wuji\\n```plaintext #### 1.4 Retrieve a Secret **REST API**: ```bash\\nGET /api/v1/secrets/librecloud/wuji/postgres/admin_password Requires:\\n- Header: Authorization: Bearer \\n- Cedar verification: [user has read permission]\\n- If MFA required: mfa_verified=true in JWT\\n```plaintext **CLI**: ```bash\\n# Get full secret\\nprovisioning secrets get librecloud/wuji/postgres/admin_password # Output:\\n# Host: db.librecloud.internal\\n# Port: 5432\\n# User: admin\\n# Database: production_db\\n# Password: [encrypted in transit]\\n```plaintext --- ### Phase 2: SSH Keys and Provider Credentials #### 2.1 Temporal SSH Keys (Auto-expiring) **Use Case**: Temporary server access (max 24 hours) ```bash\\n# Generate temporary SSH key (TTL 2 hours)\\nprovisioning secrets create ssh \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --server web01 \\\\ --ttl 2h # Result:\\n# ✓ SSH key generated\\n# Server: web01\\n# TTL: 2 hours\\n# Expires at: 2025-12-06T12:00:00Z\\n# Private Key: [encrypted]\\n```plaintext **Technical Details**: - Generated in real-time by Orchestrator\\n- Stored in memory (TTL-based)\\n- Automatic revocation on expiry\\n- Complete audit trail in vault_audit #### 2.2 
Permanent SSH Keys (Stored) **Use Case**: Long-duration infrastructure keys ```bash\\n# Create permanent SSH key (stored in DB)\\nprovisioning secrets create ssh \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --server web01 \\\\ --permanent # Result:\\n# ✓ Permanent SSH key created\\n# Storage: SurrealDB (encrypted)\\n# Rotation: Manual (or automatic if configured)\\n# Access: Cedar controlled\\n```plaintext #### 2.3 Provider Credentials **UpCloud API (Temporal)**: ```bash\\nprovisioning secrets create provider upcloud \\\\ --workspace librecloud \\\\ --roles \\"server,network,storage\\" \\\\ --ttl 4h # Result:\\n# ✓ UpCloud credential generated\\n# Token: tmp_upcloud_abc123\\n# Roles: server, network, storage\\n# TTL: 4 hours\\n```plaintext **UpCloud API (Permanent)**: ```bash\\nprovisioning secrets create provider upcloud \\\\ --workspace librecloud \\\\ --roles \\"server,network\\" \\\\ --permanent # Result:\\n# ✓ Permanent UpCloud credential created\\n# Token: upcloud_live_xyz789\\n# Storage: SurrealDB\\n# Rotation: Manual\\n```plaintext --- ### Phase 3: Auto Rotation #### 3.1 Plan Automatic Rotation **Predefined Rotation Policies**: | Type | Prod | Dev |\\n|------|------|-----|\\n| **Database** | Every 30d | Every 90d |\\n| **Application** | Every 60d | Every 14d |\\n| **SSH** | Every 365d | Every 90d |\\n| **Provider** | Every 180d | Every 30d | **Force Immediate Rotation**: ```bash\\n# Force rotation now\\nprovisioning secrets rotate librecloud/wuji/postgres/admin_password # Result:\\n# ✓ Rotation initiated\\n# Status: In Progress\\n# New password: [generated]\\n# Old password: [archived]\\n# Next rotation: 2025-01-05\\n```plaintext **Check Rotation Status**: ```bash\\nGET /api/v1/secrets/{path}/rotation-status Response:\\n{ \\"path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"status\\": \\"pending\\", \\"next_rotation\\": \\"2025-01-05T10:00:00Z\\", \\"last_rotation\\": \\"2025-12-05T10:00:00Z\\", \\"days_remaining\\": 30, \\"failure_count\\": 0\\n}\\n```plaintext #### 3.2 Rotation Job Scheduler (Background) System automatically runs rotations every hour: ```plaintext\\n┌─────────────────────────────────┐\\n│ Rotation Job Scheduler │\\n│ - Interval: 1 hour │\\n│ - Max concurrency: 5 rotations │\\n│ - Auto retry │\\n└─────────────────────────────────┘ ↓ Get due secrets ↓ Generate new credentials ↓ Validate functionality ↓ Update SurrealDB ↓ Log to audit trail\\n```plaintext **Check Scheduler Status**: ```bash\\nprovisioning secrets scheduler status # Result:\\n# Status: Running\\n# Last check: 2025-12-06T11:00:00Z\\n# Completed rotations: 24\\n# Failed rotations: 0\\n```plaintext --- ### Phase 3.2: Share Secrets Across Workspaces #### Create a Grant (Access Authorization) **Scenario**: Share DB credential between `librecloud` and `staging` ```bash\\n# REST API\\nPOST /api/v1/secrets/{path}/grant { \\"source_workspace\\": \\"librecloud\\", \\"target_workspace\\": \\"staging\\", \\"permission\\": \\"read\\", # read, write, rotate \\"require_approval\\": false\\n} # Response:\\n{ \\"grant_id\\": \\"grant-12345\\", \\"secret_path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"source_workspace\\": \\"librecloud\\", \\"target_workspace\\": \\"staging\\", \\"permission\\": \\"read\\", \\"status\\": \\"active\\", \\"granted_at\\": \\"2025-12-06T10:00:00Z\\", \\"access_count\\": 0\\n}\\n```plaintext **CLI**: ```bash\\nprovisioning secrets grant \\\\ --secret librecloud/wuji/postgres/admin_password \\\\ --target-workspace staging \\\\ --permission read # ✓ Grant created: 
grant-12345\\n# Source workspace: librecloud\\n# Target workspace: staging\\n# Permission: Read\\n# Approval required: No\\n```plaintext #### Revoke a Grant ```bash\\n# Revoke access immediately\\nPOST /api/v1/secrets/grant/{grant_id}/revoke\\n{ \\"reason\\": \\"User left the team\\"\\n} # CLI\\nprovisioning secrets revoke-grant grant-12345 \\\\ --reason \\"User left the team\\" # ✓ Grant revoked\\n# Status: Revoked\\n# Access records: 42\\n```plaintext #### List Grants ```bash\\n# All workspace grants\\nGET /api/v1/secrets/grants?workspace=librecloud # Response:\\n{ \\"grants\\": [ { \\"grant_id\\": \\"grant-12345\\", \\"secret_path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"target_workspace\\": \\"staging\\", \\"permission\\": \\"read\\", \\"status\\": \\"active\\", \\"access_count\\": 42, \\"last_accessed\\": \\"2025-12-06T10:30:00Z\\" } ]\\n}\\n```plaintext --- ### Phase 3.4: Monitoring and Alerts #### Dashboard Metrics ```bash\\nGET /api/v1/secrets/monitoring/dashboard Response:\\n{ \\"total_secrets\\": 45, \\"temporal_secrets\\": 12, \\"permanent_secrets\\": 33, \\"expiring_secrets\\": [ { \\"path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"domain\\": \\"postgres\\", \\"days_remaining\\": 5, \\"severity\\": \\"critical\\" } ], \\"failed_access_attempts\\": [ { \\"user\\": \\"alice\\", \\"secret_path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"reason\\": \\"insufficient_permissions\\", \\"timestamp\\": \\"2025-12-06T10:00:00Z\\" } ], \\"rotation_metrics\\": { \\"total\\": 45, \\"completed\\": 40, \\"pending\\": 3, \\"failed\\": 2 }\\n}\\n```plaintext **CLI**: ```bash\\nprovisioning secrets monitoring dashboard # ✓ Secrets Dashboard - Librecloud\\n#\\n# Total secrets: 45\\n# Temporal secrets: 12\\n# Permanent secrets: 33\\n#\\n# ⚠️ CRITICAL (next 3 days): 2\\n# - librecloud/wuji/postgres/admin_password (5 days)\\n# - librecloud/wuji/redis/password (1 day)\\n#\\n# ⚡ WARNING (next 7 days): 3\\n# - librecloud/app/api_token (7 days)\\n#\\n# 📊 Rotations completed: 40/45 (89%)\\n```plaintext #### Expiring Secrets Alerts ```bash\\nGET /api/v1/secrets/monitoring/expiring?days=7 Response:\\n{ \\"expiring_secrets\\": [ { \\"path\\": \\"librecloud/wuji/postgres/admin_password\\", \\"domain\\": \\"postgres\\", \\"expires_in_days\\": 5, \\"type\\": \\"database\\", \\"last_rotation\\": \\"2025-11-05T10:00:00Z\\" } ]\\n}\\n```plaintext --- ## 🔐 Cedar Authorization All operations are protected by **Cedar policies**: ### Example Policy: Production Secret Access ```cedar\\n// Requires MFA for production secrets\\n@id(\\"prod-secret-access-mfa\\")\\npermit ( principal, action == Provisioning::Action::\\"access\\", resource is Provisioning::Secret in Provisioning::Environment::\\"production\\"\\n) when { context.mfa_verified == true && resource.is_expired == false\\n}; // Only admins can create permanent secrets\\n@id(\\"permanent-secret-admin-only\\")\\npermit ( principal in Provisioning::Role::\\"security_admin\\", action == Provisioning::Action::\\"create\\", resource is Provisioning::Secret\\n) when { resource.lifecycle == \\"permanent\\"\\n};\\n```plaintext ### Verify Authorization ```bash\\n# Test Cedar decision\\nprovisioning policies check alice can access secret:librecloud/postgres/password # Result:\\n# User: alice\\n# Resource: secret:librecloud/postgres/password\\n# Decision: ✅ ALLOWED\\n# - Role: database_admin\\n# - MFA verified: Yes\\n# - Workspace: librecloud\\n```plaintext --- ## 🏗️ Data Structure ### Secret in Database ```sql\\n-- Table vault_secrets 
(SurrealDB)\\n{ id: \\"secret:uuid123\\", path: \\"librecloud/wuji/postgres/admin_password\\", workspace_id: \\"librecloud\\", infra_id: \\"wuji\\", domain: \\"postgres\\", secret_type: \\"Database\\", encrypted_value: \\"U2FsdGVkX1...\\", -- AES-256-GCM encrypted version: 1, created_at: \\"2025-12-05T10:00:00Z\\", created_by: \\"admin\\", updated_at: \\"2025-12-05T10:00:00Z\\", updated_by: \\"admin\\", tags: [\\"production\\", \\"critical\\"], auto_rotate: true, rotation_interval_days: 30, ttl_seconds: null, -- null = no auto expiry deleted: false, metadata: { db_host: \\"db.librecloud.internal\\", db_port: 5432, db_name: \\"production_db\\", username: \\"admin\\" }\\n}\\n```plaintext ### Secret Hierarchy ```plaintext\\nlibrecloud (Workspace) ├── wuji (Infrastructure) │ ├── postgres (Domain) │ │ ├── admin_password │ │ ├── readonly_user │ │ └── replication_user │ ├── redis (Domain) │ │ └── master_password │ └── ssh (Domain) │ ├── web01_key │ └── db01_key └── web (Infrastructure) ├── api (Domain) │ ├── stripe_token │ ├── github_token │ └── sendgrid_key └── auth (Domain) ├── jwt_secret └── oauth_client_secret\\n```plaintext --- ## 🔄 Complete Workflows ### Workflow 1: Create and Rotate Database Credential ```plaintext\\n1. Admin creates credential POST /api/v1/secrets/database 2. System encrypts with KMS ├─ Generates data key ├─ Encrypts secret with data key └─ Encrypts data key with KMS master key 3. Stores in SurrealDB ├─ vault_secrets (encrypted value) ├─ vault_versions (history) └─ vault_audit (audit record) 4. System schedules auto rotation ├─ Calculates next date (30 days) └─ Creates rotation_scheduler entry 5. Every hour, background job checks ├─ Any secrets due for rotation? ├─ Yes → Generate new password ├─ Validate functionality (connect to DB) ├─ Update SurrealDB └─ Log to audit 6. Monitoring alerts ├─ If 7 days remaining → WARNING alert ├─ If 3 days remaining → CRITICAL alert └─ If expired → EXPIRED alert\\n```plaintext ### Workflow 2: Share Secret Between Workspaces ```plaintext\\n1. Admin of librecloud creates grant POST /api/v1/secrets/{path}/grant 2. Cedar verifies authorization ├─ Is user admin of source workspace? └─ Is target workspace valid? 3. Grant created and recorded ├─ Unique ID: grant-xxxxx ├─ Status: active └─ Audit: who, when, why 4. Staging workspace user accesses secret GET /api/v1/secrets/{path} 5. System verifies access ├─ Cedar: Is grant active? ├─ Cedar: Sufficient permission? ├─ Cedar: MFA if required? └─ Yes → Return decrypted secret 6. Audit records access ├─ User who accessed ├─ Source IP ├─ Exact timestamp ├─ Success/failure └─ Increment access count in grant\\n```plaintext ### Workflow 3: Access Temporal SSH Secret ```plaintext\\n1. User requests temporary SSH key POST /api/v1/secrets/ssh {ttl: \\"2h\\"} 2. Cedar authorizes (requires MFA) ├─ User has role? ├─ MFA verified? └─ TTL within limit (max 24h)? 3. Orchestrator generates key ├─ Generates SSH key pair (RSA 4096) ├─ Stores in memory (TTL-based) ├─ Logs to audit └─ Returns private key 4. User downloads key └─ Valid for 2 hours 5. Automatic expiration ├─ 2-hour timer starts ├─ TTL expires → Auto revokes ├─ Later attempts → Access denied └─ Audit: automatic revocation\\n```plaintext --- ## 📝 Practical Examples ### Example 1: Manage PostgreSQL Secrets ```bash\\n# 1. 
Create credential\\nprovisioning secrets create database postgres \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --user admin \\\\ --password \\"P@ssw0rd123!\\" \\\\ --host db.librecloud.internal \\\\ --port 5432 \\\\ --database myapp_prod # 2. List PostgreSQL secrets\\nprovisioning secrets list --workspace librecloud --domain postgres # 3. Get for connection\\nprovisioning secrets get librecloud/wuji/postgres/admin_password # 4. Share with staging team\\nprovisioning secrets grant \\\\ --secret librecloud/wuji/postgres/admin_password \\\\ --target-workspace staging \\\\ --permission read # 5. Force rotation\\nprovisioning secrets rotate librecloud/wuji/postgres/admin_password # 6. Check status\\nprovisioning secrets monitoring dashboard | grep postgres\\n```plaintext ### Example 2: Temporary SSH Access ```bash\\n# 1. Generate temporary SSH key (4 hours)\\nprovisioning secrets create ssh \\\\ --workspace librecloud \\\\ --infra wuji \\\\ --server web01 \\\\ --ttl 4h # 2. Download private key\\nprovisioning secrets get librecloud/wuji/ssh/web01_key > ~/.ssh/web01_temp # 3. Connect to server\\nchmod 600 ~/.ssh/web01_temp\\nssh -i ~/.ssh/web01_temp ubuntu@web01.librecloud.internal # 4. After 4 hours\\n# → Key revoked automatically\\n# → New SSH attempts fail\\n# → Access logged in audit\\n```plaintext ### Example 3: CI/CD Integration ```yaml\\n# GitLab CI / GitHub Actions\\njobs: deploy: script: # 1. Get DB credential - export DB_PASSWORD=$(provisioning secrets get librecloud/prod/postgres/admin_password) # 2. Get API token - export API_TOKEN=$(provisioning secrets get librecloud/app/api_token) # 3. Deploy application - docker run -e DB_PASSWORD=$DB_PASSWORD -e API_TOKEN=$API_TOKEN myapp:latest # 4. System logs access in audit # → User: ci-deploy # → Workspace: librecloud # → Secrets accessed: 2 # → Status: success\\n```plaintext --- ## 🛡️ Security ### Encryption - **At Rest**: AES-256-GCM with KMS key rotation\\n- **In Transit**: TLS 1.3\\n- **In Memory**: Automatic cleanup of sensitive variables ### Access Control - **Cedar**: All operations evaluated against policies\\n- **MFA**: Required for production secrets\\n- **Workspace Isolation**: Data separation at DB level ### Audit ```json\\n{ \\"timestamp\\": \\"2025-12-06T10:30:45Z\\", \\"user_id\\": \\"alice\\", \\"workspace\\": \\"librecloud\\", \\"action\\": \\"secrets:get\\", \\"resource\\": \\"librecloud/wuji/postgres/admin_password\\", \\"result\\": \\"success\\", \\"ip_address\\": \\"192.168.1.100\\", \\"mfa_verified\\": true, \\"cedar_policy\\": \\"prod-secret-access-mfa\\"\\n}\\n```plaintext --- ## 📊 Test Results ### All 25 Integration Tests Passing ```plaintext\\n✅ Phase 3.1: Rotation Scheduler (9 tests) - Schedule creation - Status transitions - Failure tracking ✅ Phase 3.2: Secret Sharing (8 tests) - Grant creation with permissions - Permission hierarchy - Access logging ✅ Phase 3.4: Monitoring (4 tests) - Dashboard metrics - Expiring alerts - Failed access recording ✅ Phase 5: Rotation Job Scheduler (4 tests) - Background job lifecycle - Configuration management ✅ Integration Tests (3 tests) - Multi-service workflows - End-to-end scenarios\\n```plaintext **Execution**: ```bash\\ncargo test --test secrets_phases_integration_test test result: ok. 
25 passed; 0 failed\\n```plaintext --- ## 🆘 Troubleshooting ### Problem: \\"Authorization denied by Cedar policy\\" **Cause**: User lacks permissions in policy\\n**Solution**: ```bash\\n# Check user and permission\\nprovisioning policies check $USER can access secret:librecloud/postgres/admin_password # Check roles\\nprovisioning auth whoami # Request access from admin\\nprovisioning secrets grant \\\\ --secret librecloud/wuji/postgres/admin_password \\\\ --target-workspace $WORKSPACE \\\\ --permission read\\n```plaintext ### Problem: \\"Secret not found\\" **Cause**: Typo in path or workspace doesn\'t exist\\n**Solution**: ```bash\\n# List available secrets\\nprovisioning secrets list --workspace librecloud # Check active workspace\\nprovisioning workspace active # Switch workspace if needed\\nprovisioning workspace switch librecloud\\n```plaintext ### Problem: \\"MFA required\\" **Cause**: Operation requires MFA but not verified\\n**Solution**: ```bash\\n# Check MFA status\\nprovisioning auth status # Enroll if not configured\\nprovisioning mfa totp enroll # Use MFA token on next access\\nprovisioning secrets get librecloud/wuji/postgres/admin_password --mfa-code 123456\\n```plaintext --- ## 📚 Complete Documentation - **REST API**: `/docs/api/secrets-api.md`\\n- **CLI Reference**: `provisioning secrets --help`\\n- **Cedar Policies**: `provisioning/config/cedar-policies/secrets.cedar`\\n- **Architecture**: `/docs/architecture/SECRETS_SERVICE_LAYER.md`\\n- **Security**: `/docs/user/SECRETS_SECURITY_GUIDE.md` --- ## 🎯 Next Steps (Future) 1. **Phase 7**: Web UI Dashboard for visual management\\n2. **Phase 8**: HashiCorp Vault integration\\n3. **Phase 9**: Multi-datacenter secret replication --- **Status**: ✅ Secrets Service Layer - COMPLETED AND TESTED","breadcrumbs":"Secrets Service Layer Complete » 1. Register the workspace librecloud","id":"1693","title":"1. Register the workspace librecloud"},"1694":{"body":"Comprehensive OCI (Open Container Initiative) registry deployment and management for the provisioning system. 
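For orientation, interacting with the registry is plain OCI tooling; a minimal sketch of pushing an artifact, assuming the default Zot setup on localhost:5000 and the `provisioning-extensions` namespace described below (the `myextension:0.1.0` image name is only illustrative):

```bash
# Authenticate against the local registry (htpasswd-based for Zot, see Security below)
docker login localhost:5000

# Tag and push an illustrative image into one of the default namespaces
docker tag myextension:0.1.0 localhost:5000/provisioning-extensions/myextension:0.1.0
docker push localhost:5000/provisioning-extensions/myextension:0.1.0

# Confirm the repository shows up in the catalog
curl http://localhost:5000/v2/_catalog
```

The same flow applies to Harbor or Distribution; only the host, port, and authentication differ.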
Source : provisioning/platform/oci-registry/","breadcrumbs":"OCI Registry Platform » OCI Registry Service","id":"1694","title":"OCI Registry Service"},"1695":{"body":"Zot (Recommended for Development): Lightweight, fast, OCI-native with UI Harbor (Recommended for Production): Full-featured enterprise registry Distribution (OCI Reference): Official OCI reference implementation","breadcrumbs":"OCI Registry Platform » Supported Registries","id":"1695","title":"Supported Registries"},"1696":{"body":"Multi-Registry Support : Zot, Harbor, Distribution Namespace Organization : Logical separation of artifacts Access Control : RBAC, policies, authentication Monitoring : Prometheus metrics, health checks Garbage Collection : Automatic cleanup of unused artifacts High Availability : Optional HA configurations TLS/SSL : Secure communication UI Interface : Web-based management (Zot, Harbor)","breadcrumbs":"OCI Registry Platform » Features","id":"1696","title":"Features"},"1697":{"body":"","breadcrumbs":"OCI Registry Platform » Quick Start","id":"1697","title":"Quick Start"},"1698":{"body":"cd provisioning/platform/oci-registry/zot\\ndocker-compose up -d # Initialize with namespaces and policies\\nnu ../scripts/init-registry.nu --registry-type zot # Access UI\\nopen http://localhost:5000","breadcrumbs":"OCI Registry Platform » Start Zot Registry (Default)","id":"1698","title":"Start Zot Registry (Default)"},"1699":{"body":"cd provisioning/platform/oci-registry/harbor\\ndocker-compose up -d\\nsleep 120 # Wait for services # Initialize\\nnu ../scripts/init-registry.nu --registry-type harbor --admin-password Harbor12345 # Access UI\\nopen http://localhost\\n# Login: admin / Harbor12345","breadcrumbs":"OCI Registry Platform » Start Harbor Registry","id":"1699","title":"Start Harbor Registry"},"17":{"body":"Component Minimum Recommended CPU 2 cores 4+ cores RAM 4 GB 8+ GB Storage 2 GB free 10+ GB free Network Internet connection Broadband connection","breadcrumbs":"Installation Guide » Hardware Requirements","id":"17","title":"Hardware Requirements"},"170":{"body":"Create multiple servers at once: servers = [ {hostname = \\"web-01\\", cores = 2, memory = 4096}, {hostname = \\"web-02\\", cores = 2, memory = 4096}, {hostname = \\"db-01\\", cores = 4, memory = 8192}\\n] provisioning server create --infra my-infra --servers web-01,web-02,db-01","breadcrumbs":"First Deployment » Pattern 1: Multiple Servers","id":"170","title":"Pattern 1: Multiple Servers"},"1700":{"body":"Namespace Description Public Retention provisioning-extensions Extension packages No 10 tags, 90 days provisioning-kcl KCL schemas No 20 tags, 180 days provisioning-platform Platform images No 5 tags, 30 days provisioning-test Test artifacts Yes 3 tags, 7 days","breadcrumbs":"OCI Registry Platform » Default Namespaces","id":"1700","title":"Default Namespaces"},"1701":{"body":"","breadcrumbs":"OCI Registry Platform » Management","id":"1701","title":"Management"},"1702":{"body":"# Start registry\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry start --type zot\\" # Check status\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry status --type zot\\" # View logs\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry logs --type zot --follow\\" # Health check\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry health --type zot\\" # List namespaces\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/oci_registry; 
oci-registry namespaces\\"","breadcrumbs":"OCI Registry Platform » Nushell Commands","id":"1702","title":"Nushell Commands"},"1703":{"body":"# Start\\ndocker-compose up -d # Stop\\ndocker-compose down # View logs\\ndocker-compose logs -f # Remove (including volumes)\\ndocker-compose down -v","breadcrumbs":"OCI Registry Platform » Docker Compose","id":"1703","title":"Docker Compose"},"1704":{"body":"Feature Zot Harbor Distribution Setup Simple Complex Simple UI Built-in Full-featured None Search Yes Yes No Scanning No Trivy No Replication No Yes No RBAC Basic Advanced Basic Best For Dev/CI Production Compliance","breadcrumbs":"OCI Registry Platform » Registry Comparison","id":"1704","title":"Registry Comparison"},"1705":{"body":"","breadcrumbs":"OCI Registry Platform » Security","id":"1705","title":"Security"},"1706":{"body":"Zot/Distribution (htpasswd) : htpasswd -Bc htpasswd provisioning\\ndocker login localhost:5000 Harbor (Database) : docker login localhost\\n# Username: admin / Password: Harbor12345","breadcrumbs":"OCI Registry Platform » Authentication","id":"1706","title":"Authentication"},"1707":{"body":"","breadcrumbs":"OCI Registry Platform » Monitoring","id":"1707","title":"Monitoring"},"1708":{"body":"# API check\\ncurl http://localhost:5000/v2/ # Catalog check\\ncurl http://localhost:5000/v2/_catalog","breadcrumbs":"OCI Registry Platform » Health Checks","id":"1708","title":"Health Checks"},"1709":{"body":"Zot : curl http://localhost:5000/metrics Harbor : curl http://localhost:9090/metrics","breadcrumbs":"OCI Registry Platform » Metrics","id":"1709","title":"Metrics"},"171":{"body":"Install multiple services on one server: provisioning taskserv create kubernetes,cilium,postgres --infra my-infra --servers web-01","breadcrumbs":"First Deployment » Pattern 2: Server with Multiple Task Services","id":"171","title":"Pattern 2: Server with Multiple Task Services"},"1710":{"body":"Architecture : OCI Integration User Guide : OCI Registry Guide","breadcrumbs":"OCI Registry Platform » Related Documentation","id":"1710","title":"Related Documentation"},"1711":{"body":"Version : 1.0.0 Date : 2025-10-06 Status : Production Ready","breadcrumbs":"Test Environment Guide » Test Environment Guide","id":"1711","title":"Test Environment Guide"},"1712":{"body":"The Test Environment Service provides automated containerized testing for taskservs, servers, and multi-node clusters. Built into the orchestrator, it eliminates manual Docker management and provides realistic test scenarios.","breadcrumbs":"Test Environment Guide » Overview","id":"1712","title":"Overview"},"1713":{"body":"┌─────────────────────────────────────────────────┐\\n│ Orchestrator (port 8080) │\\n│ ┌──────────────────────────────────────────┐ │\\n│ │ Test Orchestrator │ │\\n│ │ • Container Manager (Docker API) │ │\\n│ │ • Network Isolation │ │\\n│ │ • Multi-node Topologies │ │\\n│ │ • Test Execution │ │\\n│ └──────────────────────────────────────────┘ │\\n└─────────────────────────────────────────────────┘ ↓ ┌────────────────────────┐ │ Docker Containers │ │ • Isolated Networks │ │ • Resource Limits │ │ • Volume Mounts │ └────────────────────────┘\\n```plaintext ## Test Environment Types ### 1. Single Taskserv Test Test individual taskserv in isolated container. ```bash\\n# Basic test\\nprovisioning test env single kubernetes # With resource limits\\nprovisioning test env single redis --cpu 2000 --memory 4096 # Auto-start and cleanup\\nprovisioning test quick postgres\\n```plaintext ### 2. 
Server Simulation Simulate complete server with multiple taskservs. ```bash\\n# Server with taskservs\\nprovisioning test env server web-01 [containerd kubernetes cilium] # With infrastructure context\\nprovisioning test env server db-01 [postgres redis] --infra prod-stack\\n```plaintext ### 3. Cluster Topology Multi-node cluster simulation from templates. ```bash\\n# 3-node Kubernetes cluster\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start # etcd cluster\\nprovisioning test topology load etcd_cluster | test env cluster etcd\\n```plaintext ## Quick Start ### Prerequisites 1. **Docker running:** ```bash docker ps # Should work without errors Orchestrator running: cd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background","breadcrumbs":"Test Environment Guide » Architecture","id":"1713","title":"Architecture"},"1714":{"body":"# 1. Quick test (fastest)\\nprovisioning test quick kubernetes # 2. Or step-by-step\\n# Create environment\\nprovisioning test env single kubernetes --auto-start # List environments\\nprovisioning test env list # Check status\\nprovisioning test env status # View logs\\nprovisioning test env logs # Cleanup\\nprovisioning test env cleanup \\n```plaintext ## Topology Templates ### Available Templates ```bash\\n# List templates\\nprovisioning test topology list\\n```plaintext | Template | Description | Nodes |\\n|----------|-------------|-------|\\n| `kubernetes_3node` | K8s HA cluster | 1 CP + 2 workers |\\n| `kubernetes_single` | All-in-one K8s | 1 node |\\n| `etcd_cluster` | etcd cluster | 3 members |\\n| `containerd_test` | Standalone containerd | 1 node |\\n| `postgres_redis` | Database stack | 2 nodes | ### Using Templates ```bash\\n# Load and use template\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes # View template\\nprovisioning test topology load etcd_cluster\\n```plaintext ### Custom Topology Create `my-topology.toml`: ```toml\\n[my_cluster]\\nname = \\"My Custom Cluster\\"\\ncluster_type = \\"custom\\" [[my_cluster.nodes]]\\nname = \\"node-01\\"\\nrole = \\"primary\\"\\ntaskservs = [\\"postgres\\", \\"redis\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 2000\\nmemory_mb = 4096 [[my_cluster.nodes]]\\nname = \\"node-02\\"\\nrole = \\"replica\\"\\ntaskservs = [\\"postgres\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 1000\\nmemory_mb = 2048 [my_cluster.network]\\nsubnet = \\"172.30.0.0/16\\"\\n```plaintext ## Commands Reference ### Environment Management ```bash\\n# Create from config\\nprovisioning test env create # Single taskserv\\nprovisioning test env single [--cpu N] [--memory MB] # Server simulation\\nprovisioning test env server [--infra NAME] # Cluster topology\\nprovisioning test env cluster # List environments\\nprovisioning test env list # Get details\\nprovisioning test env get # Show status\\nprovisioning test env status \\n```plaintext ### Test Execution ```bash\\n# Run tests\\nprovisioning test env run [--tests [test1, test2]] # View logs\\nprovisioning test env logs # Cleanup\\nprovisioning test env cleanup \\n```plaintext ### Quick Test ```bash\\n# One-command test (create, run, cleanup)\\nprovisioning test quick [--infra NAME]\\n```plaintext ## REST API ### Create Environment ```bash\\ncurl -X POST http://localhost:9090/test/environments/create \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"config\\": { \\"type\\": \\"single_taskserv\\", \\"taskserv\\": \\"kubernetes\\", \\"base_image\\": \\"ubuntu:22.04\\", 
\\"environment\\": {}, \\"resources\\": { \\"cpu_millicores\\": 2000, \\"memory_mb\\": 4096 } }, \\"infra\\": \\"my-project\\", \\"auto_start\\": true, \\"auto_cleanup\\": false }\'\\n```plaintext ### List Environments ```bash\\ncurl http://localhost:9090/test/environments\\n```plaintext ### Run Tests ```bash\\ncurl -X POST http://localhost:9090/test/environments/{id}/run \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"tests\\": [], \\"timeout_seconds\\": 300 }\'\\n```plaintext ### Cleanup ```bash\\ncurl -X DELETE http://localhost:9090/test/environments/{id}\\n```plaintext ## Use Cases ### 1. Taskserv Development Test taskserv before deployment: ```bash\\n# Test new taskserv version\\nprovisioning test env single my-taskserv --auto-start # Check logs\\nprovisioning test env logs \\n```plaintext ### 2. Multi-Taskserv Integration Test taskserv combinations: ```bash\\n# Test kubernetes + cilium + containerd\\nprovisioning test env server k8s-test [kubernetes cilium containerd] --auto-start\\n```plaintext ### 3. Cluster Validation Test cluster configurations: ```bash\\n# Test 3-node etcd cluster\\nprovisioning test topology load etcd_cluster | test env cluster etcd --auto-start\\n```plaintext ### 4. CI/CD Integration ```yaml\\n# .gitlab-ci.yml\\ntest-taskserv: stage: test script: - provisioning test quick kubernetes - provisioning test quick redis - provisioning test quick postgres\\n```plaintext ## Advanced Features ### Resource Limits ```bash\\n# Custom CPU and memory\\nprovisioning test env single postgres \\\\ --cpu 4000 \\\\ --memory 8192\\n```plaintext ### Network Isolation Each environment gets isolated network: - Subnet: 172.20.0.0/16 (default)\\n- DNS enabled\\n- Container-to-container communication ### Auto-Cleanup ```bash\\n# Auto-cleanup after tests\\nprovisioning test env single redis --auto-start --auto-cleanup\\n```plaintext ### Multiple Environments Run tests in parallel: ```bash\\n# Create multiple environments\\nprovisioning test env single kubernetes --auto-start &\\nprovisioning test env single postgres --auto-start &\\nprovisioning test env single redis --auto-start & wait # List all\\nprovisioning test env list\\n```plaintext ## Troubleshooting ### Docker not running ```plaintext\\nError: Failed to connect to Docker\\n```plaintext **Solution:** ```bash\\n# Check Docker\\ndocker ps # Start Docker daemon\\nsudo systemctl start docker # Linux\\nopen -a Docker # macOS\\n```plaintext ### Orchestrator not running ```plaintext\\nError: Connection refused (port 8080)\\n```plaintext **Solution:** ```bash\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background\\n```plaintext ### Environment creation fails Check logs: ```bash\\nprovisioning test env logs \\n```plaintext Check Docker: ```bash\\ndocker ps -a\\ndocker logs \\n```plaintext ### Out of resources ```plaintext\\nError: Cannot allocate memory\\n```plaintext **Solution:** ```bash\\n# Cleanup old environments\\nprovisioning test env list | each {|env| provisioning test env cleanup $env.id } # Or cleanup Docker\\ndocker system prune -af\\n```plaintext ## Best Practices ### 1. Use Templates Reuse topology templates instead of recreating: ```bash\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes\\n```plaintext ### 2. Auto-Cleanup Always use auto-cleanup in CI/CD: ```bash\\nprovisioning test quick # Includes auto-cleanup\\n```plaintext ### 3. 
Resource Planning Adjust resources based on needs: - Development: 1-2 cores, 2GB RAM\\n- Integration: 2-4 cores, 4-8GB RAM\\n- Production-like: 4+ cores, 8+ GB RAM ### 4. Parallel Testing Run independent tests in parallel: ```bash\\nfor taskserv in [kubernetes postgres redis] { provisioning test quick $taskserv &\\n}\\nwait\\n```plaintext ## Configuration ### Default Settings - Base image: `ubuntu:22.04`\\n- CPU: 1000 millicores (1 core)\\n- Memory: 2048 MB (2GB)\\n- Network: 172.20.0.0/16 ### Custom Config ```bash\\n# Override defaults\\nprovisioning test env single postgres \\\\ --base-image debian:12 \\\\ --cpu 2000 \\\\ --memory 4096\\n```plaintext --- ## Related Documentation - [Test Environment API](../api/test-environment-api.md)\\n- [Topology Templates](../architecture/test-topologies.md)\\n- [Orchestrator Guide](orchestrator-guide.md)\\n- [Taskserv Development](taskserv-development.md) --- ## Version History | Version | Date | Changes |\\n|---------|------|---------|\\n| 1.0.0 | 2025-10-06 | Initial test environment service | --- **Maintained By**: Infrastructure Team","breadcrumbs":"Test Environment Guide » Basic Workflow","id":"1714","title":"Basic Workflow"},"1715":{"body":"","breadcrumbs":"Test Environment Usage » Test Environment Usage","id":"1715","title":"Test Environment Usage"},"1716":{"body":"","breadcrumbs":"Test Environment System » Test Environment Service (v3.4.0)","id":"1716","title":"Test Environment Service (v3.4.0)"},"1717":{"body":"A comprehensive containerized test environment service has been integrated into the orchestrator, enabling automated testing of taskservs, complete servers, and multi-node clusters without manual Docker management.","breadcrumbs":"Test Environment System » 🚀 Test Environment Service Completed (2025-10-06)","id":"1717","title":"🚀 Test Environment Service Completed (2025-10-06)"},"1718":{"body":"Automated Container Management : No manual Docker operations required Three Test Environment Types : Single taskserv, server simulation, multi-node clusters Multi-Node Support : Test complex topologies (Kubernetes HA, etcd clusters) Network Isolation : Each test environment gets dedicated Docker networks Resource Management : Configurable CPU, memory, and disk limits Topology Templates : Predefined cluster configurations for common scenarios Auto-Cleanup : Optional automatic cleanup after tests complete CI/CD Integration : Easy integration into automated pipelines","breadcrumbs":"Test Environment System » Key Features","id":"1718","title":"Key Features"},"1719":{"body":"","breadcrumbs":"Test Environment System » Test Environment Types","id":"1719","title":"Test Environment Types"},"172":{"body":"Deploy a complete cluster configuration: provisioning cluster create buildkit --infra my-infra","breadcrumbs":"First Deployment » Pattern 3: Complete Cluster","id":"172","title":"Pattern 3: Complete Cluster"},"1720":{"body":"Test individual taskserv in isolated container: # Quick test (create, run, cleanup)\\nprovisioning test quick kubernetes # With custom resources\\nprovisioning test env single postgres --cpu 2000 --memory 4096 --auto-start --auto-cleanup # With infrastructure context\\nprovisioning test env single redis --infra my-project\\n```plaintext ### 2. 
Server Simulation Test complete server configurations with multiple taskservs: ```bash\\n# Simulate web server\\nprovisioning test env server web-01 [containerd kubernetes cilium] --auto-start # Simulate database server\\nprovisioning test env server db-01 [postgres redis] --infra prod-stack --auto-start\\n```plaintext ### 3. Multi-Node Cluster Topology Test complex cluster configurations before deployment: ```bash\\n# 3-node Kubernetes HA cluster\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start # etcd cluster\\nprovisioning test topology load etcd_cluster | test env cluster etcd --auto-start # Single-node Kubernetes\\nprovisioning test topology load kubernetes_single | test env cluster kubernetes\\n```plaintext ## Test Environment Management ```bash\\n# List all test environments\\nprovisioning test env list # Check environment status\\nprovisioning test env status # View environment logs\\nprovisioning test env logs # Run tests in environment\\nprovisioning test env run # Cleanup environment\\nprovisioning test env cleanup \\n```plaintext ## Available Topology Templates Predefined multi-node cluster templates in `provisioning/config/test-topologies.toml`: | Template | Description | Nodes | Use Case |\\n|----------|-------------|-------|----------|\\n| `kubernetes_3node` | K8s HA cluster | 1 CP + 2 workers | Production-like testing |\\n| `kubernetes_single` | All-in-one K8s | 1 node | Development testing |\\n| `etcd_cluster` | etcd cluster | 3 members | Distributed consensus |\\n| `containerd_test` | Standalone containerd | 1 node | Container runtime |\\n| `postgres_redis` | Database stack | 2 nodes | Database integration | ## REST API Endpoints The orchestrator exposes test environment endpoints: - **Create Environment**: `POST http://localhost:9090/v1/test/environments/create`\\n- **List Environments**: `GET http://localhost:9090/v1/test/environments`\\n- **Get Environment**: `GET http://localhost:9090/v1/test/environments/{id}`\\n- **Run Tests**: `POST http://localhost:9090/v1/test/environments/{id}/run`\\n- **Cleanup**: `DELETE http://localhost:9090/v1/test/environments/{id}`\\n- **Get Logs**: `GET http://localhost:9090/v1/test/environments/{id}/logs` ## Prerequisites 1. **Docker Running**: Test environments require Docker daemon ```bash docker ps # Should work without errors Orchestrator Running : Start the orchestrator to manage test containers cd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background","breadcrumbs":"Test Environment System » 1. Single Taskserv Testing","id":"1720","title":"1. Single Taskserv Testing"},"1721":{"body":"User Command (CLI/API) ↓\\nTest Orchestrator (Rust) ↓\\nContainer Manager (bollard) ↓\\nDocker API ↓\\nIsolated Test Containers • Dedicated networks • Resource limits • Volume mounts • Multi-node support\\n```plaintext ## Configuration - **Topology Templates**: `provisioning/config/test-topologies.toml`\\n- **Default Resources**: 1000 millicores CPU, 2048 MB memory\\n- **Network**: 172.20.0.0/16 (default subnet)\\n- **Base Image**: ubuntu:22.04 (configurable) ## Use Cases 1. **Taskserv Development**: Test new taskservs before deployment\\n2. **Integration Testing**: Validate taskserv combinations\\n3. **Cluster Validation**: Test multi-node configurations\\n4. **CI/CD Integration**: Automated infrastructure testing\\n5. 
**Production Simulation**: Test production-like deployments safely ## CI/CD Integration Example ```yaml\\n# GitLab CI\\ntest-infrastructure: stage: test script: - ./scripts/start-orchestrator.nu --background - provisioning test quick kubernetes - provisioning test quick postgres - provisioning test quick redis - provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start artifacts: when: on_failure paths: - test-logs/\\n```plaintext ## Documentation Complete documentation available: - **User Guide**: [Test Environment Guide](../testing/test-environment-guide.md)\\n- **Detailed Usage**: [Test Environment Usage](../testing/test-environment-usage.md)\\n- **Orchestrator README**: [Orchestrator](../operations/orchestrator-system.md) ## Command Shortcuts Test commands are integrated into the CLI with shortcuts: - `test` or `tst` - Test command prefix\\n- `test quick ` - One-command test\\n- `test env single/server/cluster` - Create test environments\\n- `test topology load/list` - Manage topology templates","breadcrumbs":"Test Environment System » Architecture","id":"1721","title":"Architecture"},"1722":{"body":"Version : 1.0.0 Date : 2025-10-06 Status : Production Ready","breadcrumbs":"TaskServ Validation Guide » Taskserv Validation and Testing Guide","id":"1722","title":"Taskserv Validation and Testing Guide"},"1723":{"body":"The taskserv validation and testing system provides comprehensive evaluation of infrastructure services before deployment, reducing errors and increasing confidence in deployments.","breadcrumbs":"TaskServ Validation Guide » Overview","id":"1723","title":"Overview"},"1724":{"body":"","breadcrumbs":"TaskServ Validation Guide » Validation Levels","id":"1724","title":"Validation Levels"},"1725":{"body":"Validates configuration files, templates, and scripts without requiring infrastructure access. What it checks: KCL schema syntax and semantics Jinja2 template syntax Shell script syntax (with shellcheck if available) File structure and naming conventions Command: provisioning taskserv validate kubernetes --level static\\n```plaintext ### 2. Dependency Validation Checks taskserv dependencies, conflicts, and requirements. **What it checks:** - Required dependencies are available\\n- Optional dependencies status\\n- Conflicting taskservs\\n- Resource requirements (memory, CPU, disk)\\n- Health check configuration **Command:** ```bash\\nprovisioning taskserv validate kubernetes --level dependencies\\n```plaintext **Check against infrastructure:** ```bash\\nprovisioning taskserv check-deps kubernetes --infra my-project\\n```plaintext ### 3. Check Mode (Dry-Run) Enhanced check mode that performs validation and previews deployment without making changes. **What it does:** - Runs static validation\\n- Validates dependencies\\n- Previews configuration generation\\n- Lists files to be deployed\\n- Checks prerequisites (without SSH in check mode) **Command:** ```bash\\nprovisioning taskserv create kubernetes --check\\n```plaintext ### 4. Sandbox Testing Tests taskserv in isolated container environment before actual deployment. 
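Conceptually this is close to spinning up a throwaway container by hand; a rough sketch of what such a sandbox pass might look like with plain Docker is shown below (the `./install-kubernetes.sh` path is illustrative, and the real `provisioning taskserv test` command automates all of this):

```bash
# Start a disposable sandbox container from the default base image
docker run -d --name taskserv-sandbox ubuntu:22.04 sleep infinity

# Rough equivalent of the "package prerequisites" check listed below
docker exec taskserv-sandbox apt-get update

# Copy an install script in and syntax-check it (path is illustrative)
docker cp ./install-kubernetes.sh taskserv-sandbox:/tmp/install-kubernetes.sh
docker exec taskserv-sandbox bash -n /tmp/install-kubernetes.sh

# Tear the sandbox down when done
docker rm -f taskserv-sandbox
```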
**What it tests:** - Package prerequisites\\n- Configuration validity\\n- Script execution\\n- Health check simulation **Command:** ```bash\\n# Test with Docker\\nprovisioning taskserv test kubernetes --runtime docker # Test with Podman\\nprovisioning taskserv test kubernetes --runtime podman # Keep container for inspection\\nprovisioning taskserv test kubernetes --runtime docker --keep\\n```plaintext --- ## Complete Validation Workflow ### Recommended Validation Sequence ```bash\\n# 1. Static validation (fastest, no infrastructure needed)\\nprovisioning taskserv validate kubernetes --level static -v # 2. Dependency validation\\nprovisioning taskserv check-deps kubernetes --infra my-project # 3. Check mode (dry-run with full validation)\\nprovisioning taskserv create kubernetes --check -v # 4. Sandbox testing (optional, requires Docker/Podman)\\nprovisioning taskserv test kubernetes --runtime docker # 5. Actual deployment (after all validations pass)\\nprovisioning taskserv create kubernetes\\n```plaintext ### Quick Validation (All Levels) ```bash\\n# Run all validation levels\\nprovisioning taskserv validate kubernetes --level all -v\\n```plaintext --- ## Validation Commands Reference ### `provisioning taskserv validate ` Multi-level validation framework. **Options:** - `--level ` - Validation level: static, dependencies, health, all (default: all)\\n- `--infra ` - Infrastructure context\\n- `--settings ` - Settings file path\\n- `--verbose` - Verbose output\\n- `--out ` - Output format: json, yaml, text **Examples:** ```bash\\n# Complete validation\\nprovisioning taskserv validate kubernetes # Only static validation\\nprovisioning taskserv validate kubernetes --level static # With verbose output\\nprovisioning taskserv validate kubernetes -v # JSON output\\nprovisioning taskserv validate kubernetes --out json\\n```plaintext ### `provisioning taskserv check-deps ` Check dependencies against infrastructure. **Options:** - `--infra ` - Infrastructure context\\n- `--settings ` - Settings file path\\n- `--verbose` - Verbose output **Examples:** ```bash\\n# Check dependencies\\nprovisioning taskserv check-deps kubernetes --infra my-project # Verbose output\\nprovisioning taskserv check-deps kubernetes --infra my-project -v\\n```plaintext ### `provisioning taskserv create --check` Enhanced check mode with full validation and preview. **Options:** - `--check` - Enable check mode (no actual deployment)\\n- `--verbose` - Verbose output\\n- All standard create options **Examples:** ```bash\\n# Check mode with verbose output\\nprovisioning taskserv create kubernetes --check -v # Check specific server\\nprovisioning taskserv create kubernetes server-01 --check\\n```plaintext ### `provisioning taskserv test ` Sandbox testing in isolated environment. **Options:** - `--runtime ` - Runtime: docker, podman, native (default: docker)\\n- `--infra ` - Infrastructure context\\n- `--settings ` - Settings file path\\n- `--keep` - Keep container after test\\n- `--verbose` - Verbose output **Examples:** ```bash\\n# Test with Docker\\nprovisioning taskserv test kubernetes --runtime docker # Test with Podman\\nprovisioning taskserv test kubernetes --runtime podman # Keep container for debugging\\nprovisioning taskserv test kubernetes --keep -v # Connect to kept container\\ndocker exec -it taskserv-test-kubernetes bash\\n```plaintext --- ## Validation Output ### Static Validation ```plaintext\\nTaskserv Validation\\nTaskserv: kubernetes\\nLevel: static Validating KCL schemas for kubernetes... 
Checking kubernetes.k... ✓ Valid Checking version.k... ✓ Valid Checking dependencies.k... ✓ Valid Validating templates for kubernetes... Checking env-kubernetes.j2... ✓ Basic syntax OK Checking install-kubernetes.sh... ✓ Basic syntax OK Validation Summary\\n✓ kcl: 0 errors, 0 warnings\\n✓ templates: 0 errors, 0 warnings\\n✓ scripts: 0 errors, 0 warnings Overall Status\\n✓ VALID - 0 warnings\\n```plaintext ### Dependency Validation ```plaintext\\nDependency Validation Report\\nTaskserv: kubernetes Status: VALID Required Dependencies: • containerd • etcd • os Optional Dependencies: • cilium • helm Conflicts: • docker • podman\\n```plaintext ### Check Mode Output ```plaintext\\nCheck Mode: kubernetes on server-01 → Running static validation... ✓ Static validation passed → Checking dependencies... ✓ Dependencies OK Required: containerd, etcd, os → Previewing configuration generation... ✓ Configuration preview generated Files to process: 15 → Checking prerequisites... ℹ Prerequisite checks (preview mode): ⊘ Server accessibility: Check mode - SSH not tested ℹ Directory /tmp: Would verify directory exists ℹ Command bash: Would verify command is available Check Mode Summary\\n✓ All validations passed 💡 Taskserv can be deployed with: provisioning taskserv create kubernetes\\n```plaintext ### Test Output ```plaintext\\nTaskserv Sandbox Testing\\nTaskserv: kubernetes\\nRuntime: docker → Running pre-test validation...\\n✓ Validation passed → Preparing sandbox environment... Using base image: ubuntu:22.04\\n✓ Sandbox prepared: a1b2c3d4e5f6 → Running tests in sandbox... Test 1: Package prerequisites... Test 2: Configuration validity... Test 3: Script execution... Test 4: Health check simulation... Test Summary\\nTotal tests: 4\\nPassed: 4\\nFailed: 0\\nSkipped: 0 Detailed Results: ✓ Package prerequisites: Package manager accessible ✓ Configuration validity: 3 configuration files validated ✓ Script execution: 2 scripts validated ✓ Health check: Health check configuration valid: http://localhost:6443/healthz ✓ All tests passed\\n```plaintext --- ## Integration with CI/CD ### GitLab CI Example ```yaml\\nvalidate-taskservs: stage: validate script: - provisioning taskserv validate kubernetes --level all --out json - provisioning taskserv check-deps kubernetes --infra production test-taskservs: stage: test script: - provisioning taskserv test kubernetes --runtime docker dependencies: - validate-taskservs deploy-taskservs: stage: deploy script: - provisioning taskserv create kubernetes dependencies: - test-taskservs only: - main\\n```plaintext ### GitHub Actions Example ```yaml\\nname: Taskserv Validation on: [push, pull_request] jobs: validate: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Validate Taskservs run: | provisioning taskserv validate kubernetes --level all -v - name: Check Dependencies run: | provisioning taskserv check-deps kubernetes --infra production - name: Test in Sandbox run: | provisioning taskserv test kubernetes --runtime docker\\n```plaintext --- ## Troubleshooting ### shellcheck not found If shellcheck is not available, script validation will be skipped with a warning. **Install shellcheck:** ```bash\\n# macOS\\nbrew install shellcheck # Ubuntu/Debian\\napt install shellcheck # Fedora\\ndnf install shellcheck\\n```plaintext ### Docker/Podman not available Sandbox testing requires Docker or Podman. 
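In CI wrappers it can help to detect whichever runtime is present instead of hard-coding one; a small sketch (the fallback order is only an example) is:

```bash
# Pick an available container runtime, falling back to native mode
if command -v docker >/dev/null && docker ps >/dev/null 2>&1; then
  runtime=docker
elif command -v podman >/dev/null && podman ps >/dev/null 2>&1; then
  runtime=podman
else
  runtime=native   # limited testing, see below
fi

provisioning taskserv test kubernetes --runtime "$runtime"
```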
**Check runtime:** ```bash\\n# Docker\\ndocker ps # Podman\\npodman ps # Use native mode (limited testing)\\nprovisioning taskserv test kubernetes --runtime native\\n```plaintext ### KCL validation errors KCL schema errors indicate syntax or semantic problems. **Common fixes:** - Check schema syntax in `.k` files\\n- Validate imports and dependencies\\n- Run `kcl fmt` to format files\\n- Check `kcl.mod` dependencies ### Dependency conflicts If conflicting taskservs are detected: - Remove conflicting taskserv first\\n- Check infrastructure configuration\\n- Review dependency declarations in `dependencies.k` --- ## Advanced Usage ### Custom Validation Scripts You can create custom validation scripts by extending the validation framework: ```nushell\\n# custom_validation.nu\\nuse provisioning/core/nulib/taskservs/validate.nu * def custom-validate [taskserv: string] { # Custom validation logic let result = (validate-kcl-schemas $taskserv --verbose=true) # Additional custom checks # ... return $result\\n}\\n```plaintext ### Batch Validation Validate multiple taskservs: ```bash\\n# Validate all taskservs in infrastructure\\nfor taskserv in (provisioning taskserv list | get name) { provisioning taskserv validate $taskserv\\n}\\n```plaintext ### Automated Testing Create test suite for all taskservs: ```bash\\n#!/usr/bin/env nu let taskservs = [\\"kubernetes\\", \\"containerd\\", \\"cilium\\", \\"etcd\\"] for ts in $taskservs { print $\\"Testing ($ts)...\\" provisioning taskserv test $ts --runtime docker\\n}\\n```plaintext --- ## Best Practices ### Before Deployment 1. **Always validate** before deploying to production\\n2. **Run check mode** to preview changes\\n3. **Test in sandbox** for critical services\\n4. **Check dependencies** in infrastructure context ### During Development 1. **Validate frequently** during taskserv development\\n2. **Use verbose mode** to understand validation details\\n3. **Fix warnings** even if validation passes\\n4. **Keep containers** for debugging test failures ### In CI/CD 1. **Fail fast** on validation errors\\n2. **Require all tests pass** before merge\\n3. **Generate reports** in JSON format for analysis\\n4. **Archive test results** for audit trail --- ## Related Documentation - [Taskserv Development Guide](taskserv-development-guide.md)\\n- KCL Schema Reference\\n- [Dependency Management](dependency-management.md)\\n- [CI/CD Integration](cicd-integration.md) --- ## Version History | Version | Date | Changes |\\n|---------|------|---------|\\n| 1.0.0 | 2025-10-06 | Initial validation and testing guide | --- **Maintained By**: Infrastructure Team\\n**Review Cycle**: Quarterly","breadcrumbs":"TaskServ Validation Guide » 1. Static Validation","id":"1725","title":"1. 
Static Validation"},"1726":{"body":"This comprehensive troubleshooting guide helps you diagnose and resolve common issues with Infrastructure Automation.","breadcrumbs":"Troubleshooting Guide » Troubleshooting Guide","id":"1726","title":"Troubleshooting Guide"},"1727":{"body":"Common issues and their solutions Diagnostic commands and techniques Error message interpretation Performance optimization Recovery procedures Prevention strategies","breadcrumbs":"Troubleshooting Guide » What You\'ll Learn","id":"1727","title":"What You\'ll Learn"},"1728":{"body":"","breadcrumbs":"Troubleshooting Guide » General Troubleshooting Approach","id":"1728","title":"General Troubleshooting Approach"},"1729":{"body":"# Check overall system status\\nprovisioning env\\nprovisioning validate config # Check specific component status\\nprovisioning show servers --infra my-infra\\nprovisioning taskserv list --infra my-infra --installed\\n```plaintext ### 2. Gather Information ```bash\\n# Enable debug mode for detailed output\\nprovisioning --debug # Check logs and errors\\nprovisioning show logs --infra my-infra\\n```plaintext ### 3. Use Diagnostic Commands ```bash\\n# Validate configuration\\nprovisioning validate config --detailed # Test connectivity\\nprovisioning provider test aws\\nprovisioning network test --infra my-infra\\n```plaintext ## Installation and Setup Issues ### Issue: Installation Fails **Symptoms:** - Installation script errors\\n- Missing dependencies\\n- Permission denied errors **Diagnosis:** ```bash\\n# Check system requirements\\nuname -a\\ndf -h\\nwhoami # Check permissions\\nls -la /usr/local/\\nsudo -l\\n```plaintext **Solutions:** #### Permission Issues ```bash\\n# Run installer with sudo\\nsudo ./install-provisioning # Or install to user directory\\n./install-provisioning --prefix=$HOME/provisioning\\nexport PATH=\\"$HOME/provisioning/bin:$PATH\\"\\n```plaintext #### Missing Dependencies ```bash\\n# Ubuntu/Debian\\nsudo apt update\\nsudo apt install -y curl wget tar build-essential # RHEL/CentOS\\nsudo dnf install -y curl wget tar gcc make\\n```plaintext #### Architecture Issues ```bash\\n# Check architecture\\nuname -m # Download correct architecture package\\n# x86_64: Intel/AMD 64-bit\\n# arm64: ARM 64-bit (Apple Silicon)\\nwget https://releases.example.com/provisioning-linux-x86_64.tar.gz\\n```plaintext ### Issue: Command Not Found **Symptoms:** ```plaintext\\nbash: provisioning: command not found\\n```plaintext **Diagnosis:** ```bash\\n# Check if provisioning is installed\\nwhich provisioning\\nls -la /usr/local/bin/provisioning # Check PATH\\necho $PATH\\n```plaintext **Solutions:** ```bash\\n# Add to PATH\\nexport PATH=\\"/usr/local/bin:$PATH\\" # Make permanent (add to shell profile)\\necho \'export PATH=\\"/usr/local/bin:$PATH\\"\' >> ~/.bashrc\\nsource ~/.bashrc # Create symlink if missing\\nsudo ln -sf /usr/local/provisioning/core/nulib/provisioning /usr/local/bin/provisioning\\n```plaintext ### Issue: Nushell Plugin Errors **Symptoms:** ```plaintext\\nPlugin not found: nu_plugin_kcl\\nPlugin registration failed\\n```plaintext **Diagnosis:** ```bash\\n# Check Nushell version\\nnu --version # Check KCL installation (required for nu_plugin_kcl)\\nkcl version # Check plugin registration\\nnu -c \\"version | get installed_plugins\\"\\n```plaintext **Solutions:** ```bash\\n# Install KCL CLI (required for nu_plugin_kcl)\\n# Download from: https://github.com/kcl-lang/cli/releases # Re-register plugins\\nnu -c \\"plugin add /usr/local/provisioning/plugins/nu_plugin_kcl\\"\\nnu -c 
\\"plugin add /usr/local/provisioning/plugins/nu_plugin_tera\\" # Restart Nushell after plugin registration\\n```plaintext ## Configuration Issues ### Issue: Configuration Not Found **Symptoms:** ```plaintext\\nConfiguration file not found\\nFailed to load configuration\\n```plaintext **Diagnosis:** ```bash\\n# Check configuration file locations\\nprovisioning env | grep config # Check if files exist\\nls -la ~/.config/provisioning/\\nls -la /usr/local/provisioning/config.defaults.toml\\n```plaintext **Solutions:** ```bash\\n# Initialize user configuration\\nprovisioning init config # Create missing directories\\nmkdir -p ~/.config/provisioning # Copy template\\ncp /usr/local/provisioning/config-examples/config.user.toml ~/.config/provisioning/config.toml # Verify configuration\\nprovisioning validate config\\n```plaintext ### Issue: Configuration Validation Errors **Symptoms:** ```plaintext\\nConfiguration validation failed\\nInvalid configuration value\\nMissing required field\\n```plaintext **Diagnosis:** ```bash\\n# Detailed validation\\nprovisioning validate config --detailed # Check specific sections\\nprovisioning config show --section paths\\nprovisioning config show --section providers\\n```plaintext **Solutions:** #### Path Configuration Issues ```bash\\n# Check base path exists\\nls -la /path/to/provisioning # Update configuration\\nnano ~/.config/provisioning/config.toml # Fix paths section\\n[paths]\\nbase = \\"/correct/path/to/provisioning\\"\\n```plaintext #### Provider Configuration Issues ```bash\\n# Test provider connectivity\\nprovisioning provider test aws # Check credentials\\naws configure list # For AWS\\nupcloud-cli config # For UpCloud # Update provider configuration\\n[providers.aws]\\ninterface = \\"CLI\\" # or \\"API\\"\\n```plaintext ### Issue: Interpolation Failures **Symptoms:** ```plaintext\\nInterpolation pattern not resolved: {{env.VARIABLE}}\\nTemplate rendering failed\\n```plaintext **Diagnosis:** ```bash\\n# Test interpolation\\nprovisioning validate interpolation test # Check environment variables\\nenv | grep VARIABLE # Debug interpolation\\nprovisioning --debug validate interpolation validate\\n```plaintext **Solutions:** ```bash\\n# Set missing environment variables\\nexport MISSING_VARIABLE=\\"value\\" # Use fallback values in configuration\\nconfig_value = \\"{{env.VARIABLE || \'default_value\'}}\\" # Check interpolation syntax\\n# Correct: {{env.HOME}}\\n# Incorrect: ${HOME} or $HOME\\n```plaintext ## Server Management Issues ### Issue: Server Creation Fails **Symptoms:** ```plaintext\\nFailed to create server\\nProvider API error\\nInsufficient quota\\n```plaintext **Diagnosis:** ```bash\\n# Check provider status\\nprovisioning provider status aws # Test connectivity\\nping api.provider.com\\ncurl -I https://api.provider.com # Check quota\\nprovisioning provider quota --infra my-infra # Debug server creation\\nprovisioning --debug server create web-01 --infra my-infra --check\\n```plaintext **Solutions:** #### API Authentication Issues ```bash\\n# AWS\\naws configure list\\naws sts get-caller-identity # UpCloud\\nupcloud-cli account show # Update credentials\\naws configure # For AWS\\nexport UPCLOUD_USERNAME=\\"your-username\\"\\nexport UPCLOUD_PASSWORD=\\"your-password\\"\\n```plaintext #### Quota/Limit Issues ```bash\\n# Check current usage\\nprovisioning show costs --infra my-infra # Request quota increase from provider\\n# Or reduce resource requirements # Use smaller instance types\\n# Reduce number of servers\\n```plaintext #### 
Network/Connectivity Issues ```bash\\n# Test network connectivity\\ncurl -v https://api.aws.amazon.com\\ncurl -v https://api.upcloud.com # Check DNS resolution\\nnslookup api.aws.amazon.com # Check firewall rules\\n# Ensure outbound HTTPS (port 443) is allowed\\n```plaintext ### Issue: SSH Access Fails **Symptoms:** ```plaintext\\nConnection refused\\nPermission denied\\nHost key verification failed\\n```plaintext **Diagnosis:** ```bash\\n# Check server status\\nprovisioning server list --infra my-infra # Test SSH manually\\nssh -v user@server-ip # Check SSH configuration\\nprovisioning show servers web-01 --infra my-infra\\n```plaintext **Solutions:** #### Connection Issues ```bash\\n# Wait for server to be fully ready\\nprovisioning server list --infra my-infra --status # Check security groups/firewall\\n# Ensure SSH (port 22) is allowed # Use correct IP address\\nprovisioning show servers web-01 --infra my-infra | grep ip\\n```plaintext #### Authentication Issues ```bash\\n# Check SSH key\\nls -la ~/.ssh/\\nssh-add -l # Generate new key if needed\\nssh-keygen -t ed25519 -f ~/.ssh/provisioning_key # Use specific key\\nprovisioning server ssh web-01 --key ~/.ssh/provisioning_key --infra my-infra\\n```plaintext #### Host Key Issues ```bash\\n# Remove old host key\\nssh-keygen -R server-ip # Accept new host key\\nssh -o StrictHostKeyChecking=accept-new user@server-ip\\n```plaintext ## Task Service Issues ### Issue: Service Installation Fails **Symptoms:** ```plaintext\\nService installation failed\\nPackage not found\\nDependency conflicts\\n```plaintext **Diagnosis:** ```bash\\n# Check service prerequisites\\nprovisioning taskserv check kubernetes --infra my-infra # Debug installation\\nprovisioning --debug taskserv create kubernetes --infra my-infra --check # Check server resources\\nprovisioning server ssh web-01 --command \\"free -h && df -h\\" --infra my-infra\\n```plaintext **Solutions:** #### Resource Issues ```bash\\n# Check available resources\\nprovisioning server ssh web-01 --command \\" echo \'Memory:\' && free -h echo \'Disk:\' && df -h echo \'CPU:\' && nproc\\n\\" --infra my-infra # Upgrade server if needed\\nprovisioning server resize web-01 --plan larger-plan --infra my-infra\\n```plaintext #### Package Repository Issues ```bash\\n# Update package lists\\nprovisioning server ssh web-01 --command \\" sudo apt update && sudo apt upgrade -y\\n\\" --infra my-infra # Check repository connectivity\\nprovisioning server ssh web-01 --command \\" curl -I https://download.docker.com/linux/ubuntu/\\n\\" --infra my-infra\\n```plaintext #### Dependency Issues ```bash\\n# Install missing dependencies\\nprovisioning taskserv create containerd --infra my-infra # Then install dependent service\\nprovisioning taskserv create kubernetes --infra my-infra\\n```plaintext ### Issue: Service Not Running **Symptoms:** ```plaintext\\nService status: failed\\nService not responding\\nHealth check failures\\n```plaintext **Diagnosis:** ```bash\\n# Check service status\\nprovisioning taskserv status kubernetes --infra my-infra # Check service logs\\nprovisioning taskserv logs kubernetes --infra my-infra # SSH and check manually\\nprovisioning server ssh web-01 --command \\" sudo systemctl status kubernetes sudo journalctl -u kubernetes --no-pager -n 50\\n\\" --infra my-infra\\n```plaintext **Solutions:** #### Configuration Issues ```bash\\n# Reconfigure service\\nprovisioning taskserv configure kubernetes --infra my-infra # Reset to defaults\\nprovisioning taskserv reset kubernetes --infra 
my-infra\\n```plaintext #### Port Conflicts ```bash\\n# Check port usage\\nprovisioning server ssh web-01 --command \\" sudo netstat -tulpn | grep :6443 sudo ss -tulpn | grep :6443\\n\\" --infra my-infra # Change port configuration or stop conflicting service\\n```plaintext #### Permission Issues ```bash\\n# Fix permissions\\nprovisioning server ssh web-01 --command \\" sudo chown -R kubernetes:kubernetes /var/lib/kubernetes sudo chmod 600 /etc/kubernetes/admin.conf\\n\\" --infra my-infra\\n```plaintext ## Cluster Management Issues ### Issue: Cluster Deployment Fails **Symptoms:** ```plaintext\\nCluster deployment failed\\nPod creation errors\\nService unavailable\\n```plaintext **Diagnosis:** ```bash\\n# Check cluster status\\nprovisioning cluster status web-cluster --infra my-infra # Check Kubernetes cluster\\nprovisioning server ssh master-01 --command \\" kubectl get nodes kubectl get pods --all-namespaces\\n\\" --infra my-infra # Check cluster logs\\nprovisioning cluster logs web-cluster --infra my-infra\\n```plaintext **Solutions:** #### Node Issues ```bash\\n# Check node status\\nprovisioning server ssh master-01 --command \\" kubectl describe nodes\\n\\" --infra my-infra # Drain and rejoin problematic nodes\\nprovisioning server ssh master-01 --command \\" kubectl drain worker-01 --ignore-daemonsets kubectl delete node worker-01\\n\\" --infra my-infra # Rejoin node\\nprovisioning taskserv configure kubernetes --infra my-infra --servers worker-01\\n```plaintext #### Resource Constraints ```bash\\n# Check resource usage\\nprovisioning server ssh master-01 --command \\" kubectl top nodes kubectl top pods --all-namespaces\\n\\" --infra my-infra # Scale down or add more nodes\\nprovisioning cluster scale web-cluster --replicas 3 --infra my-infra\\nprovisioning server create worker-04 --infra my-infra\\n```plaintext #### Network Issues ```bash\\n# Check network plugin\\nprovisioning server ssh master-01 --command \\" kubectl get pods -n kube-system | grep cilium\\n\\" --infra my-infra # Restart network plugin\\nprovisioning taskserv restart cilium --infra my-infra\\n```plaintext ## Performance Issues ### Issue: Slow Operations **Symptoms:** - Commands take very long to complete\\n- Timeouts during operations\\n- High CPU/memory usage **Diagnosis:** ```bash\\n# Check system resources\\ntop\\nhtop\\nfree -h\\ndf -h # Check network latency\\nping api.aws.amazon.com\\ntraceroute api.aws.amazon.com # Profile command execution\\ntime provisioning server list --infra my-infra\\n```plaintext **Solutions:** #### Local System Issues ```bash\\n# Close unnecessary applications\\n# Upgrade system resources\\n# Use SSD storage if available # Increase timeout values\\nexport PROVISIONING_TIMEOUT=600 # 10 minutes\\n```plaintext #### Network Issues ```bash\\n# Use region closer to your location\\n[providers.aws]\\nregion = \\"us-west-1\\" # Closer region # Enable connection pooling/caching\\n[cache]\\nenabled = true\\n```plaintext #### Large Infrastructure Issues ```bash\\n# Use parallel operations\\nprovisioning server create --infra my-infra --parallel 4 # Filter results\\nprovisioning server list --infra my-infra --filter \\"status == \'running\'\\"\\n```plaintext ### Issue: High Memory Usage **Symptoms:** - System becomes unresponsive\\n- Out of memory errors\\n- Swap usage high **Diagnosis:** ```bash\\n# Check memory usage\\nfree -h\\nps aux --sort=-%mem | head # Check for memory leaks\\nvalgrind provisioning server list --infra my-infra\\n```plaintext **Solutions:** ```bash\\n# Increase system 
memory\\n# Close other applications\\n# Use streaming operations for large datasets # Enable garbage collection\\nexport PROVISIONING_GC_ENABLED=true # Reduce concurrent operations\\nexport PROVISIONING_MAX_PARALLEL=2\\n```plaintext ## Network and Connectivity Issues ### Issue: API Connectivity Problems **Symptoms:** ```plaintext\\nConnection timeout\\nDNS resolution failed\\nSSL certificate errors\\n```plaintext **Diagnosis:** ```bash\\n# Test basic connectivity\\nping 8.8.8.8\\ncurl -I https://api.aws.amazon.com\\nnslookup api.upcloud.com # Check SSL certificates\\nopenssl s_client -connect api.aws.amazon.com:443 -servername api.aws.amazon.com\\n```plaintext **Solutions:** #### DNS Issues ```bash\\n# Use alternative DNS\\necho \'nameserver 8.8.8.8\' | sudo tee /etc/resolv.conf # Clear DNS cache\\nsudo systemctl restart systemd-resolved # Ubuntu\\nsudo dscacheutil -flushcache # macOS\\n```plaintext #### Proxy/Firewall Issues ```bash\\n# Configure proxy if needed\\nexport HTTP_PROXY=http://proxy.company.com:9090\\nexport HTTPS_PROXY=http://proxy.company.com:9090 # Check firewall rules\\nsudo ufw status # Ubuntu\\nsudo firewall-cmd --list-all # RHEL/CentOS\\n```plaintext #### Certificate Issues ```bash\\n# Update CA certificates\\nsudo apt update && sudo apt install ca-certificates # Ubuntu\\nbrew install ca-certificates # macOS # Skip SSL verification (temporary)\\nexport PROVISIONING_SKIP_SSL_VERIFY=true\\n```plaintext ## Security and Encryption Issues ### Issue: SOPS Decryption Fails **Symptoms:** ```plaintext\\nSOPS decryption failed\\nAge key not found\\nInvalid key format\\n```plaintext **Diagnosis:** ```bash\\n# Check SOPS configuration\\nprovisioning sops config # Test SOPS manually\\nsops -d encrypted-file.k # Check Age keys\\nls -la ~/.config/sops/age/keys.txt\\nage-keygen -y ~/.config/sops/age/keys.txt\\n```plaintext **Solutions:** #### Missing Keys ```bash\\n# Generate new Age key\\nage-keygen -o ~/.config/sops/age/keys.txt # Update SOPS configuration\\nprovisioning sops config --key-file ~/.config/sops/age/keys.txt\\n```plaintext #### Key Permissions ```bash\\n# Fix key file permissions\\nchmod 600 ~/.config/sops/age/keys.txt\\nchown $(whoami) ~/.config/sops/age/keys.txt\\n```plaintext #### Configuration Issues ```bash\\n# Update SOPS configuration in ~/.config/provisioning/config.toml\\n[sops]\\nuse_sops = true\\nkey_search_paths = [ \\"~/.config/sops/age/keys.txt\\", \\"/path/to/your/key.txt\\"\\n]\\n```plaintext ### Issue: Access Denied Errors **Symptoms:** ```plaintext\\nPermission denied\\nAccess denied\\nInsufficient privileges\\n```plaintext **Diagnosis:** ```bash\\n# Check user permissions\\nid\\ngroups # Check file permissions\\nls -la ~/.config/provisioning/\\nls -la /usr/local/provisioning/ # Test with sudo\\nsudo provisioning env\\n```plaintext **Solutions:** ```bash\\n# Fix file ownership\\nsudo chown -R $(whoami):$(whoami) ~/.config/provisioning/ # Fix permissions\\nchmod -R 755 ~/.config/provisioning/\\nchmod 600 ~/.config/provisioning/config.toml # Add user to required groups\\nsudo usermod -a -G docker $(whoami) # For Docker access\\n```plaintext ## Data and Storage Issues ### Issue: Disk Space Problems **Symptoms:** ```plaintext\\nNo space left on device\\nWrite failed\\nDisk full\\n```plaintext **Diagnosis:** ```bash\\n# Check disk usage\\ndf -h\\ndu -sh ~/.config/provisioning/\\ndu -sh /usr/local/provisioning/ # Find large files\\nfind /usr/local/provisioning -type f -size +100M\\n```plaintext **Solutions:** ```bash\\n# Clean up cache files\\nrm -rf 
~/.config/provisioning/cache/*\\nrm -rf /usr/local/provisioning/.cache/* # Clean up logs\\nfind /usr/local/provisioning -name \\"*.log\\" -mtime +30 -delete # Clean up temporary files\\nrm -rf /tmp/provisioning-* # Compress old backups\\ngzip ~/.config/provisioning/backups/*.yaml\\n```plaintext ## Recovery Procedures ### Configuration Recovery ```bash\\n# Restore from backup\\nprovisioning config restore --backup latest # Reset to defaults\\nprovisioning config reset # Recreate configuration\\nprovisioning init config --force\\n```plaintext ### Infrastructure Recovery ```bash\\n# Check infrastructure status\\nprovisioning show servers --infra my-infra # Recover failed servers\\nprovisioning server create failed-server --infra my-infra # Restore from backup\\nprovisioning restore --backup latest --infra my-infra\\n```plaintext ### Service Recovery ```bash\\n# Restart failed services\\nprovisioning taskserv restart kubernetes --infra my-infra # Reinstall corrupted services\\nprovisioning taskserv delete kubernetes --infra my-infra\\nprovisioning taskserv create kubernetes --infra my-infra\\n```plaintext ## Prevention Strategies ### Regular Maintenance ```bash\\n# Weekly maintenance script\\n#!/bin/bash # Update system\\nprovisioning update --check # Validate configuration\\nprovisioning validate config # Check for service updates\\nprovisioning taskserv check-updates # Clean up old files\\nprovisioning cleanup --older-than 30d # Create backup\\nprovisioning backup create --name \\"weekly-$(date +%Y%m%d)\\"\\n```plaintext ### Monitoring Setup ```bash\\n# Set up health monitoring\\n#!/bin/bash # Check system health every hour\\n0 * * * * /usr/local/bin/provisioning health check || echo \\"Health check failed\\" | mail -s \\"Provisioning Alert\\" admin@company.com # Weekly cost reports\\n0 9 * * 1 /usr/local/bin/provisioning show costs --all | mail -s \\"Weekly Cost Report\\" finance@company.com\\n```plaintext ### Best Practices 1. **Configuration Management** - Version control all configuration files - Use check mode before applying changes - Regular validation and testing 2. **Security** - Regular key rotation - Principle of least privilege - Audit logs review 3. **Backup Strategy** - Automated daily backups - Test restore procedures - Off-site backup storage 4. **Documentation** - Document custom configurations - Keep troubleshooting logs - Share knowledge with team ## Getting Additional Help ### Debug Information Collection ```bash\\n#!/bin/bash\\n# Collect debug information echo \\"Collecting provisioning debug information...\\" mkdir -p /tmp/provisioning-debug\\ncd /tmp/provisioning-debug # System information\\nuname -a > system-info.txt\\nfree -h >> system-info.txt\\ndf -h >> system-info.txt # Provisioning information\\nprovisioning --version > provisioning-info.txt\\nprovisioning env >> provisioning-info.txt\\nprovisioning validate config --detailed > config-validation.txt 2>&1 # Configuration files\\ncp ~/.config/provisioning/config.toml user-config.toml 2>/dev/null || echo \\"No user config\\" > user-config.toml # Logs\\nprovisioning show logs > system-logs.txt 2>&1 # Create archive\\ncd /tmp\\ntar czf provisioning-debug-$(date +%Y%m%d_%H%M%S).tar.gz provisioning-debug/ echo \\"Debug information collected in: provisioning-debug-*.tar.gz\\"\\n```plaintext ### Support Channels 1. 
**Built-in Help** ```bash provisioning help provisioning help Documentation User guides in docs/user/ CLI reference: docs/user/cli-reference.md Configuration guide: docs/user/configuration.md Community Resources Project repository issues Community forums Documentation wiki Enterprise Support Professional services Priority support Custom development Remember: When reporting issues, always include the debug information collected above and specific error messages.","breadcrumbs":"Troubleshooting Guide » 1. Identify the Problem","id":"1729","title":"1. Identify the Problem"},"173":{"body":"The typical deployment workflow: # 1. Initialize workspace\\nprovisioning workspace init production # 2. Generate infrastructure\\nprovisioning generate infra --new prod-infra # 3. Configure (edit settings.k)\\n$EDITOR workspace/infra/prod-infra/settings.k # 4. Validate configuration\\nprovisioning validate config --infra prod-infra # 5. Create servers (check mode)\\nprovisioning server create --infra prod-infra --check # 6. Create servers (real)\\nprovisioning server create --infra prod-infra # 7. Install task services\\nprovisioning taskserv create kubernetes --infra prod-infra --wait # 8. Deploy cluster (if needed)\\nprovisioning cluster create my-cluster --infra prod-infra # 9. Verify\\nprovisioning server list\\nprovisioning taskserv list","breadcrumbs":"First Deployment » Deployment Workflow","id":"173","title":"Deployment Workflow"},"1730":{"body":"Version : 3.5.0 Last Updated : 2025-10-09 Estimated Time : 30-60 minutes Difficulty : Beginner to Intermediate","breadcrumbs":"From Scratch » Complete Deployment Guide: From Scratch to Production","id":"1730","title":"Complete Deployment Guide: From Scratch to Production"},"1731":{"body":"Prerequisites Step 1: Install Nushell Step 2: Install Nushell Plugins (Recommended) Step 3: Install Required Tools Step 4: Clone and Setup Project Step 5: Initialize Workspace Step 6: Configure Environment Step 7: Discover and Load Modules Step 8: Validate Configuration Step 9: Deploy Servers Step 10: Install Task Services Step 11: Create Clusters Step 12: Verify Deployment Step 13: Post-Deployment Troubleshooting Next Steps","breadcrumbs":"From Scratch » Table of Contents","id":"1731","title":"Table of Contents"},"1732":{"body":"Before starting, ensure you have: ✅ Operating System : macOS, Linux, or Windows (WSL2 recommended) ✅ Administrator Access : Ability to install software and configure system ✅ Internet Connection : For downloading dependencies and accessing cloud providers ✅ Cloud Provider Credentials : UpCloud, AWS, or local development environment ✅ Basic Terminal Knowledge : Comfortable running shell commands ✅ Text Editor : vim, nano, VSCode, or your preferred editor","breadcrumbs":"From Scratch » Prerequisites","id":"1732","title":"Prerequisites"},"1733":{"body":"CPU : 2+ cores RAM : 8GB minimum, 16GB recommended Disk : 20GB free space minimum","breadcrumbs":"From Scratch » Recommended Hardware","id":"1733","title":"Recommended Hardware"},"1734":{"body":"Nushell 0.107.1+ is the primary shell and scripting language for the provisioning platform.","breadcrumbs":"From Scratch » Step 1: Install Nushell","id":"1734","title":"Step 1: Install Nushell"},"1735":{"body":"# Install Nushell\\nbrew install nushell # Verify installation\\nnu --version\\n# Expected: 0.107.1 or higher\\n```plaintext ### Linux (via Package Manager) **Ubuntu/Debian:** ```bash\\n# Add Nushell repository\\ncurl -fsSL https://starship.rs/install.sh | bash # Install Nushell\\nsudo apt update\\nsudo 
apt install nushell # Verify installation\\nnu --version\\n```plaintext **Fedora:** ```bash\\nsudo dnf install nushell\\nnu --version\\n```plaintext **Arch Linux:** ```bash\\nsudo pacman -S nushell\\nnu --version\\n```plaintext ### Linux/macOS (via Cargo) ```bash\\n# Install Rust (if not already installed)\\ncurl --proto \'=https\' --tlsv1.2 -sSf https://sh.rustup.rs | sh\\nsource $HOME/.cargo/env # Install Nushell\\ncargo install nu --locked # Verify installation\\nnu --version\\n```plaintext ### Windows (via Winget) ```powershell\\n# Install Nushell\\nwinget install nushell # Verify installation\\nnu --version\\n```plaintext ### Configure Nushell ```bash\\n# Start Nushell\\nnu # Configure (creates default config if not exists)\\nconfig nu\\n```plaintext --- ## Step 2: Install Nushell Plugins (Recommended) Native plugins provide **10-50x performance improvement** for authentication, KMS, and orchestrator operations. ### Why Install Plugins? **Performance Gains:** - 🚀 **KMS operations**: ~5ms vs ~50ms (10x faster)\\n- 🚀 **Orchestrator queries**: ~1ms vs ~30ms (30x faster)\\n- 🚀 **Batch encryption**: 100 files in 0.5s vs 5s (10x faster) **Benefits:** - ✅ Native Nushell integration (pipelines, data structures)\\n- ✅ OS keyring for secure token storage\\n- ✅ Offline capability (Age encryption, local orchestrator)\\n- ✅ Graceful fallback to HTTP if not installed ### Prerequisites for Building Plugins ```bash\\n# Install Rust toolchain (if not already installed)\\ncurl --proto \'=https\' --tlsv1.2 -sSf https://sh.rustup.rs | sh\\nsource $HOME/.cargo/env\\nrustc --version\\n# Expected: rustc 1.75+ or higher # Linux only: Install development packages\\nsudo apt install libssl-dev pkg-config # Ubuntu/Debian\\nsudo dnf install openssl-devel # Fedora # Linux only: Install keyring service (required for auth plugin)\\nsudo apt install gnome-keyring # Ubuntu/Debian (GNOME)\\nsudo apt install kwalletmanager # Ubuntu/Debian (KDE)\\n```plaintext ### Build Plugins ```bash\\n# Navigate to plugins directory\\ncd provisioning/core/plugins/nushell-plugins # Build all three plugins in release mode (optimized)\\ncargo build --release --all # Expected output:\\n# Compiling nu_plugin_auth v0.1.0\\n# Compiling nu_plugin_kms v0.1.0\\n# Compiling nu_plugin_orchestrator v0.1.0\\n# Finished release [optimized] target(s) in 2m 15s\\n```plaintext **Build time**: ~2-5 minutes depending on hardware ### Register Plugins with Nushell ```bash\\n# Register all three plugins (full paths recommended)\\nplugin add $PWD/target/release/nu_plugin_auth\\nplugin add $PWD/target/release/nu_plugin_kms\\nplugin add $PWD/target/release/nu_plugin_orchestrator # Alternative (from plugins directory)\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator\\n```plaintext ### Verify Plugin Installation ```bash\\n# List registered plugins\\nplugin list | where name =~ \\"auth|kms|orch\\" # Expected output:\\n# ╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮\\n# │ # │ name │ version │ filename │\\n# ├───┼─────────────────────────┼─────────┼───────────────────────────────────┤\\n# │ 0 │ nu_plugin_auth │ 0.1.0 │ .../nu_plugin_auth │\\n# │ 1 │ nu_plugin_kms │ 0.1.0 │ .../nu_plugin_kms │\\n# │ 2 │ nu_plugin_orchestrator │ 0.1.0 │ .../nu_plugin_orchestrator │\\n# ╰───┴─────────────────────────┴─────────┴───────────────────────────────────╯ # Test each plugin\\nauth --help # Should show auth commands\\nkms --help # Should show kms 
commands\\norch --help # Should show orch commands\\n```plaintext ### Configure Plugin Environments ```bash\\n# Add to ~/.config/nushell/env.nu\\n$env.CONTROL_CENTER_URL = \\"http://localhost:3000\\"\\n$env.RUSTYVAULT_ADDR = \\"http://localhost:8200\\"\\n$env.RUSTYVAULT_TOKEN = \\"your-vault-token-here\\"\\n$env.ORCHESTRATOR_DATA_DIR = \\"provisioning/platform/orchestrator/data\\" # For Age encryption (local development)\\n$env.AGE_IDENTITY = $\\"($env.HOME)/.age/key.txt\\"\\n$env.AGE_RECIPIENT = \\"age1xxxxxxxxx\\" # Replace with your public key\\n```plaintext ### Test Plugins (Quick Smoke Test) ```bash\\n# Test KMS plugin (requires backend configured)\\nkms status\\n# Expected: { backend: \\"rustyvault\\", status: \\"healthy\\", ... }\\n# Or: Error if backend not configured (OK for now) # Test orchestrator plugin (reads local files)\\norch status\\n# Expected: { active_tasks: 0, completed_tasks: 0, health: \\"healthy\\" }\\n# Or: Error if orchestrator not started yet (OK for now) # Test auth plugin (requires control center)\\nauth verify\\n# Expected: { active: false }\\n# Or: Error if control center not running (OK for now)\\n```plaintext **Note**: It\'s OK if plugins show errors at this stage. We\'ll configure backends and services later. ### Skip Plugins? (Not Recommended) If you want to skip plugin installation for now: - ✅ All features work via HTTP API (slower but functional)\\n- ⚠️ You\'ll miss 10-50x performance improvements\\n- ⚠️ No offline capability for KMS/orchestrator\\n- ℹ️ You can install plugins later anytime To use HTTP fallback: ```bash\\n# System automatically uses HTTP if plugins not available\\n# No configuration changes needed\\n```plaintext --- ## Step 3: Install Required Tools ### Essential Tools **KCL (Configuration Language)** ```bash\\n# macOS\\nbrew install kcl # Linux\\ncurl -fsSL https://kcl-lang.io/script/install.sh | /bin/bash # Verify\\nkcl version\\n# Expected: 0.11.2 or higher\\n```plaintext **SOPS (Secrets Management)** ```bash\\n# macOS\\nbrew install sops # Linux\\nwget https://github.com/mozilla/sops/releases/download/v3.10.2/sops-v3.10.2.linux.amd64\\nsudo mv sops-v3.10.2.linux.amd64 /usr/local/bin/sops\\nsudo chmod +x /usr/local/bin/sops # Verify\\nsops --version\\n# Expected: 3.10.2 or higher\\n```plaintext **Age (Encryption Tool)** ```bash\\n# macOS\\nbrew install age # Linux\\nsudo apt install age # Ubuntu/Debian\\nsudo dnf install age # Fedora # Or from source\\ngo install filippo.io/age/cmd/...@latest # Verify\\nage --version\\n# Expected: 1.2.1 or higher # Generate Age key (for local encryption)\\nage-keygen -o ~/.age/key.txt\\ncat ~/.age/key.txt\\n# Save the public key (age1...) 
for later\\n```plaintext ### Optional but Recommended Tools **K9s (Kubernetes Management)** ```bash\\n# macOS\\nbrew install k9s # Linux\\ncurl -sS https://webinstall.dev/k9s | bash # Verify\\nk9s version\\n# Expected: 0.50.6 or higher\\n```plaintext **glow (Markdown Renderer)** ```bash\\n# macOS\\nbrew install glow # Linux\\nsudo apt install glow # Ubuntu/Debian\\nsudo dnf install glow # Fedora # Verify\\nglow --version\\n```plaintext --- ## Step 4: Clone and Setup Project ### Clone Repository ```bash\\n# Clone project\\ngit clone https://github.com/your-org/project-provisioning.git\\ncd project-provisioning # Or if already cloned, update to latest\\ngit pull origin main\\n```plaintext ### Add CLI to PATH (Optional) ```bash\\n# Add to ~/.bashrc or ~/.zshrc\\nexport PATH=\\"$PATH:/Users/Akasha/project-provisioning/provisioning/core/cli\\" # Or create symlink\\nsudo ln -s /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning /usr/local/bin/provisioning # Verify\\nprovisioning version\\n# Expected: 3.5.0\\n```plaintext --- ## Step 5: Initialize Workspace A workspace is a self-contained environment for managing infrastructure. ### Create New Workspace ```bash\\n# Initialize new workspace\\nprovisioning workspace init --name production # Or use interactive mode\\nprovisioning workspace init\\n# Name: production\\n# Description: Production infrastructure\\n# Provider: upcloud\\n```plaintext **What this creates:** The new workspace initialization now generates **KCL (Kusion Configuration Language) configuration files** for type-safe, schema-validated infrastructure definitions: ```plaintext\\nworkspace/\\n├── config/\\n│ ├── provisioning.k # Main KCL configuration (schema-validated)\\n│ ├── providers/\\n│ │ └── upcloud.toml # Provider-specific settings\\n│ ├── platform/ # Platform service configs\\n│ └── kms.toml # Key management settings\\n├── infra/ # Infrastructure definitions\\n├── extensions/ # Custom modules\\n└── runtime/ # Runtime data and state\\n```plaintext ### Workspace Configuration Format The workspace configuration now uses **KCL (type-safe)** instead of YAML. This provides: - ✅ **Type Safety**: Schema validation catches errors at load time\\n- ✅ **Immutability**: Enforces configuration immutability by default\\n- ✅ **Validation**: Semantic versioning, required fields, value constraints\\n- ✅ **Documentation**: Self-documenting with schema descriptions **Example KCL config** (`provisioning.k`): ```kcl\\nimport provisioning.workspace_config as ws workspace_config = ws.WorkspaceConfig { workspace: { name: \\"production\\" version: \\"1.0.0\\" created: \\"2025-12-03T14:30:00Z\\" } paths: { base: \\"/opt/workspaces/production\\" infra: \\"/opt/workspaces/production/infra\\" cache: \\"/opt/workspaces/production/.cache\\" # ... other paths } providers: { active: [\\"upcloud\\"] default: \\"upcloud\\" } # ... other sections\\n}\\n```plaintext **Backward Compatibility**: If you have existing YAML workspace configs (`provisioning.yaml`), they continue to work. The config loader checks for KCL files first, then falls back to YAML. 
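**Illustrative note:** the lookup order can be pictured with a small shell sketch (a sketch only; it assumes the legacy YAML file would sit next to the KCL one in `workspace/config/`): ```bash\\n# Illustrative only: config loader preference - KCL first, then legacy YAML\\nif [ -f workspace/config/provisioning.k ]; then\\n echo \\"Using KCL workspace config\\"\\nelif [ -f workspace/config/provisioning.yaml ]; then\\n echo \\"Falling back to legacy YAML workspace config\\"\\nelse\\n echo \\"No workspace config found\\" >&2\\nfi\\n```plaintext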
### Verify Workspace ```bash\\n# Show workspace info\\nprovisioning workspace info # List all workspaces\\nprovisioning workspace list # Show active workspace\\nprovisioning workspace active\\n# Expected: production\\n```plaintext ### View and Validate Workspace Configuration Now you can inspect and validate your KCL workspace configuration: ```bash\\n# View complete workspace configuration\\nprovisioning workspace config show # Show specific workspace\\nprovisioning workspace config show production # View configuration in different formats\\nprovisioning workspace config show --format=json\\nprovisioning workspace config show --format=yaml\\nprovisioning workspace config show --format=kcl # Raw KCL file # Validate workspace configuration\\nprovisioning workspace config validate\\n# Output: ✅ Validation complete - all configs are valid # Show configuration hierarchy (priority order)\\nprovisioning workspace config hierarchy\\n```plaintext **Configuration Validation**: The KCL schema automatically validates: - ✅ Semantic versioning format (e.g., \\"1.0.0\\")\\n- ✅ Required sections present (workspace, paths, provisioning, etc.)\\n- ✅ Valid file paths and types\\n- ✅ Provider configuration exists for active providers\\n- ✅ KMS and SOPS settings properly configured --- ## Step 6: Configure Environment ### Set Provider Credentials **UpCloud Provider:** ```bash\\n# Create provider config\\nvim workspace/config/providers/upcloud.toml\\n```plaintext ```toml\\n[upcloud]\\nusername = \\"your-upcloud-username\\"\\npassword = \\"your-upcloud-password\\" # Will be encrypted # Default settings\\ndefault_zone = \\"de-fra1\\"\\ndefault_plan = \\"2xCPU-4GB\\"\\n```plaintext **AWS Provider:** ```bash\\n# Create AWS config\\nvim workspace/config/providers/aws.toml\\n```plaintext ```toml\\n[aws]\\nregion = \\"us-east-1\\"\\naccess_key_id = \\"AKIAXXXXX\\"\\nsecret_access_key = \\"xxxxx\\" # Will be encrypted # Default settings\\ndefault_instance_type = \\"t3.medium\\"\\ndefault_region = \\"us-east-1\\"\\n```plaintext ### Encrypt Sensitive Data ```bash\\n# Generate Age key if not done already\\nage-keygen -o ~/.age/key.txt # Encrypt provider configs\\nkms encrypt (open workspace/config/providers/upcloud.toml) --backend age \\\\ | save workspace/config/providers/upcloud.toml.enc # Or use SOPS\\nsops --encrypt --age $(cat ~/.age/key.txt | grep \\"public key:\\" | cut -d: -f2) \\\\ workspace/config/providers/upcloud.toml > workspace/config/providers/upcloud.toml.enc # Remove plaintext\\nrm workspace/config/providers/upcloud.toml\\n```plaintext ### Configure Local Overrides ```bash\\n# Edit user-specific settings\\nvim workspace/config/local-overrides.toml\\n```plaintext ```toml\\n[user]\\nname = \\"admin\\"\\nemail = \\"admin@example.com\\" [preferences]\\neditor = \\"vim\\"\\noutput_format = \\"yaml\\"\\nconfirm_delete = true\\nconfirm_deploy = true [http]\\nuse_curl = true # Use curl instead of ureq [paths]\\nssh_key = \\"~/.ssh/id_ed25519\\"\\n```plaintext --- ## Step 7: Discover and Load Modules ### Discover Available Modules ```bash\\n# Discover task services\\nprovisioning module discover taskserv\\n# Shows: kubernetes, containerd, etcd, cilium, helm, etc. 
# Discover providers\\nprovisioning module discover provider\\n# Shows: upcloud, aws, local # Discover clusters\\nprovisioning module discover cluster\\n# Shows: buildkit, registry, monitoring, etc.\\n```plaintext ### Load Modules into Workspace ```bash\\n# Load Kubernetes taskserv\\nprovisioning module load taskserv production kubernetes # Load multiple modules\\nprovisioning module load taskserv production kubernetes containerd cilium # Load cluster configuration\\nprovisioning module load cluster production buildkit # Verify loaded modules\\nprovisioning module list taskserv production\\nprovisioning module list cluster production\\n```plaintext --- ## Step 8: Validate Configuration Before deploying, validate all configuration: ```bash\\n# Validate workspace configuration\\nprovisioning workspace validate # Validate infrastructure configuration\\nprovisioning validate config # Validate specific infrastructure\\nprovisioning infra validate --infra production # Check environment variables\\nprovisioning env # Show all configuration and environment\\nprovisioning allenv\\n```plaintext **Expected output:** ```plaintext\\n✓ Configuration valid\\n✓ Provider credentials configured\\n✓ Workspace initialized\\n✓ Modules loaded: 3 taskservs, 1 cluster\\n✓ SSH key configured\\n✓ Age encryption key available\\n```plaintext **Fix any errors** before proceeding to deployment. --- ## Step 9: Deploy Servers ### Preview Server Creation (Dry Run) ```bash\\n# Check what would be created (no actual changes)\\nprovisioning server create --infra production --check # With debug output for details\\nprovisioning server create --infra production --check --debug\\n```plaintext **Review the output:** - Server names and configurations\\n- Zones and regions\\n- CPU, memory, disk specifications\\n- Estimated costs\\n- Network settings ### Create Servers ```bash\\n# Create servers (with confirmation prompt)\\nprovisioning server create --infra production # Or auto-confirm (skip prompt)\\nprovisioning server create --infra production --yes # Wait for completion\\nprovisioning server create --infra production --wait\\n```plaintext **Expected output:** ```plaintext\\nCreating servers for infrastructure: production ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8GB) ● Creating server: k8s-worker-01 (de-fra1, 4xCPU-8GB) ● Creating server: k8s-worker-02 (de-fra1, 4xCPU-8GB) ✓ Created 3 servers in 120 seconds Servers: • k8s-master-01: 192.168.1.10 (Running) • k8s-worker-01: 192.168.1.11 (Running) • k8s-worker-02: 192.168.1.12 (Running)\\n```plaintext ### Verify Server Creation ```bash\\n# List all servers\\nprovisioning server list --infra production # Show detailed server info\\nprovisioning server list --infra production --out yaml # SSH to server (test connectivity)\\nprovisioning server ssh k8s-master-01\\n# Type \'exit\' to return\\n```plaintext --- ## Step 10: Install Task Services Task services are infrastructure components like Kubernetes, databases, monitoring, etc. 
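**Optional scripting note:** if you prefer to drive the per-service installs below from a loop, a minimal sketch (using the same `taskserv create` commands and the dependency order noted later in this step) is: ```bash\\n# Sketch only: dry-run each taskserv in dependency order before installing for real\\nfor svc in containerd etcd kubernetes; do\\n provisioning taskserv create \\"$svc\\" --infra production --check\\ndone\\n```plaintext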
### Install Kubernetes (Check Mode First) ```bash\\n# Preview Kubernetes installation\\nprovisioning taskserv create kubernetes --infra production --check # Shows:\\n# - Dependencies required (containerd, etcd)\\n# - Configuration to be applied\\n# - Resources needed\\n# - Estimated installation time\\n```plaintext ### Install Kubernetes ```bash\\n# Install Kubernetes (with dependencies)\\nprovisioning taskserv create kubernetes --infra production # Or install dependencies first\\nprovisioning taskserv create containerd --infra production\\nprovisioning taskserv create etcd --infra production\\nprovisioning taskserv create kubernetes --infra production # Monitor progress\\nprovisioning workflow monitor \\n```plaintext **Expected output:** ```plaintext\\nInstalling taskserv: kubernetes ● Installing containerd on k8s-master-01 ● Installing containerd on k8s-worker-01 ● Installing containerd on k8s-worker-02 ✓ Containerd installed (30s) ● Installing etcd on k8s-master-01 ✓ etcd installed (20s) ● Installing Kubernetes control plane on k8s-master-01 ✓ Kubernetes control plane ready (45s) ● Joining worker nodes ✓ k8s-worker-01 joined (15s) ✓ k8s-worker-02 joined (15s) ✓ Kubernetes installation complete (125 seconds) Cluster Info: • Version: 1.28.0 • Nodes: 3 (1 control-plane, 2 workers) • API Server: https://192.168.1.10:6443\\n```plaintext ### Install Additional Services ```bash\\n# Install Cilium (CNI)\\nprovisioning taskserv create cilium --infra production # Install Helm\\nprovisioning taskserv create helm --infra production # Verify all taskservs\\nprovisioning taskserv list --infra production\\n```plaintext --- ## Step 11: Create Clusters Clusters are complete application stacks (e.g., BuildKit, OCI Registry, Monitoring). ### Create BuildKit Cluster (Check Mode) ```bash\\n# Preview cluster creation\\nprovisioning cluster create buildkit --infra production --check # Shows:\\n# - Components to be deployed\\n# - Dependencies required\\n# - Configuration values\\n# - Resource requirements\\n```plaintext ### Create BuildKit Cluster ```bash\\n# Create BuildKit cluster\\nprovisioning cluster create buildkit --infra production # Monitor deployment\\nprovisioning workflow monitor # Or use plugin for faster monitoring\\norch tasks --status running\\n```plaintext **Expected output:** ```plaintext\\nCreating cluster: buildkit ● Deploying BuildKit daemon ● Deploying BuildKit worker ● Configuring BuildKit cache ● Setting up BuildKit registry integration ✓ BuildKit cluster ready (60 seconds) Cluster Info: • BuildKit version: 0.12.0 • Workers: 2 • Cache: 50GB • Registry: registry.production.local\\n```plaintext ### Verify Cluster ```bash\\n# List all clusters\\nprovisioning cluster list --infra production # Show cluster details\\nprovisioning cluster list --infra production --out yaml # Check cluster health\\nkubectl get pods -n buildkit\\n```plaintext --- ## Step 12: Verify Deployment ### Comprehensive Health Check ```bash\\n# Check orchestrator status\\norch status\\n# or\\nprovisioning orchestrator status # Check all servers\\nprovisioning server list --infra production # Check all taskservs\\nprovisioning taskserv list --infra production # Check all clusters\\nprovisioning cluster list --infra production # Verify Kubernetes cluster\\nkubectl get nodes\\nkubectl get pods --all-namespaces\\n```plaintext ### Run Validation Tests ```bash\\n# Validate infrastructure\\nprovisioning infra validate --infra production # Test connectivity\\nprovisioning server ssh k8s-master-01 \\"kubectl get nodes\\" # Test 
BuildKit\\nkubectl exec -it -n buildkit buildkit-0 -- buildctl --version\\n```plaintext ### Expected Results All checks should show: - ✅ Servers: Running\\n- ✅ Taskservs: Installed and healthy\\n- ✅ Clusters: Deployed and operational\\n- ✅ Kubernetes: 3/3 nodes ready\\n- ✅ BuildKit: 2/2 workers ready --- ## Step 13: Post-Deployment ### Configure kubectl Access ```bash\\n# Get kubeconfig from master node\\nprovisioning server ssh k8s-master-01 \\"cat ~/.kube/config\\" > ~/.kube/config-production # Set KUBECONFIG\\nexport KUBECONFIG=~/.kube/config-production # Verify access\\nkubectl get nodes\\nkubectl get pods --all-namespaces\\n```plaintext ### Set Up Monitoring (Optional) ```bash\\n# Deploy monitoring stack\\nprovisioning cluster create monitoring --infra production # Access Grafana\\nkubectl port-forward -n monitoring svc/grafana 3000:80\\n# Open: http://localhost:3000\\n```plaintext ### Configure CI/CD Integration (Optional) ```bash\\n# Generate CI/CD credentials\\nprovisioning secrets generate aws --ttl 12h # Create CI/CD kubeconfig\\nkubectl create serviceaccount ci-cd -n default\\nkubectl create clusterrolebinding ci-cd --clusterrole=admin --serviceaccount=default:ci-cd\\n```plaintext ### Backup Configuration ```bash\\n# Backup workspace configuration\\ntar -czf workspace-production-backup.tar.gz workspace/ # Encrypt backup\\nkms encrypt (open workspace-production-backup.tar.gz | encode base64) --backend age \\\\ | save workspace-production-backup.tar.gz.enc # Store securely (S3, Vault, etc.)\\n```plaintext --- ## Troubleshooting ### Server Creation Fails **Problem**: Server creation times out or fails ```bash\\n# Check provider credentials\\nprovisioning validate config # Check provider API status\\ncurl -u username:password https://api.upcloud.com/1.3/account # Try with debug mode\\nprovisioning server create --infra production --check --debug\\n```plaintext ### Taskserv Installation Fails **Problem**: Kubernetes installation fails ```bash\\n# Check server connectivity\\nprovisioning server ssh k8s-master-01 # Check logs\\nprovisioning orchestrator logs | grep kubernetes # Check dependencies\\nprovisioning taskserv list --infra production | where status == \\"failed\\" # Retry installation\\nprovisioning taskserv delete kubernetes --infra production\\nprovisioning taskserv create kubernetes --infra production\\n```plaintext ### Plugin Commands Don\'t Work **Problem**: `auth`, `kms`, or `orch` commands not found ```bash\\n# Check plugin registration\\nplugin list | where name =~ \\"auth|kms|orch\\" # Re-register if missing\\ncd provisioning/core/plugins/nushell-plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Restart Nushell\\nexit\\nnu\\n```plaintext ### KMS Encryption Fails **Problem**: `kms encrypt` returns error ```bash\\n# Check backend status\\nkms status # Check RustyVault running\\ncurl http://localhost:8200/v1/sys/health # Use Age backend instead (local)\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxxxx # Check Age key\\ncat ~/.age/key.txt\\n```plaintext ### Orchestrator Not Running **Problem**: `orch status` returns error ```bash\\n# Check orchestrator status\\nps aux | grep orchestrator # Start orchestrator\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background # Check logs\\ntail -f provisioning/platform/orchestrator/data/orchestrator.log\\n```plaintext ### Configuration Validation Errors **Problem**: `provisioning validate config` 
shows errors ```bash\\n# Show detailed errors\\nprovisioning validate config --debug # Check configuration files\\nprovisioning allenv # Fix missing settings\\nvim workspace/config/local-overrides.toml\\n```plaintext --- ## Next Steps ### Explore Advanced Features 1. **Multi-Environment Deployment** ```bash # Create dev and staging workspaces provisioning workspace create dev provisioning workspace create staging provisioning workspace switch dev Batch Operations # Deploy to multiple clouds\\nprovisioning batch submit workflows/multi-cloud-deploy.k Security Features # Enable MFA\\nauth mfa enroll totp # Set up break-glass\\nprovisioning break-glass request \\"Emergency access\\" Compliance and Audit # Generate compliance report\\nprovisioning compliance report --standard soc2","breadcrumbs":"From Scratch » macOS (via Homebrew)","id":"1735","title":"macOS (via Homebrew)"},"1736":{"body":"Quick Reference : provisioning sc or docs/guides/quickstart-cheatsheet.md Update Guide : docs/guides/update-infrastructure.md Customize Guide : docs/guides/customize-infrastructure.md Plugin Guide : docs/user/PLUGIN_INTEGRATION_GUIDE.md Security System : docs/architecture/ADR-009-security-system-complete.md","breadcrumbs":"From Scratch » Learn More","id":"1736","title":"Learn More"},"1737":{"body":"# Show help for any command\\nprovisioning help\\nprovisioning help server\\nprovisioning help taskserv # Check version\\nprovisioning version # Start Nushell session with provisioning library\\nprovisioning nu\\n```plaintext --- ## Summary You\'ve successfully: ✅ Installed Nushell and essential tools\\n✅ Built and registered native plugins (10-50x faster operations)\\n✅ Cloned and configured the project\\n✅ Initialized a production workspace\\n✅ Configured provider credentials\\n✅ Deployed servers\\n✅ Installed Kubernetes and task services\\n✅ Created application clusters\\n✅ Verified complete deployment **Your infrastructure is now ready for production use!** --- **Estimated Total Time**: 30-60 minutes\\n**Next Guide**: [Update Infrastructure](update-infrastructure.md)\\n**Questions?**: Open an issue or contact **Last Updated**: 2025-10-09\\n**Version**: 3.5.0","breadcrumbs":"From Scratch » Get Help","id":"1737","title":"Get Help"},"1738":{"body":"Goal : Safely update running infrastructure with minimal downtime Time : 15-30 minutes Difficulty : Intermediate","breadcrumbs":"Update Infrastructure » Update Existing Infrastructure","id":"1738","title":"Update Existing Infrastructure"},"1739":{"body":"This guide covers: Checking for updates Planning update strategies Updating task services Rolling updates Rollback procedures Verification","breadcrumbs":"Update Infrastructure » Overview","id":"1739","title":"Overview"},"174":{"body":"","breadcrumbs":"First Deployment » Troubleshooting","id":"174","title":"Troubleshooting"},"1740":{"body":"","breadcrumbs":"Update Infrastructure » Update Strategies","id":"1740","title":"Update Strategies"},"1741":{"body":"Best for : Non-critical environments, development, staging # Direct update without downtime consideration\\nprovisioning t create --infra \\n```plaintext ### Strategy 2: Rolling Updates (Recommended) **Best for**: Production environments, high availability ```bash\\n# Update servers one by one\\nprovisioning s update --infra --rolling\\n```plaintext ### Strategy 3: Blue-Green Deployment (Safest) **Best for**: Critical production, zero-downtime requirements ```bash\\n# Create new infrastructure, switch traffic, remove old\\nprovisioning ws init -green\\n# ... 
configure and deploy\\n# ... switch traffic\\nprovisioning ws delete -blue\\n```plaintext ## Step 1: Check for Updates ### 1.1 Check All Task Services ```bash\\n# Check all taskservs for updates\\nprovisioning t check-updates\\n```plaintext **Expected Output:** ```plaintext\\n📦 Task Service Update Check: NAME CURRENT LATEST STATUS\\nkubernetes 1.29.0 1.30.0 ⬆️ update available\\ncontainerd 1.7.13 1.7.13 ✅ up-to-date\\ncilium 1.14.5 1.15.0 ⬆️ update available\\npostgres 15.5 16.1 ⬆️ update available\\nredis 7.2.3 7.2.3 ✅ up-to-date Updates available: 3\\n```plaintext ### 1.2 Check Specific Task Service ```bash\\n# Check specific taskserv\\nprovisioning t check-updates kubernetes\\n```plaintext **Expected Output:** ```plaintext\\n📦 Kubernetes Update Check: Current: 1.29.0\\nLatest: 1.30.0\\nStatus: ⬆️ Update available Changelog: • Enhanced security features • Performance improvements • Bug fixes in kube-apiserver • New workload resource types Breaking Changes: • None Recommended: ✅ Safe to update\\n```plaintext ### 1.3 Check Version Status ```bash\\n# Show detailed version information\\nprovisioning version show\\n```plaintext **Expected Output:** ```plaintext\\n📋 Component Versions: COMPONENT CURRENT LATEST DAYS OLD STATUS\\nkubernetes 1.29.0 1.30.0 45 ⬆️ update\\ncontainerd 1.7.13 1.7.13 0 ✅ current\\ncilium 1.14.5 1.15.0 30 ⬆️ update\\npostgres 15.5 16.1 60 ⬆️ update (major)\\nredis 7.2.3 7.2.3 0 ✅ current\\n```plaintext ### 1.4 Check for Security Updates ```bash\\n# Check for security-related updates\\nprovisioning version updates --security-only\\n```plaintext ## Step 2: Plan Your Update ### 2.1 Review Current Configuration ```bash\\n# Show current infrastructure\\nprovisioning show settings --infra my-production\\n```plaintext ### 2.2 Backup Configuration ```bash\\n# Create configuration backup\\ncp -r workspace/infra/my-production workspace/infra/my-production.backup-$(date +%Y%m%d) # Or use built-in backup\\nprovisioning ws backup my-production\\n```plaintext **Expected Output:** ```plaintext\\n✅ Backup created: workspace/backups/my-production-20250930.tar.gz\\n```plaintext ### 2.3 Create Update Plan ```bash\\n# Generate update plan\\nprovisioning plan update --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n📝 Update Plan for my-production: Phase 1: Minor Updates (Low Risk) • containerd: No update needed • redis: No update needed Phase 2: Patch Updates (Medium Risk) • cilium: 1.14.5 → 1.15.0 (estimated 5 minutes) Phase 3: Major Updates (High Risk - Requires Testing) • kubernetes: 1.29.0 → 1.30.0 (estimated 15 minutes) • postgres: 15.5 → 16.1 (estimated 10 minutes, may require data migration) Recommended Order: 1. Update cilium (low risk) 2. Update kubernetes (test in staging first) 3. Update postgres (requires maintenance window) Total Estimated Time: 30 minutes\\nRecommended: Test in staging environment first\\n```plaintext ## Step 3: Update Task Services ### 3.1 Update Non-Critical Service (Cilium Example) #### Dry-Run Update ```bash\\n# Test update without applying\\nprovisioning t create cilium --infra my-production --check\\n```plaintext **Expected Output:** ```plaintext\\n🔍 CHECK MODE: Simulating Cilium update Current: 1.14.5\\nTarget: 1.15.0 Would perform: 1. Download Cilium 1.15.0 2. Update configuration 3. Rolling restart of Cilium pods 4. Verify connectivity Estimated downtime: <1 minute per node\\nNo errors detected. 
Ready to update.\\n```plaintext #### Generate Updated Configuration ```bash\\n# Generate new configuration\\nprovisioning t generate cilium --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n✅ Generated Cilium configuration (version 1.15.0) Saved to: workspace/infra/my-production/taskservs/cilium.k\\n```plaintext #### Apply Update ```bash\\n# Apply update\\nprovisioning t create cilium --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating Cilium on my-production... Downloading Cilium 1.15.0... ⏳\\n✅ Downloaded Updating configuration... ⏳\\n✅ Configuration updated Rolling restart: web-01... ⏳\\n✅ web-01 updated (Cilium 1.15.0) Rolling restart: web-02... ⏳\\n✅ web-02 updated (Cilium 1.15.0) Verifying connectivity... ⏳\\n✅ All nodes connected 🎉 Cilium update complete! Version: 1.14.5 → 1.15.0 Downtime: 0 minutes\\n```plaintext #### Verify Update ```bash\\n# Verify updated version\\nprovisioning version taskserv cilium\\n```plaintext **Expected Output:** ```plaintext\\n📦 Cilium Version Info: Installed: 1.15.0\\nLatest: 1.15.0\\nStatus: ✅ Up-to-date Nodes: ✅ web-01: 1.15.0 (running) ✅ web-02: 1.15.0 (running)\\n```plaintext ### 3.2 Update Critical Service (Kubernetes Example) #### Test in Staging First ```bash\\n# If you have staging environment\\nprovisioning t create kubernetes --infra my-staging --check\\nprovisioning t create kubernetes --infra my-staging # Run integration tests\\nprovisioning test kubernetes --infra my-staging\\n```plaintext #### Backup Current State ```bash\\n# Backup Kubernetes state\\nkubectl get all -A -o yaml > k8s-backup-$(date +%Y%m%d).yaml # Backup etcd (if using external etcd)\\nprovisioning t backup kubernetes --infra my-production\\n```plaintext #### Schedule Maintenance Window ```bash\\n# Set maintenance mode (optional, if supported)\\nprovisioning maintenance enable --infra my-production --duration 30m\\n```plaintext #### Update Kubernetes ```bash\\n# Update control plane first\\nprovisioning t create kubernetes --infra my-production --control-plane-only\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating Kubernetes control plane on my-production... Draining control plane: web-01... ⏳\\n✅ web-01 drained Updating control plane: web-01... ⏳\\n✅ web-01 updated (Kubernetes 1.30.0) Uncordoning: web-01... ⏳\\n✅ web-01 ready Verifying control plane... ⏳\\n✅ Control plane healthy 🎉 Control plane update complete!\\n```plaintext ```bash\\n# Update worker nodes one by one\\nprovisioning t create kubernetes --infra my-production --workers-only --rolling\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating Kubernetes workers on my-production... Rolling update: web-02... Draining... ⏳ ✅ Drained (pods rescheduled) Updating... ⏳ ✅ Updated (Kubernetes 1.30.0) Uncordoning... ⏳ ✅ Ready Waiting for pods to stabilize... ⏳ ✅ All pods running 🎉 Worker update complete! Updated: web-02 Version: 1.30.0\\n```plaintext #### Verify Update ```bash\\n# Verify Kubernetes cluster\\nkubectl get nodes\\nprovisioning version taskserv kubernetes\\n```plaintext **Expected Output:** ```plaintext\\nNAME STATUS ROLES AGE VERSION\\nweb-01 Ready control-plane 30d v1.30.0\\nweb-02 Ready 30d v1.30.0\\n```plaintext ```bash\\n# Run smoke tests\\nprovisioning test kubernetes --infra my-production\\n```plaintext ### 3.3 Update Database (PostgreSQL Example) ⚠️ **WARNING**: Database updates may require data migration. Always backup first! 
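**Pre-flight note:** before a major database upgrade it helps to confirm that a recent dump actually exists; the guard below is a hypothetical sketch (the backup path matches the example output shown in this section): ```bash\\n# Hypothetical guard: abort the upgrade if no PostgreSQL dump is present\\nBACKUP_DIR=workspace/backups/postgres\\nlatest=$(ls -t $BACKUP_DIR/*.sql.gz 2>/dev/null | head -n 1)\\nif [ -z \\"$latest\\" ]; then\\n echo \\"No PostgreSQL backup found - run the backup step first\\" >&2\\n exit 1\\nfi\\necho \\"Most recent backup: $latest\\"\\n```plaintext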
#### Backup Database ```bash\\n# Backup PostgreSQL database\\nprovisioning t backup postgres --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n🗄️ Backing up PostgreSQL... Creating dump: my-production-postgres-20250930.sql... ⏳\\n✅ Dump created (2.3 GB) Compressing... ⏳\\n✅ Compressed (450 MB) Saved to: workspace/backups/postgres/my-production-20250930.sql.gz\\n```plaintext #### Check Compatibility ```bash\\n# Check if data migration is needed\\nprovisioning t check-migration postgres --from 15.5 --to 16.1\\n```plaintext **Expected Output:** ```plaintext\\n🔍 PostgreSQL Migration Check: From: 15.5\\nTo: 16.1 Migration Required: ✅ Yes (major version change) Steps Required: 1. Dump database with pg_dump 2. Stop PostgreSQL 15.5 3. Install PostgreSQL 16.1 4. Initialize new data directory 5. Restore from dump Estimated Time: 15-30 minutes (depending on data size)\\nEstimated Downtime: 15-30 minutes Recommended: Use streaming replication for zero-downtime upgrade\\n```plaintext #### Perform Update ```bash\\n# Update PostgreSQL (with automatic migration)\\nprovisioning t create postgres --infra my-production --migrate\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating PostgreSQL on my-production... ⚠️ Major version upgrade detected (15.5 → 16.1) Automatic migration will be performed Dumping database... ⏳\\n✅ Database dumped (2.3 GB) Stopping PostgreSQL 15.5... ⏳\\n✅ Stopped Installing PostgreSQL 16.1... ⏳\\n✅ Installed Initializing new data directory... ⏳\\n✅ Initialized Restoring database... ⏳\\n✅ Restored (2.3 GB) Starting PostgreSQL 16.1... ⏳\\n✅ Started Verifying data integrity... ⏳\\n✅ All tables verified 🎉 PostgreSQL update complete! Version: 15.5 → 16.1 Downtime: 18 minutes\\n```plaintext #### Verify Update ```bash\\n# Verify PostgreSQL\\nprovisioning version taskserv postgres\\nssh db-01 \\"psql --version\\"\\n```plaintext ## Step 4: Update Multiple Services ### 4.1 Batch Update (Sequentially) ```bash\\n# Update multiple taskservs one by one\\nprovisioning t update --infra my-production --taskservs cilium,containerd,redis\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating 3 taskservs on my-production... [1/3] Updating cilium... ⏳\\n✅ cilium updated (1.15.0) [2/3] Updating containerd... ⏳\\n✅ containerd updated (1.7.14) [3/3] Updating redis... ⏳\\n✅ redis updated (7.2.4) 🎉 All updates complete! Updated: 3 taskservs Total time: 8 minutes\\n```plaintext ### 4.2 Parallel Update (Non-Dependent Services) ```bash\\n# Update taskservs in parallel (if they don\'t depend on each other)\\nprovisioning t update --infra my-production --taskservs redis,postgres --parallel\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating 2 taskservs in parallel on my-production... redis: Updating... ⏳\\npostgres: Updating... ⏳ redis: ✅ Updated (7.2.4)\\npostgres: ✅ Updated (16.1) 🎉 All updates complete! 
Updated: 2 taskservs Total time: 3 minutes (parallel)\\n```plaintext ## Step 5: Update Server Configuration ### 5.1 Update Server Resources ```bash\\n# Edit server configuration\\nprovisioning sops workspace/infra/my-production/servers.k\\n```plaintext **Example: Upgrade server plan** ```kcl\\n# Before\\n{ name = \\"web-01\\" plan = \\"1xCPU-2GB\\" # Old plan\\n} # After\\n{ name = \\"web-01\\" plan = \\"2xCPU-4GB\\" # New plan\\n}\\n```plaintext ```bash\\n# Apply server update\\nprovisioning s update --infra my-production --check\\nprovisioning s update --infra my-production\\n```plaintext ### 5.2 Update Server OS ```bash\\n# Update operating system packages\\nprovisioning s update --infra my-production --os-update\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Updating OS packages on my-production servers... web-01: Updating packages... ⏳\\n✅ web-01: 24 packages updated web-02: Updating packages... ⏳\\n✅ web-02: 24 packages updated db-01: Updating packages... ⏳\\n✅ db-01: 24 packages updated 🎉 OS updates complete!\\n```plaintext ## Step 6: Rollback Procedures ### 6.1 Rollback Task Service If update fails or causes issues: ```bash\\n# Rollback to previous version\\nprovisioning t rollback cilium --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n🔄 Rolling back Cilium on my-production... Current: 1.15.0\\nTarget: 1.14.5 (previous version) Rolling back: web-01... ⏳\\n✅ web-01 rolled back Rolling back: web-02... ⏳\\n✅ web-02 rolled back Verifying connectivity... ⏳\\n✅ All nodes connected 🎉 Rollback complete! Version: 1.15.0 → 1.14.5\\n```plaintext ### 6.2 Rollback from Backup ```bash\\n# Restore configuration from backup\\nprovisioning ws restore my-production --from workspace/backups/my-production-20250930.tar.gz\\n```plaintext ### 6.3 Emergency Rollback ```bash\\n# Complete infrastructure rollback\\nprovisioning rollback --infra my-production --to-snapshot \\n```plaintext ## Step 7: Post-Update Verification ### 7.1 Verify All Components ```bash\\n# Check overall health\\nprovisioning health --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n🏥 Health Check: my-production Servers: ✅ web-01: Healthy ✅ web-02: Healthy ✅ db-01: Healthy Task Services: ✅ kubernetes: 1.30.0 (healthy) ✅ containerd: 1.7.13 (healthy) ✅ cilium: 1.15.0 (healthy) ✅ postgres: 16.1 (healthy) Clusters: ✅ buildkit: 2/2 replicas (healthy) Overall Status: ✅ All systems healthy\\n```plaintext ### 7.2 Verify Version Updates ```bash\\n# Verify all versions are updated\\nprovisioning version show\\n```plaintext ### 7.3 Run Integration Tests ```bash\\n# Run comprehensive tests\\nprovisioning test all --infra my-production\\n```plaintext **Expected Output:** ```plaintext\\n🧪 Running Integration Tests... [1/5] Server connectivity... ⏳\\n✅ All servers reachable [2/5] Kubernetes health... ⏳\\n✅ All nodes ready, all pods running [3/5] Network connectivity... ⏳\\n✅ All services reachable [4/5] Database connectivity... ⏳\\n✅ PostgreSQL responsive [5/5] Application health... 
⏳\\n✅ All applications healthy 🎉 All tests passed!\\n```plaintext ### 7.4 Monitor for Issues ```bash\\n# Monitor logs for errors\\nprovisioning logs --infra my-production --follow --level error\\n```plaintext ## Update Checklist Use this checklist for production updates: - [ ] Check for available updates\\n- [ ] Review changelog and breaking changes\\n- [ ] Create configuration backup\\n- [ ] Test update in staging environment\\n- [ ] Schedule maintenance window\\n- [ ] Notify team/users of maintenance\\n- [ ] Update non-critical services first\\n- [ ] Verify each update before proceeding\\n- [ ] Update critical services with rolling updates\\n- [ ] Backup database before major updates\\n- [ ] Verify all components after update\\n- [ ] Run integration tests\\n- [ ] Monitor for issues (30 minutes minimum)\\n- [ ] Document any issues encountered\\n- [ ] Close maintenance window ## Common Update Scenarios ### Scenario 1: Minor Security Patch ```bash\\n# Quick security update\\nprovisioning t check-updates --security-only\\nprovisioning t update --infra my-production --security-patches --yes\\n```plaintext ### Scenario 2: Major Version Upgrade ```bash\\n# Careful major version update\\nprovisioning ws backup my-production\\nprovisioning t check-migration --from X.Y --to X+1.Y\\nprovisioning t create --infra my-production --migrate\\nprovisioning test all --infra my-production\\n```plaintext ### Scenario 3: Emergency Hotfix ```bash\\n# Apply critical hotfix immediately\\nprovisioning t create --infra my-production --hotfix --yes\\n```plaintext ## Troubleshooting Updates ### Issue: Update fails mid-process **Solution:** ```bash\\n# Check update status\\nprovisioning t status --infra my-production # Resume failed update\\nprovisioning t update --infra my-production --resume # Or rollback\\nprovisioning t rollback --infra my-production\\n```plaintext ### Issue: Service not starting after update **Solution:** ```bash\\n# Check logs\\nprovisioning logs --infra my-production # Verify configuration\\nprovisioning t validate --infra my-production # Rollback if necessary\\nprovisioning t rollback --infra my-production\\n```plaintext ### Issue: Data migration fails **Solution:** ```bash\\n# Check migration logs\\nprovisioning t migration-logs --infra my-production # Restore from backup\\nprovisioning t restore --infra my-production --from \\n```plaintext ## Best Practices 1. **Always Test First**: Test updates in staging before production\\n2. **Backup Everything**: Create backups before any update\\n3. **Update Gradually**: Update one service at a time\\n4. **Monitor Closely**: Watch for errors after each update\\n5. **Have Rollback Plan**: Always have a rollback strategy\\n6. **Document Changes**: Keep update logs for reference\\n7. **Schedule Wisely**: Update during low-traffic periods\\n8. **Verify Thoroughly**: Run tests after each update ## Next Steps - **[Customize Guide](customize-infrastructure.md)** - Customize your infrastructure\\n- **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure\\n- **[Workflow Guide](../development/workflow.md)** - Automate with workflows ## Quick Reference ```bash\\n# Update workflow\\nprovisioning t check-updates\\nprovisioning ws backup my-production\\nprovisioning t create --infra my-production --check\\nprovisioning t create --infra my-production\\nprovisioning version taskserv \\nprovisioning health --infra my-production\\nprovisioning test all --infra my-production\\n```plaintext --- *This guide is part of the provisioning project documentation. 
Last updated: 2025-09-30*","breadcrumbs":"Update Infrastructure » Strategy 1: In-Place Updates (Fastest)","id":"1741","title":"Strategy 1: In-Place Updates (Fastest)"},"1742":{"body":"Goal : Customize infrastructure using layers, templates, and configuration patterns Time : 20-40 minutes Difficulty : Intermediate to Advanced","breadcrumbs":"Customize Infrastructure » Customize Infrastructure","id":"1742","title":"Customize Infrastructure"},"1743":{"body":"This guide covers: Understanding the layer system Using templates Creating custom modules Configuration inheritance Advanced customization patterns","breadcrumbs":"Customize Infrastructure » Overview","id":"1743","title":"Overview"},"1744":{"body":"","breadcrumbs":"Customize Infrastructure » The Layer System","id":"1744","title":"The Layer System"},"1745":{"body":"The provisioning system uses a 3-layer architecture for configuration inheritance: ┌─────────────────────────────────────┐\\n│ Infrastructure Layer (Priority 300)│ ← Highest priority\\n│ workspace/infra/{name}/ │\\n│ • Project-specific configs │\\n│ • Environment customizations │\\n│ • Local overrides │\\n└─────────────────────────────────────┘ ↓ overrides\\n┌─────────────────────────────────────┐\\n│ Workspace Layer (Priority 200) │\\n│ provisioning/workspace/templates/ │\\n│ • Reusable patterns │\\n│ • Organization standards │\\n│ • Team conventions │\\n└─────────────────────────────────────┘ ↓ overrides\\n┌─────────────────────────────────────┐\\n│ Core Layer (Priority 100) │ ← Lowest priority\\n│ provisioning/extensions/ │\\n│ • System defaults │\\n│ • Provider implementations │\\n│ • Default taskserv configs │\\n└─────────────────────────────────────┘\\n```plaintext **Resolution Order**: Infrastructure (300) → Workspace (200) → Core (100) Higher numbers override lower numbers. 
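**Toy illustration:** the override rule can be shown with plain shell assignments (values taken from the resolution example later in this guide, where the Core default 1.29.0 is replaced by the Infrastructure override 1.30.0): ```bash\\n# Assignments applied lowest to highest priority, so the Infrastructure value survives\\nversion=\\"1.29.0\\" # Core layer (priority 100) default\\nversion=\\"1.30.0\\" # Infrastructure layer (priority 300) override\\necho \\"effective kubernetes version: $version\\"\\n```plaintext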
### View Layer Resolution ```bash\\n# Explain layer concept\\nprovisioning lyr explain\\n```plaintext **Expected Output:** ```plaintext\\n📚 LAYER SYSTEM EXPLAINED The layer system provides configuration inheritance across 3 levels: 🔵 CORE LAYER (100) - System Defaults Location: provisioning/extensions/ • Base taskserv configurations • Default provider settings • Standard cluster templates • Built-in extensions 🟢 WORKSPACE LAYER (200) - Shared Templates Location: provisioning/workspace/templates/ • Organization-wide patterns • Reusable configurations • Team standards • Custom extensions 🔴 INFRASTRUCTURE LAYER (300) - Project Specific Location: workspace/infra/{project}/ • Project-specific overrides • Environment customizations • Local modifications • Runtime settings Resolution: Infrastructure → Workspace → Core\\nHigher priority layers override lower ones.\\n```plaintext ```bash\\n# Show layer resolution for your project\\nprovisioning lyr show my-production\\n```plaintext **Expected Output:** ```plaintext\\n📊 Layer Resolution for my-production: LAYER PRIORITY SOURCE FILES\\nInfrastructure 300 workspace/infra/my-production/ 4 files • servers.k (overrides) • taskservs.k (overrides) • clusters.k (custom) • providers.k (overrides) Workspace 200 provisioning/workspace/templates/ 2 files • production.k (used) • kubernetes.k (used) Core 100 provisioning/extensions/ 15 files • taskservs/* (base configs) • providers/* (default settings) • clusters/* (templates) Resolution Order: Infrastructure → Workspace → Core\\nStatus: ✅ All layers resolved successfully\\n```plaintext ### Test Layer Resolution ```bash\\n# Test how a specific module resolves\\nprovisioning lyr test kubernetes my-production\\n```plaintext **Expected Output:** ```plaintext\\n🔍 Layer Resolution Test: kubernetes → my-production Resolving kubernetes configuration... 
🔴 Infrastructure Layer (300): ✅ Found: workspace/infra/my-production/taskservs/kubernetes.k Provides: • version = \\"1.30.0\\" (overrides) • control_plane_servers = [\\"web-01\\"] (overrides) • worker_servers = [\\"web-02\\"] (overrides) 🟢 Workspace Layer (200): ✅ Found: provisioning/workspace/templates/production-kubernetes.k Provides: • security_policies (inherited) • network_policies (inherited) • resource_quotas (inherited) 🔵 Core Layer (100): ✅ Found: provisioning/extensions/taskservs/kubernetes/config.k Provides: • default_version = \\"1.29.0\\" (base) • default_features (base) • default_plugins (base) Final Configuration (after merging all layers): version: \\"1.30.0\\" (from Infrastructure) control_plane_servers: [\\"web-01\\"] (from Infrastructure) worker_servers: [\\"web-02\\"] (from Infrastructure) security_policies: {...} (from Workspace) network_policies: {...} (from Workspace) resource_quotas: {...} (from Workspace) default_features: {...} (from Core) default_plugins: {...} (from Core) Resolution: ✅ Success\\n```plaintext ## Using Templates ### List Available Templates ```bash\\n# List all templates\\nprovisioning tpl list\\n```plaintext **Expected Output:** ```plaintext\\n📋 Available Templates: TASKSERVS: • production-kubernetes - Production-ready Kubernetes setup • production-postgres - Production PostgreSQL with replication • production-redis - Redis cluster with sentinel • development-kubernetes - Development Kubernetes (minimal) • ci-cd-pipeline - Complete CI/CD pipeline PROVIDERS: • upcloud-production - UpCloud production settings • upcloud-development - UpCloud development settings • aws-production - AWS production VPC setup • aws-development - AWS development environment • local-docker - Local Docker-based setup CLUSTERS: • buildkit-cluster - BuildKit for container builds • monitoring-stack - Prometheus + Grafana + Loki • security-stack - Security monitoring tools Total: 13 templates\\n```plaintext ```bash\\n# List templates by type\\nprovisioning tpl list --type taskservs\\nprovisioning tpl list --type providers\\nprovisioning tpl list --type clusters\\n```plaintext ### View Template Details ```bash\\n# Show template details\\nprovisioning tpl show production-kubernetes\\n```plaintext **Expected Output:** ```plaintext\\n📄 Template: production-kubernetes Description: Production-ready Kubernetes configuration with security hardening, network policies, and monitoring Category: taskservs\\nVersion: 1.0.0 Configuration Provided: • Kubernetes version: 1.30.0 • Security policies: Pod Security Standards (restricted) • Network policies: Default deny + allow rules • Resource quotas: Per-namespace limits • Monitoring: Prometheus integration • Logging: Loki integration • Backup: Velero configuration Requirements: • Minimum 2 servers • 4GB RAM per server • Network plugin (Cilium recommended) Location: provisioning/workspace/templates/production-kubernetes.k Example Usage: provisioning tpl apply production-kubernetes my-production\\n```plaintext ### Apply Template ```bash\\n# Apply template to your infrastructure\\nprovisioning tpl apply production-kubernetes my-production\\n```plaintext **Expected Output:** ```plaintext\\n🚀 Applying template: production-kubernetes → my-production Checking compatibility... ⏳\\n✅ Infrastructure compatible with template Merging configuration... 
⏳\\n✅ Configuration merged Files created/updated: • workspace/infra/my-production/taskservs/kubernetes.k (updated) • workspace/infra/my-production/policies/security.k (created) • workspace/infra/my-production/policies/network.k (created) • workspace/infra/my-production/monitoring/prometheus.k (created) 🎉 Template applied successfully! Next steps: 1. Review generated configuration 2. Adjust as needed 3. Deploy: provisioning t create kubernetes --infra my-production\\n```plaintext ### Validate Template Usage ```bash\\n# Validate template was applied correctly\\nprovisioning tpl validate my-production\\n```plaintext **Expected Output:** ```plaintext\\n✅ Template Validation: my-production Templates Applied: ✅ production-kubernetes (v1.0.0) ✅ production-postgres (v1.0.0) Configuration Status: ✅ All required fields present ✅ No conflicting settings ✅ Dependencies satisfied Compliance: ✅ Security policies configured ✅ Network policies configured ✅ Resource quotas set ✅ Monitoring enabled Status: ✅ Valid\\n```plaintext ## Creating Custom Templates ### Step 1: Create Template Structure ```bash\\n# Create custom template directory\\nmkdir -p provisioning/workspace/templates/my-custom-template\\n```plaintext ### Step 2: Write Template Configuration **File: `provisioning/workspace/templates/my-custom-template/config.k`** ```kcl\\n# Custom Kubernetes template with specific settings kubernetes_config = { # Version version = \\"1.30.0\\" # Custom feature gates feature_gates = { \\"GracefulNodeShutdown\\" = True \\"SeccompDefault\\" = True \\"StatefulSetAutoDeletePVC\\" = True } # Custom kubelet configuration kubelet_config = { max_pods = 110 pod_pids_limit = 4096 container_log_max_size = \\"10Mi\\" container_log_max_files = 5 } # Custom API server flags apiserver_extra_args = { \\"enable-admission-plugins\\" = \\"NodeRestriction,PodSecurity,LimitRanger\\" \\"audit-log-maxage\\" = \\"30\\" \\"audit-log-maxbackup\\" = \\"10\\" } # Custom scheduler configuration scheduler_config = { profiles = [ { name = \\"high-availability\\" plugins = { score = { enabled = [ {name = \\"NodeResourcesBalancedAllocation\\", weight = 2} {name = \\"NodeResourcesLeastAllocated\\", weight = 1} ] } } } ] } # Network configuration network = { service_cidr = \\"10.96.0.0/12\\" pod_cidr = \\"10.244.0.0/16\\" dns_domain = \\"cluster.local\\" } # Security configuration security = { pod_security_standard = \\"restricted\\" encrypt_etcd = True rotate_certificates = True }\\n}\\n```plaintext ### Step 3: Create Template Metadata **File: `provisioning/workspace/templates/my-custom-template/metadata.toml`** ```toml\\n[template]\\nname = \\"my-custom-template\\"\\nversion = \\"1.0.0\\"\\ndescription = \\"Custom Kubernetes template with enhanced security\\"\\ncategory = \\"taskservs\\"\\nauthor = \\"Your Name\\" [requirements]\\nmin_servers = 2\\nmin_memory_gb = 4\\nrequired_taskservs = [\\"containerd\\", \\"cilium\\"] [tags]\\nenvironment = [\\"production\\", \\"staging\\"]\\nfeatures = [\\"security\\", \\"monitoring\\", \\"high-availability\\"]\\n```plaintext ### Step 4: Test Custom Template ```bash\\n# List templates (should include your custom template)\\nprovisioning tpl list # Show your template\\nprovisioning tpl show my-custom-template # Apply to test infrastructure\\nprovisioning tpl apply my-custom-template my-test\\n```plaintext ## Configuration Inheritance Examples ### Example 1: Override Single Value **Core Layer** (`provisioning/extensions/taskservs/postgres/config.k`): ```kcl\\npostgres_config = { version = \\"15.5\\" port = 
5432 max_connections = 100\\n}\\n```plaintext **Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`): ```kcl\\npostgres_config = { max_connections = 500 # Override only max_connections\\n}\\n```plaintext **Result** (after layer resolution): ```kcl\\npostgres_config = { version = \\"15.5\\" # From Core port = 5432 # From Core max_connections = 500 # From Infrastructure (overridden)\\n}\\n```plaintext ### Example 2: Add Custom Configuration **Workspace Layer** (`provisioning/workspace/templates/production-postgres.k`): ```kcl\\npostgres_config = { replication = { enabled = True replicas = 2 sync_mode = \\"async\\" }\\n}\\n```plaintext **Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`): ```kcl\\npostgres_config = { replication = { sync_mode = \\"sync\\" # Override sync mode } custom_extensions = [\\"pgvector\\", \\"timescaledb\\"] # Add custom config\\n}\\n```plaintext **Result**: ```kcl\\npostgres_config = { version = \\"15.5\\" # From Core port = 5432 # From Core max_connections = 100 # From Core replication = { enabled = True # From Workspace replicas = 2 # From Workspace sync_mode = \\"sync\\" # From Infrastructure (overridden) } custom_extensions = [\\"pgvector\\", \\"timescaledb\\"] # From Infrastructure (added)\\n}\\n```plaintext ### Example 3: Environment-Specific Configuration **Workspace Layer** (`provisioning/workspace/templates/base-kubernetes.k`): ```kcl\\nkubernetes_config = { version = \\"1.30.0\\" control_plane_count = 3 worker_count = 5 resources = { control_plane = {cpu = \\"4\\", memory = \\"8Gi\\"} worker = {cpu = \\"8\\", memory = \\"16Gi\\"} }\\n}\\n```plaintext **Development Infrastructure** (`workspace/infra/my-dev/taskservs/kubernetes.k`): ```kcl\\nkubernetes_config = { control_plane_count = 1 # Smaller for dev worker_count = 2 resources = { control_plane = {cpu = \\"2\\", memory = \\"4Gi\\"} worker = {cpu = \\"2\\", memory = \\"4Gi\\"} }\\n}\\n```plaintext **Production Infrastructure** (`workspace/infra/my-prod/taskservs/kubernetes.k`): ```kcl\\nkubernetes_config = { control_plane_count = 5 # Larger for prod worker_count = 10 resources = { control_plane = {cpu = \\"8\\", memory = \\"16Gi\\"} worker = {cpu = \\"16\\", memory = \\"32Gi\\"} }\\n}\\n```plaintext ## Advanced Customization Patterns ### Pattern 1: Multi-Environment Setup Create different configurations for each environment: ```bash\\n# Create environments\\nprovisioning ws init my-app-dev\\nprovisioning ws init my-app-staging\\nprovisioning ws init my-app-prod # Apply environment-specific templates\\nprovisioning tpl apply development-kubernetes my-app-dev\\nprovisioning tpl apply staging-kubernetes my-app-staging\\nprovisioning tpl apply production-kubernetes my-app-prod # Customize each environment\\n# Edit: workspace/infra/my-app-dev/...\\n# Edit: workspace/infra/my-app-staging/...\\n# Edit: workspace/infra/my-app-prod/...\\n```plaintext ### Pattern 2: Shared Configuration Library Create reusable configuration fragments: **File: `provisioning/workspace/templates/shared/security-policies.k`** ```kcl\\nsecurity_policies = { pod_security = { enforce = \\"restricted\\" audit = \\"restricted\\" warn = \\"restricted\\" } network_policies = [ { name = \\"deny-all\\" pod_selector = {} policy_types = [\\"Ingress\\", \\"Egress\\"] }, { name = \\"allow-dns\\" pod_selector = {} egress = [ { to = [{namespace_selector = {name = \\"kube-system\\"}}] ports = [{protocol = \\"UDP\\", port = 53}] } ] } ]\\n}\\n```plaintext Import in your infrastructure: ```kcl\\nimport 
\\"../../../provisioning/workspace/templates/shared/security-policies.k\\" kubernetes_config = { version = \\"1.30.0\\" # ... other config security = security_policies # Import shared policies\\n}\\n```plaintext ### Pattern 3: Dynamic Configuration Use KCL features for dynamic configuration: ```kcl\\n# Calculate resources based on server count\\nserver_count = 5\\nreplicas_per_server = 2\\ntotal_replicas = server_count * replicas_per_server postgres_config = { version = \\"16.1\\" max_connections = total_replicas * 50 # Dynamic calculation shared_buffers = \\"${total_replicas * 128}MB\\"\\n}\\n```plaintext ### Pattern 4: Conditional Configuration ```kcl\\nenvironment = \\"production\\" # or \\"development\\" kubernetes_config = { version = \\"1.30.0\\" control_plane_count = if environment == \\"production\\" { 3 } else { 1 } worker_count = if environment == \\"production\\" { 5 } else { 2 } monitoring = { enabled = environment == \\"production\\" retention = if environment == \\"production\\" { \\"30d\\" } else { \\"7d\\" } }\\n}\\n```plaintext ## Layer Statistics ```bash\\n# Show layer system statistics\\nprovisioning lyr stats\\n```plaintext **Expected Output:** ```plaintext\\n📊 Layer System Statistics: Infrastructure Layer: • Projects: 3 • Total files: 15 • Average overrides per project: 5 Workspace Layer: • Templates: 13 • Most used: production-kubernetes (5 projects) • Custom templates: 2 Core Layer: • Taskservs: 15 • Providers: 3 • Clusters: 3 Resolution Performance: • Average resolution time: 45ms • Cache hit rate: 87% • Total resolutions: 1,250\\n```plaintext ## Customization Workflow ### Complete Customization Example ```bash\\n# 1. Create new infrastructure\\nprovisioning ws init my-custom-app # 2. Understand layer system\\nprovisioning lyr explain # 3. Discover templates\\nprovisioning tpl list --type taskservs # 4. Apply base template\\nprovisioning tpl apply production-kubernetes my-custom-app # 5. View applied configuration\\nprovisioning lyr show my-custom-app # 6. Customize (edit files)\\nprovisioning sops workspace/infra/my-custom-app/taskservs/kubernetes.k # 7. Test layer resolution\\nprovisioning lyr test kubernetes my-custom-app # 8. Validate configuration\\nprovisioning tpl validate my-custom-app\\nprovisioning val config --infra my-custom-app # 9. Deploy customized infrastructure\\nprovisioning s create --infra my-custom-app --check\\nprovisioning s create --infra my-custom-app\\nprovisioning t create kubernetes --infra my-custom-app\\n```plaintext ## Best Practices ### 1. Use Layers Correctly - **Core Layer**: Only modify for system-wide changes\\n- **Workspace Layer**: Use for organization-wide templates\\n- **Infrastructure Layer**: Use for project-specific customizations ### 2. Template Organization ```plaintext\\nprovisioning/workspace/templates/\\n├── shared/ # Shared configuration fragments\\n│ ├── security-policies.k\\n│ ├── network-policies.k\\n│ └── monitoring.k\\n├── production/ # Production templates\\n│ ├── kubernetes.k\\n│ ├── postgres.k\\n│ └── redis.k\\n└── development/ # Development templates ├── kubernetes.k └── postgres.k\\n```plaintext ### 3. 
Documentation Document your customizations: **File: `workspace/infra/my-production/README.md`** ```markdown\\n# My Production Infrastructure ## Customizations - Kubernetes: Using production template with 5 control plane nodes\\n- PostgreSQL: Configured with streaming replication\\n- Cilium: Native routing mode enabled ## Layer Overrides - `taskservs/kubernetes.k`: Control plane count (3 → 5)\\n- `taskservs/postgres.k`: Replication mode (async → sync)\\n- `network/cilium.k`: Routing mode (tunnel → native)\\n```plaintext ### 4. Version Control Keep templates and configurations in version control: ```bash\\ncd provisioning/workspace/templates/\\ngit add .\\ngit commit -m \\"Add production Kubernetes template with enhanced security\\" cd workspace/infra/my-production/\\ngit add .\\ngit commit -m \\"Configure production environment for my-production\\"\\n```plaintext ## Troubleshooting Customizations ### Issue: Configuration not applied ```bash\\n# Check layer resolution\\nprovisioning lyr show my-production # Verify file exists\\nls -la workspace/infra/my-production/taskservs/ # Test specific resolution\\nprovisioning lyr test kubernetes my-production\\n```plaintext ### Issue: Conflicting configurations ```bash\\n# Validate configuration\\nprovisioning val config --infra my-production # Show configuration merge result\\nprovisioning show config kubernetes --infra my-production\\n```plaintext ### Issue: Template not found ```bash\\n# List available templates\\nprovisioning tpl list # Check template path\\nls -la provisioning/workspace/templates/ # Refresh template cache\\nprovisioning tpl refresh\\n```plaintext ## Next Steps - **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure\\n- **[Update Guide](update-infrastructure.md)** - Update existing infrastructure\\n- **[Workflow Guide](../development/workflow.md)** - Automate with workflows\\n- **[KCL Guide](../development/KCL_MODULE_GUIDE.md)** - Learn KCL configuration language ## Quick Reference ```bash\\n# Layer system\\nprovisioning lyr explain # Explain layers\\nprovisioning lyr show # Show layer resolution\\nprovisioning lyr test # Test resolution\\nprovisioning lyr stats # Layer statistics # Templates\\nprovisioning tpl list # List all templates\\nprovisioning tpl list --type # Filter by type\\nprovisioning tpl show