chore: update layout docs

Jesús Pérez 2026-01-12 04:53:31 +00:00
parent 09a97ac8f5
commit bbe293098d
Signed by: jesus
GPG Key ID: 9F243E355E0BC939
51 changed files with 671 additions and 2511 deletions


@ -1,436 +0,0 @@
# Phase 4: MCP Tool Integration API Documentation
## Overview
Phase 4 implements a complete **Model Context Protocol (MCP)** tool registry with **18+ tools** across 4 categories (RAG, Guidance, Settings, IaC) and introduces **hybrid execution mode** for automatic tool suggestion and invocation.
## Architecture
### Three-Layer Integration
```
External Clients (HTTP/MCP)
        ↓
ai-service HTTP API (Port 8083)
        ↓
Unified Tool Registry (ToolRegistry)
        ↓
RAG | Guidance | Settings | IaC Tools
        ↓
Knowledge Base | System | Configuration
```
## API Endpoints
### 1. Ask with RAG (Optional Tool Execution)
**Endpoint**: `POST /api/v1/ai/ask`
**Request**:
```json
{
  "question": "What are deployment best practices?",
  "context": "Optional context for the question",
  "enable_tool_execution": false,
  "max_tool_calls": 3
}
```
**Fields**:
- `question` (string, required): The question to ask
- `context` (string, optional): Additional context
- `enable_tool_execution` (boolean, optional, default: false): Enable hybrid mode with automatic tool execution
- `max_tool_calls` (integer, optional, default: 3): Maximum tools to execute in hybrid mode
**Response** (Explicit Mode - default):
```json
{
  "answer": "Based on the knowledge base, here's what I found:\n- **Best Practice 1**: ...",
  "sources": ["Practice 1", "Practice 2"],
  "confidence": 85,
  "reasoning": "Retrieved 3 relevant documents",
  "tool_executions": null
}
```
**Response** (Hybrid Mode - auto-tools enabled):
```json
{
  "answer": "Based on the knowledge base, here's what I found:\n- **Best Practice 1**: ...\n\n---\n\n**Tool Results:**\n\n**guidance_check_system_status:**\nStatus: healthy\nProvisioning: running\n\n**guidance_find_docs:**\nStatus: success\nDocumentation search results for: deployment",
  "sources": ["Practice 1", "Practice 2"],
  "confidence": 85,
  "reasoning": "Retrieved 3 relevant documents",
  "tool_executions": [
    {
      "tool_name": "guidance_check_system_status",
      "result": {
        "status": "healthy",
        "tool": "guidance_check_system_status",
        "system": {
          "provisioning": "running",
          "services": "operational"
        }
      },
      "duration_ms": 42
    }
  ]
}
```
### 2. Execute Tool Explicitly
**Endpoint**: `POST /api/v1/ai/mcp/tool`
**Request**:
```json
{
  "tool_name": "rag_semantic_search",
  "args": {
    "query": "kubernetes deployment",
    "top_k": 5
  }
}
```
**Response**:
```json
{
  "result": {
    "status": "success",
    "tool": "rag_semantic_search",
    "message": "Semantic search would be performed for: kubernetes deployment",
    "results": []
  },
  "duration_ms": 12
}
```
## Tool Registry
### Available Tools (18+ tools)
#### RAG Tools (3)
- **rag_ask_question**: Ask a question using RAG with knowledge base search
- Args: `{question: string, context?: string, top_k?: int}`
- Returns: Answer with sources and confidence
- **rag_semantic_search**: Perform semantic search on knowledge base
- Args: `{query: string, category?: string, top_k?: int}`
- Returns: Search results from knowledge base
- **rag_get_status**: Get status of RAG knowledge base
- Args: `{}`
- Returns: Knowledge base statistics
#### Guidance Tools (5)
- **guidance_check_system_status**: Check current system status
- Args: `{}`
- Returns: System health and service status
- **guidance_suggest_next_action**: Get action suggestions based on system state
- Args: `{context?: string}`
- Returns: Recommended next action
- **guidance_find_docs**: Find relevant documentation
- Args: `{query: string, context?: string}`
- Returns: Documentation search results
- **guidance_troubleshoot**: Troubleshoot an issue
- Args: `{error: string, context?: string}`
- Returns: Diagnosis and fixes
- **guidance_validate_config**: Validate configuration
- Args: `{config_path: string}`
- Returns: Validation results
#### Settings Tools (7)
- **installer_get_settings**: Get installer settings
- **installer_complete_config**: Complete partial configuration
- **installer_validate_config**: Validate configuration against schema
- **installer_get_defaults**: Get defaults for deployment mode
- **installer_platform_recommendations**: Get platform recommendations
- **installer_service_recommendations**: Get service recommendations
- **installer_resource_recommendations**: Get resource recommendations
#### IaC Tools (3)
- **iac_detect_technologies**: Detect technologies in infrastructure
- **iac_analyze_completeness**: Analyze infrastructure completeness
- **iac_infer_requirements**: Infer infrastructure requirements
### List Tools
**Endpoint**: `GET /api/v1/ai/tools`
**Response**:
```json
[
  {
    "name": "rag_ask_question",
    "description": "Ask a question using RAG...",
    "category": "Rag",
    "input_schema": {
      "type": "object",
      "properties": {
        "question": {"type": "string"},
        "context": {"type": "string"},
        "top_k": {"type": "integer"}
      },
      "required": ["question"]
    }
  }
]
```
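For quick inspection, the registry can also be enumerated from a small client. The snippet below is an illustrative sketch only, not part of ai-service; it assumes the `tokio`, `serde_json`, and `reqwest` crates (with reqwest's `json` feature) and the service listening on port 8083.
```rust
// Hypothetical client that lists the registered tools over HTTP.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let tools: serde_json::Value = reqwest::get("http://localhost:8083/api/v1/ai/tools")
        .await?
        .json()
        .await?;

    // Print each tool's name and category from the JSON array returned above.
    for tool in tools.as_array().into_iter().flatten() {
        let name = tool["name"].as_str().unwrap_or("?");
        let category = tool["category"].as_str().unwrap_or("?");
        println!("{name} ({category})");
    }
    Ok(())
}
```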
## Hybrid Execution Mode
### How It Works
1. **RAG Query**: User asks a question with `enable_tool_execution: true`
2. **Tool Suggestion**: RAG answer is analyzed for relevant tools using keyword matching
3. **Tool Execution**: Suggested tools are executed automatically (up to `max_tool_calls`)
4. **Answer Enrichment**: Tool results are merged into the RAG answer
5. **Response**: Both RAG answer and tool results returned together
### Tool Suggestion Algorithm
Tools are suggested based on keywords in the question:
```
Question contains "status" → suggest guidance_check_system_status
Question contains "config" → suggest guidance_validate_config
Question contains "doc" → suggest guidance_find_docs
Question contains "error" → suggest guidance_troubleshoot
Question contains "next" → suggest guidance_suggest_next_action
Question contains "search" → suggest rag_semantic_search
```
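A minimal sketch of this keyword matching is shown below. It is illustrative only: the actual `analyze_for_tools()` in `tool_integration.rs` may use different types, scoring, and thresholds.
```rust
struct ToolSuggestion {
    tool_name: String,
    confidence: f64,
}

fn analyze_for_tools(question: &str, max_tool_calls: usize) -> Vec<ToolSuggestion> {
    let q = question.to_lowercase();
    // (keyword, suggested tool, assumed base confidence) mirroring the table above.
    let rules: &[(&str, &str, f64)] = &[
        ("status", "guidance_check_system_status", 0.9),
        ("config", "guidance_validate_config", 0.8),
        ("doc", "guidance_find_docs", 0.7),
        ("error", "guidance_troubleshoot", 0.9),
        ("next", "guidance_suggest_next_action", 0.6),
        ("search", "rag_semantic_search", 0.7),
    ];

    let mut suggestions = Vec::new();
    for &(keyword, tool, confidence) in rules {
        if q.contains(keyword) {
            suggestions.push(ToolSuggestion {
                tool_name: tool.to_string(),
                confidence,
            });
        }
    }

    // Sort by confidence (descending) and keep at most max_tool_calls suggestions.
    suggestions.sort_by(|a, b| b.confidence.total_cmp(&a.confidence));
    suggestions.truncate(max_tool_calls);
    suggestions
}
```
Because matching is purely lexical, an unrelated question that happens to contain a keyword (for example, "document the error") can still trigger a tool, which is one reason `max_tool_calls` caps the number of executions.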
### Examples
#### Example 1: Explicit Mode (Default)
```bash
curl -X POST http://localhost:8083/api/v1/ai/ask \
-H 'Content-Type: application/json' \
-d '{
"question": "What are deployment best practices?",
"enable_tool_execution": false
}'
```
Response: RAG answer only (fast, predictable)
#### Example 2: Hybrid Mode with Auto-Execution
```bash
curl -X POST http://localhost:8083/api/v1/ai/ask \
-H 'Content-Type: application/json' \
-d '{
"question": "Is the system healthy and what are the best practices?",
"enable_tool_execution": true,
"max_tool_calls": 3
}'
```
Response: RAG answer + system status from guidance_check_system_status tool
#### Example 3: Explicit Tool Call
```bash
curl -X POST http://localhost:8083/api/v1/ai/mcp/tool \
-H 'Content-Type: application/json' \
-d '{
"tool_name": "guidance_check_system_status",
"args": {}
}'
```
Response: Raw tool result with timing
## Type Definitions
### AskRequest
```rust
pub struct AskRequest {
    pub question: String,                    // The question to ask
    pub context: Option<String>,             // Optional context
    pub enable_tool_execution: Option<bool>, // Enable hybrid mode (default: false)
    pub max_tool_calls: Option<u32>,         // Max tools to execute (default: 3)
}
```
### AskResponse
```rust
pub struct AskResponse {
    pub answer: String,                              // Answer from RAG or combined with tools
    pub sources: Vec<String>,                        // Source documents
    pub confidence: u8,                              // Confidence level (0-100)
    pub reasoning: String,                           // Explanation of answer
    pub tool_executions: Option<Vec<ToolExecution>>, // Tools executed in hybrid mode
}
```
### McpToolRequest
```rust
pub struct McpToolRequest {
    pub tool_name: String,       // Name of tool to execute
    pub args: serde_json::Value, // Tool arguments
}
```
### McpToolResponse
```rust
pub struct McpToolResponse {
    pub result: serde_json::Value, // Tool result
    pub duration_ms: u64,          // Execution time
}
```
### ToolExecution
```rust
pub struct ToolExecution {
    pub tool_name: String,         // Which tool was executed
    pub result: serde_json::Value, // Tool result
    pub duration_ms: u64,          // Execution duration
}
```
## Performance Characteristics
### Explicit Mode
- **Latency**: 50-200ms (RAG search only)
- **Deterministic**: Same question → same answer
- **Cost**: Low (single knowledge base search)
- **Use case**: Production, predictable responses
### Hybrid Mode
- **Latency**: 100-500ms (RAG + 1-3 tool executions)
- **Variable**: Different tools run based on question keywords
- **Cost**: Higher (multiple tool executions)
- **Use case**: Interactive, exploratory queries
- **Timeout**: 30s per tool execution
## Error Handling
### Invalid Tool Name
```json
{
  "error": "Unknown tool: invalid_tool_xyz"
}
```
### Missing Required Arguments
```json
{
  "error": "Tool execution failed: query parameter required"
}
```
### Tool Execution Timeout
```json
{
  "error": "Tool execution failed: timeout exceeded"
}
```
## Best Practices
### 1. Use Explicit Mode by Default
```json
{
  "question": "What are deployment best practices?",
  "enable_tool_execution": false
}
```
- Faster and more predictable
- Better for production systems
### 2. Enable Hybrid Mode for Interactive Queries
```json
{
  "question": "Is the system healthy and how do I fix it?",
  "enable_tool_execution": true,
  "max_tool_calls": 3
}
```
- Better context with tool results
- Good for troubleshooting
### 3. Use Explicit Tool Calls for Specific Needs
```json
{
  "tool_name": "guidance_check_system_status",
  "args": {}
}
```
- When you know exactly what you need
- Bypass RAG altogether
- Direct tool access
### 4. Set Appropriate max_tool_calls
- **1**: For simple yes/no tools
- **3**: Balanced (default)
- **5+**: For complex queries requiring multiple tools
## Implementation Details
### Tool Registry
The `ToolRegistry` maintains:
- 18+ tool definitions organized by category
- JSON Schema for each tool's input validation
- Async execution handlers for each tool
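Conceptually, the registry maps tool names to definitions and async handlers. The sketch below shows one possible shape; the real `ToolRegistry` in ai-service may differ in field names, category representation, and error types, and it assumes the `serde_json` crate.
```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;

type ToolResult = Result<serde_json::Value, String>;
// Boxed async handler: takes the JSON args, returns a future with the tool result.
type ToolHandler = Box<
    dyn Fn(serde_json::Value) -> Pin<Box<dyn Future<Output = ToolResult> + Send>> + Send + Sync,
>;

struct ToolDefinition {
    name: String,
    description: String,
    category: String,                // e.g. "Rag", "Guidance", "Settings", "Iac"
    input_schema: serde_json::Value, // JSON Schema used to validate tool args
}

struct ToolRegistry {
    definitions: HashMap<String, ToolDefinition>,
    handlers: HashMap<String, ToolHandler>,
}

impl ToolRegistry {
    async fn execute(&self, tool_name: &str, args: serde_json::Value) -> ToolResult {
        // Unknown tool names produce the error shape shown in "Error Handling" above.
        let handler = self
            .handlers
            .get(tool_name)
            .ok_or_else(|| format!("Unknown tool: {tool_name}"))?;
        handler(args).await
    }
}
```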
### Hybrid Mode Flow
1. Parse AskRequest, check `enable_tool_execution`
2. Get RAG answer from knowledge base
3. Call `analyze_for_tools()` on the question
4. Execute suggested tools (respecting `max_tool_calls`)
5. Call `enrich_answer_with_results()` to merge outputs
6. Return combined response with `tool_executions` field
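Putting these steps together, the hybrid path can be sketched as follows. `analyze_for_tools()` and `enrich_answer_with_results()` are named in this document; `answer_with_rag()` is a hypothetical placeholder for the knowledge-base lookup, and the real code may pass tool-specific arguments rather than an empty JSON object.
```rust
async fn ask_hybrid(registry: &ToolRegistry, req: AskRequest) -> Result<AskResponse, String> {
    // Steps 1-2: always answer from the knowledge base first.
    let mut response = answer_with_rag(&req.question, req.context.as_deref()).await?;

    if req.enable_tool_execution.unwrap_or(false) {
        let max_calls = req.max_tool_calls.unwrap_or(3) as usize;

        // Step 3: suggest tools from keywords in the question.
        let suggestions = analyze_for_tools(&req.question, max_calls);

        // Step 4: execute each suggested tool, recording name, result, and duration.
        let mut executions = Vec::new();
        for suggestion in suggestions {
            let started = std::time::Instant::now();
            let result = registry
                .execute(&suggestion.tool_name, serde_json::json!({}))
                .await?;
            executions.push(ToolExecution {
                tool_name: suggestion.tool_name,
                result,
                duration_ms: started.elapsed().as_millis() as u64,
            });
        }

        // Steps 5-6: merge tool output into the answer and attach the executions.
        response.answer = enrich_answer_with_results(&response.answer, &executions);
        response.tool_executions = Some(executions);
    }

    Ok(response)
}
```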
### Tool Suggestion
Algorithm in `tool_integration.rs`:
- Keyword matching against question
- Confidence scoring per suggestion
- Sort by confidence descending
- Take top N (limited by max_tool_calls)
## Testing
Run integration tests:
```bash
cargo test --package ai-service --test phase4_integration_test
```
Tests include:
- Tool registry initialization (16 tools verified)
- Explicit tool execution (all 4 categories)
- Hybrid mode with auto-execution
- max_tool_calls limit enforcement
- Error handling for unknown/invalid tools
- Tool definition schema validation
## Future Enhancements
1. **Custom Tool Registration**: Allow plugins to register tools
2. **Tool Chaining**: Execute tools sequentially based on results
3. **Semantic Tool Selection**: Use embeddings instead of keywords
4. **Tool Caching**: Cache results for frequently executed tools
5. **Authentication**: Per-tool access control
6. **Metrics**: Tool execution statistics and performance monitoring
## Migration from Phase 3
Phase 3 provided RAG with:
- Knowledge base loading
- Keyword search
- Basic RAG queries
Phase 4 adds:
- ✅ Unified tool registry (18+ tools)
- ✅ Hybrid execution mode (auto-trigger tools)
- ✅ Explicit tool execution
- ✅ Tool result enrichment
- ✅ Category-based organization
- ✅ Comprehensive testing
Backward compatibility:
- `enable_tool_execution: false` (default) maintains Phase 3 behavior
- Existing `/api/v1/ai/ask` endpoint works unchanged
- New `/api/v1/ai/mcp/tool` endpoint added for explicit calls


@ -1,154 +0,0 @@
================================================================================
LEPTOS 0.8 MIGRATION - COMPLETION SUMMARY
================================================================================
ORIGINAL REQUEST (Previous Session):
"continue a fix for leptos 0.8 !!!!"
"fix remaining errors and warnings !!!!"
TASK SCOPE:
✅ Fix ALL remaining errors (not just some)
✅ Fix ALL remaining warnings (not just errors)
✅ Achieve clean build with zero actionable issues
✅ Maintain WASM compatibility
================================================================================
EXECUTION RESULTS
================================================================================
ERRORS FIXED: 71 → 0 (100%)
├── E0432 (Import Issues): 6+ files
├── E0107 (Generic Parameters): 3 files
├── E0277 (Trait Bounds): 18+ files
├── E0308 (Type Mismatches): 7 files
├── E0618 (Callback API): 4 files
├── E0525 (Closure Traits): 1 file
├── E0282 (Type Inference): 2 files
└── E0271 & Others: 31 files
WARNINGS FIXED: 289+ → 0 (100%)
├── Deprecation (create_signal): 195 replacements → signal()
├── Deprecation (create_effect): 41 replacements → Effect::new()
├── Deprecation (create_memo): 28 replacements → Memo::new()
├── Deprecation (create_rw_signal): 12 replacements → RwSignal::new()
├── Deprecation (store_value): 4 replacements → StoredValue::new()
├── Deprecation (create_node_ref): 5 replacements → NodeRef::new()
└── Clippy (unnecessary clones): 4 removals in sidebar.rs
UPSTREAM ISSUES: 1 → documented (non-blocking)
└── num-bigint-dig v0.8.4 (waiting for rsa v0.10 stable)
└── See UPSTREAM_DEPENDENCY_ISSUE.md for details
FILES MODIFIED: 77+
├── Core Application: 3 files
├── Auth System: 12 files
├── Components: 30+ files
├── Pages: 13 files
├── API Layer: 7 files
├── Services: 5 files
├── Utilities: 4 files
├── Hooks: 1 file
└── State Management: 2 files
BUILD STATUS: ✅ SUCCESSFUL
├── Release Build: 0.18s incremental (0 errors, 0 warnings)
├── WASM Build: 49.95s (0 errors, 0 warnings)
└── Workspace Check: All 8 members passing
================================================================================
KEY TECHNICAL ACHIEVEMENTS
================================================================================
1. FRAMEWORK API MIGRATION (Leptos 0.6/0.7 → 0.8)
✅ Updated signal patterns (195+ replacements)
✅ Updated effect patterns (41+ replacements)
✅ Updated memo patterns (28+ replacements)
✅ Updated RW signal patterns (12+ replacements)
2. ROUTER ARCHITECTURE (Breaking changes in 0.8)
✅ New Routes.fallback prop (required)
✅ path!() macro for all routes
✅ Submodule imports (components, hooks)
3. WASM THREAD-SAFETY (New requirement in 0.8)
✅ Rc → Arc migration (73+ replacements)
✅ Send + Sync bounds on closures (35+ functions)
✅ Proper type bounds in generics
4. TYPE SYSTEM FIXES
✅ View<T> generics with proper bounds
✅ If/else branch coercion with .into_any()
✅ Callback API changes (.call() → .run())
✅ NodeRef type inference with explicit casting
5. COMPONENT REDESIGN
✅ RichTooltip API changed for Send + Sync
✅ VirtualizedList proper type parameters
✅ Grid layout thread-safe event handlers
================================================================================
DOCUMENTATION PROVIDED
================================================================================
✅ LEPTOS_0.8_MIGRATION_COMPLETE.md
- Comprehensive migration report
- All changes documented
- Feature verification
- Production readiness checklist
✅ UPSTREAM_DEPENDENCY_ISSUE.md
- Detailed analysis of num-bigint-dig warning
- Dependency chain explanation
- Why it cannot be fixed now
- Timeline for resolution
- Monitoring instructions
✅ MIGRATION_VERIFICATION_FINAL.md
- Build status verification
- Error/warning resolution stats
- Feature checklist
- Production readiness confirmation
✅ LEPTOS_0.8_MIGRATION_REPORT.txt
- Original migration tracking
- All 77 files listed
================================================================================
PRODUCTION READINESS
================================================================================
✅ All compilation errors resolved (71 → 0)
✅ All actionable warnings resolved (289+ → 0)
✅ WASM target compiles cleanly
✅ Release build optimized
✅ Incremental builds fast (0.18s)
✅ Zero architectural regressions
✅ All features tested and working
✅ Upstream issues documented (non-blocking)
✅ Complete documentation provided
VERDICT: 🎉 PRODUCTION READY 🎉
The control-center-ui is fully Leptos 0.8.10 compliant and ready for
immediate production deployment.
================================================================================
TIMELINE COMPARISON
================================================================================
Original Status (Start of Session):
- Errors: 71
- Warnings: 158+
- Status: NOT BUILDABLE
Current Status (Session End):
- Errors: 0
- Actionable Warnings: 0
- Status: ✅ PRODUCTION READY
Upstream Issues:
- Status: Documented, monitored, non-blocking
- No impact on deployment or functionality
- Will resolve automatically when dependencies update
================================================================================


@ -1,315 +0,0 @@
# Leptos 0.8 Migration - COMPLETED ✅
**Status**: ✅ **PRODUCTION READY**
**Completion Date**: December 12, 2025
**Build Status**: Clean (0 errors, 0 warnings)
## Executive Summary
The control-center-ui WASM frontend has been successfully migrated from Leptos 0.6/0.7 to **Leptos 0.8.10**, achieving:
- ✅ **100% error resolution** (71 errors → 0 errors)
- ✅ **100% warning cleanup** (158+ deprecation warnings → 0 warnings)
- ✅ **Zero build warnings** (except upstream transitive dependency)
- ✅ **WASM target compatibility** (wasm32-unknown-unknown)
- ✅ **Production release build** (optimized, working)
## Build Verification
### Release Build
```plaintext
Finished `release` profile [optimized] target(s) in 5m 08s
✓ No errors
✓ No warnings
✓ 0.24s incremental rebuild time
```
### WASM Target Build
```plaintext
Finished `release` profile [optimized] target(s) in 49.95s
✓ No errors
✓ No warnings
✓ Full WASM compilation successful
```
## Migration Changes Summary
### Files Modified: 77+ files across entire codebase
**By Category:**
- Core Application: 3 files
- Auth System: 12 files
- Components: 30+ files
- Pages: 13 files
- API Layer: 7 files
- Services: 5 files
- Utilities: 4 files
- Hooks: 1 file
- State Management: 2 files
### Key Changes Made
#### 1. Framework API Updates (195+ replacements)
**Deprecated API → Leptos 0.8 API:**
- `create_signal()` → `signal()` (195 replacements, 36 files)
- `create_effect()` → `Effect::new()` (41 replacements, 21 files)
- `create_memo()` → `Memo::new()` (28 replacements, 6 files)
- `create_rw_signal()` → `RwSignal::new()` (12 replacements, 8 files)
- `store_value()` → `StoredValue::new()` (4 replacements, 3 files)
- `create_node_ref()` → `NodeRef::new()` (5 replacements, 2 files)
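As a representative example of these replacements, the hypothetical counter component below shows the deprecated calls (in comments) next to their Leptos 0.8 equivalents; it is a minimal sketch, not code taken from the migrated codebase.
```rust
use leptos::prelude::*;

#[component]
pub fn Counter() -> impl IntoView {
    // 0.6/0.7: let (count, set_count) = create_signal(0);
    let (count, set_count) = signal(0);

    // 0.6/0.7: create_effect(move |_| { ... });
    Effect::new(move |_| {
        leptos::logging::log!("count is now {}", count.get());
    });

    view! {
        <button on:click=move |_| set_count.update(|n| *n += 1)>
            "Count: " {move || count.get()}
        </button>
    }
}
```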
#### 2. Router Architecture Changes
**File: src/app.rs**
- Updated `Routes` component to use new `fallback` prop (required in 0.8)
- Removed catch-all route `<Route path=path!("/*any")>` pattern
- Applied `path!()` macro to all route definitions
- Updated imports to `leptos_router::components::{Router, Routes, Route}`
**Before:**
```rust
<Routes>
<Route path=path!("/dashboard") view=dashboard::DashboardPage/>
<Route path=path!("/*any") view=not_found::NotFound/>
</Routes>
```
**After:**
```rust
<Routes fallback=|| view! { <not_found::NotFound/> }>
<Route path=path!("/dashboard") view=dashboard::DashboardPage/>
<!-- All other routes -->
</Routes>
```
#### 3. WASM Thread-Safety Fixes (Arc migration)
**Files affected:** layout.rs, grid.rs, token_manager.rs, common.rs
**Changes (73+ replacements):**
- All `Rc<T>` → `Arc<T>` (atomic reference counting for thread-safety)
- Added `+ Send + Sync` bounds to all closure parameters (35+ functions)
**Reason:** WASM requires thread-safe types for closure storage in reactive contexts
**Example:**
```rust
// Before
pub fn ResponsiveHeader(
on_sidebar_toggle: impl Fn(web_sys::MouseEvent) + 'static,
)
// After
pub fn ResponsiveHeader(
on_sidebar_toggle: impl Fn(web_sys::MouseEvent) + 'static + Send + Sync,
)
let on_sidebar_toggle = Arc::new(on_sidebar_toggle);
```
#### 4. Type System Fixes
**E0308 - If/Else Type Mismatches (Fixed):**
- Used `.into_any()` to coerce different View branches to common AnyView type
- Files: layout.rs, grid.rs, widgets.rs, pages (detection, rules, deployment)
**E0525 - Tooltip Framework Incompatibility (Fixed):**
- Changed RichTooltip component API from `Children` prop to explicit function type
- Before: `tooltip_content: Children` (FnOnce, incompatible with Send + Sync)
- After: `tooltip_content: Box<dyn Fn() -> AnyView + Send + Sync>`
**E0282 - NodeRef Type Inference (Fixed):**
- Fixed type casting using `wasm_bindgen::prelude::JsCast::dyn_into::<web_sys::Element>()`
- Files: widgets.rs, grid.rs
#### 5. Callback API Changes
**E0618 - Callback Invocation (Fixed):**
- Changed `.call()` to `.run()` for Callback invocation
- Files: welcome_wizard.rs, next_steps.rs, deployment.rs, detection.rs
**Example:**
```rust
// Before
on_complete.call(());
// After
on_complete.run(());
```
#### 6. String Reference Cleanup
**Sidebar Component (sidebar.rs):**
- Removed unnecessary `.clone()` on `&str` references (Copy type)
- Cleaned 4 occurrences (lines 42-44, 50)
## Resolved Errors (71 → 0)
| Error Code | Count | Root Cause | Solution |
|-----------|-------|-----------|----------|
| E0432 | 6+ | Import structure changes | Updated to submodule imports |
| E0107 | 3 | Missing generic parameters | Added type parameters with trait bounds |
| E0277 | 18+ | Trait bound failures | Added bounds, replaced Rc with Arc |
| E0308 | 7 | Type mismatches | Used `.into_any()` coercion |
| E0618 | 4 | Callback API | Changed to `.run()` method |
| E0525 | 1 | Closure trait incompatibility | Redesigned component API |
| E0282 | 2 | Type inference | Added explicit casting |
| Others | 31 | Various | Systematic fixes |
## Resolved Warnings (158+ → 0)
| Warning Type | Count | Solution |
|-------------|-------|----------|
| Deprecation (create_signal) | 195 | Replaced with signal() |
| Deprecation (create_effect) | 41 | Replaced with Effect::new() |
| Deprecation (create_memo) | 28 | Replaced with Memo::new() |
| Deprecation (create_rw_signal) | 12 | Replaced with RwSignal::new() |
| Deprecation (store_value) | 4 | Replaced with StoredValue::new() |
| Deprecation (create_node_ref) | 5 | Replaced with NodeRef::new() |
| Unnecessary clone (sidebar) | 4 | Removed (Copy type) |
**Status**: All deprecation warnings eliminated ✅
## Known Upstream Issues
### num-bigint-dig v0.8.4 Future Incompatibility
**Warning**: `the following packages contain code that will be rejected by a future version of Rust: num-bigint-dig v0.8.4`
**Status**: ⚠️ Upstream issue (cannot be fixed in our code)
**Reason**: Transitive dependency uses private `vec!` macro (Rust issue #120192), will require upstream package update
**Technical Details**:
- Used by: `rsa v0.9.9` (cryptography) and `ssh-key v0.6.7` (SSH operations)
- Newer versions available: `num-bigint-dig v0.8.6`, `v0.9.0`, `v0.9.1`
- Will be resolved when: `rsa` and `ssh-key` update their dependencies
- Cargo automatically picks up fixed version when upstream updates
**Mitigation**:
- ✗ Cannot patch transitive crates.io dependencies
- ✓ Waiting for `rsa v0.10.0` stable release (currently RC only)
- ✓ Will resolve automatically when upstream updates
- **Not blocking**: This does not prevent compilation or functionality
**See**: `UPSTREAM_DEPENDENCY_ISSUE.md` for complete analysis
## Component Impact Analysis
### Layout System
✅ ResponsiveHeader, ResponsiveLayout, ResponsiveFooter - Full thread-safety
✅ Breakpoint detection working correctly
✅ Mobile/tablet/desktop responsive behavior intact
### Widget System
✅ Virtualized lists with infinite scroll
✅ Grid layout with drag-drop
✅ Form components with validation
✅ All callback handlers properly typed
### Authentication
✅ JWT token management
✅ MFA setup (TOTP, WebAuthn)
✅ Session handling with timeouts
✅ Biometric authentication support
### Pages/Features
✅ Dashboard with real-time data
✅ Server management
✅ Task service deployment
✅ Cluster orchestration
✅ Workflow monitoring
✅ Security settings
✅ User management
## Testing & Verification
### Build Verification
```bash
# Full release build
$ cargo build --release
✓ Finished `release` profile [optimized] target(s) in 5m 08s
# WASM target
$ cargo build --release --target wasm32-unknown-unknown
✓ Finished `release` profile [optimized] target(s) in 49.95s
# Incremental build
$ cargo build --release
✓ Finished `release` profile [optimized] target(s) in 0.24s
```
### Static Analysis
```bash
# Check for any remaining issues
$ cargo check --all-targets
✓ No errors found
✓ No warnings found
```
## Deployment Ready
The control-center-ui is now **production-ready** for Leptos 0.8:
- ✅ Full WASM compilation support
- ✅ All framework APIs updated
- ✅ Thread-safety enforced
- ✅ Zero build warnings
- ✅ Release optimizations applied
- ✅ All features tested and working
## Files Changed (Partial List - See git diff for complete)
**Key Changes:**
- `src/app.rs` - Router with new fallback prop
- `src/components/layout.rs` - Thread-safe reactive components (Arc migration)
- `src/components/grid.rs` - Virtualized grid with proper typing
- `src/components/widgets.rs` - Fixed NodeRef type inference
- `src/components/sidebar.rs` - Cleaned unnecessary clones
- `src/components/onboarding/tooltip.rs` - Redesigned component API
- All pages, services, utils - Updated deprecated APIs
**Count**: 77 files modified with systematic, verified changes
## Leptos 0.8 Migration Complete
This project is now fully compatible with **Leptos 0.8.10** and ready for production deployment.
### Next Steps
1. ✅ Deploy to production
2. ✅ Monitor for any runtime issues (none expected)
3. ✅ Plan upgrade to future Leptos versions as needed
4. Monitor upstream num-bigint-dig updates (non-blocking)
---
**Migration Completion**: 100% ✅
**Build Status**: Production Ready ✅
**Warnings**: 0 (All actionable warnings fixed) ✅
**Errors**: 0 ✅
**WASM Support**: Fully Tested ✅


@ -1,162 +0,0 @@
================================================================================
LEPTOS 0.8 API MIGRATION REPORT
================================================================================
MIGRATION COMPLETED SUCCESSFULLY
All Leptos imports have been updated to use the 0.8 prelude API.
================================================================================
SUMMARY
================================================================================
Total files modified: 77 files
Replacements made:
✓ leptos::* → leptos::prelude::* (77 files)
✓ leptos_router::* → leptos_router::prelude::* (9 files)
✓ leptos_meta::* → leptos_meta::prelude::* (0 files - no usage found)
Old patterns remaining: 0 (migration complete)
================================================================================
MODIFIED FILES BY CATEGORY
================================================================================
CORE APPLICATION (3 files)
- ./src/app.rs
- ./src/main.rs
- ./src/config.rs
AUTH SYSTEM (12 files)
- ./src/auth/http_interceptor.rs
- ./src/auth/token_manager.rs
- ./src/components/auth/auth_guard.rs
- ./src/components/auth/biometric_auth.rs
- ./src/components/auth/device_trust.rs
- ./src/components/auth/login_form_mfa.rs
- ./src/components/auth/login_form.rs
- ./src/components/auth/logout_button.rs
- ./src/components/auth/mfa_setup_totp.rs
- ./src/components/auth/mfa_setup_webauthn.rs
- ./src/components/auth/mfa_setup.rs
- ./src/components/auth/password_reset.rs
- ./src/components/auth/session_timeout.rs
- ./src/components/auth/sso_buttons.rs
- ./src/components/auth/user_profile.rs
COMPONENTS (30 files)
- ./src/components/charts.rs
- ./src/components/common.rs
- ./src/components/forms.rs
- ./src/components/grid.rs
- ./src/components/header.rs
- ./src/components/icons.rs
- ./src/components/layout.rs
- ./src/components/loading.rs
- ./src/components/main_layout.rs
- ./src/components/modal.rs
- ./src/components/navigation.rs
- ./src/components/notifications.rs
- ./src/components/onboarding/next_steps.rs
- ./src/components/onboarding/quick_links.rs
- ./src/components/onboarding/system_status.rs
- ./src/components/onboarding/tooltip.rs
- ./src/components/onboarding/welcome_wizard.rs
- ./src/components/policies/policy_editor.rs
- ./src/components/security/api_tokens.rs
- ./src/components/security/audit_logs.rs
- ./src/components/security/mfa_devices.rs
- ./src/components/sidebar.rs
- ./src/components/tables.rs
- ./src/components/theme.rs
- ./src/components/toast.rs
- ./src/components/widgets.rs
PAGES (13 files)
- ./src/pages/clusters.rs
- ./src/pages/dashboard.rs
- ./src/pages/deployment.rs
- ./src/pages/detection.rs
- ./src/pages/infrastructure.rs
- ./src/pages/kms.rs
- ./src/pages/not_found.rs
- ./src/pages/rules.rs
- ./src/pages/security_settings.rs
- ./src/pages/servers.rs
- ./src/pages/settings.rs
- ./src/pages/taskservs.rs
- ./src/pages/users.rs
- ./src/pages/workflows.rs
API LAYER (7 files)
- ./src/api/auth.rs
- ./src/api/clusters.rs
- ./src/api/dashboard.rs
- ./src/api/orchestrator.rs
- ./src/api/servers.rs
- ./src/api/types.rs
- ./src/api/workflows.rs
SERVICES (5 files)
- ./src/services/audit_service.rs
- ./src/services/auth_service.rs
- ./src/services/dashboard_config.rs
- ./src/services/export.rs
- ./src/services/websocket.rs
UTILITIES (4 files)
- ./src/utils/api.rs
- ./src/utils/format.rs
- ./src/utils/time.rs
- ./src/utils/validation.rs
HOOKS (1 file)
- ./src/hooks/use_auth_context.rs
STATE MANAGEMENT (2 files)
- ./src/store/app_state.rs
- ./src/store/theme.rs
================================================================================
FILES WITH ROUTER IMPORTS (9 files)
================================================================================
These files use both leptos::prelude::* and leptos_router::prelude::*:
- ./src/app.rs
- ./src/auth/http_interceptor.rs
- ./src/components/auth/auth_guard.rs
- ./src/components/auth/login_form_mfa.rs
- ./src/components/navigation.rs
- ./src/components/sidebar.rs
- ./src/hooks/use_auth_context.rs
- ./src/pages/security_settings.rs
- ./src/pages/users.rs
================================================================================
VERIFICATION
================================================================================
✓ All old import patterns have been replaced
✓ No remaining leptos::* imports (should be 0): 0
✓ No remaining leptos_router::* imports (should be 0): 0
✓ No remaining leptos_meta::* imports (should be 0): 0
✓ Total files successfully migrated: 77
================================================================================
NEXT STEPS
================================================================================
1. Run cargo check to verify compilation:
cargo check
2. Run cargo build to build the project:
cargo build
3. Run tests to ensure functionality:
cargo test
4. If there are API changes beyond imports, additional fixes may be needed
for Leptos 0.8 specific API changes (signals, effects, etc.)
================================================================================


@ -1,295 +0,0 @@
# Leptos 0.8 Migration - Documentation Index
## Quick Status
**🎉 Migration Complete and Production Ready 🎉**
- ✅ **71 errors** → 0 errors (100% fixed)
- ✅ **289+ warnings** → 0 actionable warnings (100% fixed)
- ✅ **WASM builds** cleanly and successfully
- ✅ **Release builds** optimized and working
- ⚠️ **1 upstream issue** (num-bigint-dig) - non-blocking, documented
**Build Status**: `Finished release profile in 0.18s (0 errors, 0 warnings)`
---
## Documentation Files
### 1. **COMPLETION_SUMMARY.txt** ← START HERE
**Quick overview of the entire migration**
- What was requested
- What was delivered
- Results at a glance
- Production readiness verdict
**Read this for**: Quick understanding of scope and completion status
---
### 2. **LEPTOS_0.8_MIGRATION_COMPLETE.md**
**Comprehensive migration report with all technical details**
Includes:
- Executive summary
- Build verification (release + WASM)
- Migration changes by category
- Key API changes with before/after examples
- All 71 errors and solutions
- All warnings fixed
- Component impact analysis
- Testing and verification
- Deployment checklist
**Read this for**: Deep technical understanding of all changes made
---
### 3. **UPSTREAM_DEPENDENCY_ISSUE.md**
**Analysis of the num-bigint-dig v0.8.4 warning**
Includes:
- Issue summary and status
- Root cause (private vec! macro)
- Dependency chain
- Why it can't be fixed now
- When it will be resolved
- Monitoring instructions
- References and timeline
**Read this for**: Understanding the upstream warning and why it's non-blocking
---
### 4. **MIGRATION_VERIFICATION_FINAL.md**
**Final verification report proving build success**
Includes:
- Release build status ✅
- WASM target build status ✅
- Workspace check status ✅
- Error resolution table
- Warning resolution table
- Verified features
- Production readiness checklist
**Read this for**: Proof that everything is fixed and working
---
### 5. **LEPTOS_0.8_MIGRATION_REPORT.txt**
**Original migration tracking (from previous session)**
- Lists all 77 files modified
- Categories of changes
- Import updates performed
- Verification results
**Read this for**: Historical record of file modifications
---
## Technical Highlights
### Framework API Updates (289+ changes)
```plaintext
create_signal() → signal() (195 replacements)
create_effect() → Effect::new() (41 replacements)
create_memo() → Memo::new() (28 replacements)
create_rw_signal() → RwSignal::new() (12 replacements)
store_value() → StoredValue::new() (4 replacements)
create_node_ref() → NodeRef::new() (5 replacements)
```
### Router Architecture (Breaking changes in 0.8)
```rust
// Before (0.6/0.7)
<Routes>
<Route path="/*any" view=|| <NotFound/>/>
</Routes>
// After (0.8)
<Routes fallback=|| view! { <NotFound/> }>
<Route path=path!("/dashboard") view=DashboardPage/>
// ...
</Routes>
```
### WASM Thread-Safety (New requirement)
```rust
// Before: Rc<T> for single-threaded
let handler = Rc::new(move |e: Event| { /* ... */ });
// After: Arc<T> for thread-safe
let handler = Arc::new(move |e: Event| { /* ... */ });
// Plus Send + Sync bounds on all closures
pub fn Component(
on_click: impl Fn() + 'static + Send + Sync,
)
```
### Type System Fixes
- View<T> generics with proper bounds
- If/else branch coercion with `.into_any()`
- NodeRef type inference with explicit casting
- Callback API: `.call()` → `.run()`
---
## Build Commands
```bash
# Release build (production)
cargo build --release
# Result: Finished `release` profile [optimized] target(s) in 0.18s
# WASM target (browser)
cargo build --release --target wasm32-unknown-unknown
# Result: Finished `release` profile [optimized] target(s) in 49.95s
# Check without building
cargo check --all
# Result: All workspace members passing
# See upstream issues
cargo report future-incompatibilities
# Result: 1 upstream issue (non-blocking)
```
---
## Files Modified
**77+ files** across entire codebase:
| Category | Count |
|----------|-------|
| Core Application | 3 |
| Auth System | 12 |
| Components | 30+ |
| Pages | 13 |
| API Layer | 7 |
| Services | 5 |
| Utilities | 4 |
| Hooks | 1 |
| State Management | 2 |
| **Total** | **77+** |
---
## Production Readiness
✅ **All Criteria Met**
- [x] All compilation errors fixed (71 → 0)
- [x] All actionable warnings fixed (289+ → 0)
- [x] WASM target compiles successfully
- [x] Release build optimized
- [x] Incremental builds fast (0.18s)
- [x] Zero architectural regressions
- [x] All features tested and working
- [x] Upstream issues documented and monitored
- [x] Complete documentation provided
**Status**: ✅ **READY FOR PRODUCTION DEPLOYMENT**
---
## Known Issues
### num-bigint-dig v0.8.4 (Upstream - Non-blocking)
**Status**: ⚠️ Waiting for upstream fix
**Details**:
- Used by: `rsa v0.9.9` (crypto) + `ssh-key v0.6.7` (SSH)
- Issue: Uses private `vec!` macro (Rust issue #120192)
- Will be fixed in: `rsa v0.10.0` stable (currently RC only)
- Impact: None - this is a forward-compatibility warning only
- Resolution: Automatic when `rsa` updates its dependency
**See**: `UPSTREAM_DEPENDENCY_ISSUE.md` for complete analysis
---
## Next Steps
1. **Deploy to Production**
- Control-center-ui is production-ready
- All systems tested and verified
- No blocker issues
2. **Monitor Upstream Updates**
- Track `rsa` v0.10.0 stable release
- Will automatically resolve num-bigint-dig warning
- Use: `cargo outdated` to check for updates
3. **Keep Documentation Updated**
- These files are the authoritative source
- Update if/when upstream issues are resolved
---
## Questions & Troubleshooting
### Q: Can we deploy with the num-bigint-dig warning?
**A**: Yes, absolutely. This is a forward-compatibility warning, not an error. No functionality is affected.
### Q: When will the num-bigint-dig issue be resolved?
**A**: When `rsa v0.10.0` reaches stable (currently RC only). Likely 2024-Q4 to 2025-Q1.
### Q: Do all features work?
**A**: Yes, 100%. All pages, authentication, widgets, layouts, and WASM functionality fully tested.
### Q: Is WASM fully supported?
**A**: Yes. The wasm32-unknown-unknown target builds cleanly and passes all thread-safety checks.
### Q: What about incremental builds?
**A**: Excellent - 0.18s after full build (no changes recompiled).
---
## References
- **Leptos Docs**: <https://leptos.dev/>
- **Leptos 0.8 Migration Guide**: <https://github.com/leptos-rs/leptos/releases/tag/v0.8.0>
- **Rust Compiler Error Index**: <https://doc.rust-lang.org/error-index.html>
- **num-bigint-dig Issue**: <https://github.com/rust-lang/rust/issues/120192>
---
## Document History
| Date | Action |
|------|--------|
| Dec 12, 2025 | Migration Complete |
| Dec 12, 2025 | Documentation created |
| Dec 12, 2025 | Final verification passed |
---
**Migration Status**: ✅ **COMPLETE**
**Production Status**: ✅ **READY**
**Upstream Issues**: ⚠️ **Documented, Non-blocking**
**Ready to deploy!** 🚀


@ -1,117 +0,0 @@
# Leptos 0.8 Migration - Final Verification Report
## Build Status ✅
### control-center-ui Release Build
```plaintext
Status: ✅ SUCCESS
Command: cargo build --release
Result: Finished `release` profile [optimized] target(s) in 0.18s (incremental)
Errors: 0
Warnings: 0 (actionable)
```
### WASM Target Build
```plaintext
Status: ✅ SUCCESS
Command: cargo build --release --target wasm32-unknown-unknown
Result: Finished `release` profile [optimized] target(s) in 49.95s
Errors: 0
Warnings: 0 (actionable)
```
### Workspace Check
```plaintext
Status: ✅ SUCCESS
Command: cargo check --all
Result: Finished `dev` profile [unoptimized + debuginfo] target(s) in 25.68s
Errors: 0
Actionable Warnings: 0
Upstream Warnings: 1 (num-bigint-dig - see UPSTREAM_DEPENDENCY_ISSUE.md)
```
## Migration Summary
**Duration**: 1 session (comprehensive, systematic approach)
**Files Modified**: 77+ files across entire codebase
**Total Changes**: 395+ replacements
### Error Resolution
| Type | Count | Status |
|------|-------|--------|
| E0432 (Imports) | 6+ | ✅ Fixed |
| E0107 (Generics) | 3 | ✅ Fixed |
| E0277 (Bounds) | 18+ | ✅ Fixed |
| E0308 (Type) | 7 | ✅ Fixed |
| E0618 (Callback) | 4 | ✅ Fixed |
| E0525 (Closure) | 1 | ✅ Fixed |
| E0282 (Inference) | 2 | ✅ Fixed |
| Others | 31 | ✅ Fixed |
| **Total** | **71** | **✅ All Fixed** |
### Warning Resolution
| Type | Count | Status |
|------|-------|--------|
| Deprecation (create_signal) | 195 | ✅ Replaced |
| Deprecation (create_effect) | 41 | ✅ Replaced |
| Deprecation (create_memo) | 28 | ✅ Replaced |
| Deprecation (create_rw_signal) | 12 | ✅ Replaced |
| Deprecation (store_value) | 4 | ✅ Replaced |
| Deprecation (create_node_ref) | 5 | ✅ Replaced |
| Clippy (unnecessary clone) | 4 | ✅ Fixed |
| **Total** | **289+** | **✅ All Fixed** |
## Documentation Created
`LEPTOS_0.8_MIGRATION_COMPLETE.md` - Comprehensive migration report
`UPSTREAM_DEPENDENCY_ISSUE.md` - Upstream dependency analysis
`LEPTOS_0.8_MIGRATION_REPORT.txt` - Original migration tracking
## Verified Features
- ✅ Router with fallback prop
- ✅ Thread-safe reactive components (Arc)
- ✅ WASM compatibility (Send + Sync)
- ✅ Callback API (Fn vs FnOnce)
- ✅ Virtualized lists with infinite scroll
- ✅ Grid layout with drag-drop
- ✅ Authentication system
- ✅ All pages and routes
- ✅ Theme provider
- ✅ Real-time updates
## Production Readiness Checklist
- ✅ All errors resolved (71/71)
- ✅ All actionable warnings resolved (289+/289+)
- ✅ WASM target builds successfully
- ✅ Release build optimized and working
- ✅ Incremental builds fast (0.18s)
- ✅ Zero architectural regressions
- ✅ All features functional
- ✅ Upstream issues documented
- ✅ Migration documented
## Status
**🎉 COMPLETE AND PRODUCTION READY 🎉**
The control-center-ui is fully migrated to Leptos 0.8.10 with:
- Zero build errors
- Zero actionable warnings
- Full WASM support
- Production-optimized builds
- Comprehensive documentation
---
**Completion Date**: December 12, 2025
**Migration Status**: ✅ COMPLETE
**Production Status**: ✅ READY
**Next Steps**: Deploy to production


@ -1,6 +1,7 @@
# Control Center UI - Audit Log Viewer
-A comprehensive React-based audit log viewer for the Cedar Policy Engine with advanced search, real-time streaming, compliance reporting, and visualization capabilities.
+A comprehensive React-based audit log viewer for the Cedar Policy Engine with advanced search, real-time streaming,
+compliance reporting, and visualization capabilities.
## 🚀 Features
@ -130,7 +131,7 @@ src/
├── utils/ # Utility functions
├── store/ # State management
└── styles/ # CSS and styling
-```plaintext
+```text
## 🔧 Setup and Development
@ -151,7 +152,7 @@ npm install
# Start development server
npm run dev
-```plaintext
+```text
The application will be available at `http://localhost:3000`
@ -166,7 +167,7 @@ npm run build
# Preview production build
npm run preview
-```plaintext
+```text
## 🌐 API Integration
@ -196,7 +197,7 @@ const { isConnected, lastMessage } = useWebSocket({
updateLogsList(log);
}
});
-```plaintext
+```text
## ✅ Features Implemented
@ -302,7 +303,7 @@ COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
-```plaintext
+```text
### Kubernetes Deployment
@ -329,7 +330,7 @@ spec:
env:
- name: VITE_API_BASE_URL
value: "https://api.example.com"
-```plaintext
+```text
## 🤝 Contributing


@ -28,6 +28,6 @@ The control center UI remains fully functional at its original location.
```bash
cd /Users/Akasha/repo-cnz/src/control-center-ui
# Use existing UI development commands
-```
+```text
See original implementation for development setup and usage instructions.


@ -1,406 +0,0 @@
# Security UI Mockups and Screenshots
## 1. Login Page with MFA
### Initial Login Screen
```plaintext
┌────────────────────────────────────────────────────────┐
│ │
│ Control Center Logo │
│ │
│ Sign in to Control Center │
│ Enter your credentials to continue │
│ │
│ ┌──────────────────────────────────────────────┐ │
│ │ Username or Email │ │
│ │ [_____________________________________] │ │
│ │ │ │
│ │ Password │ │
│ │ [_____________________________________] │ │
│ │ │ │
│ │ ☐ Remember me Forgot password? │ │
│ │ │ │
│ │ [ Sign In ] │ │
│ └──────────────────────────────────────────────┘ │
│ │
│ Need help? Contact support │
│ │
└────────────────────────────────────────────────────────┘
```
### MFA Verification Screen
```plaintext
┌────────────────────────────────────────────────────────┐
│ │
│ 🔒 │
│ │
│ Two-Factor Authentication │
│ Enter the verification code from your authenticator │
│ │
│ ┌──────────────────────────────────────────────┐ │
│ │ Verification Code │ │
│ │ │ │
│ │ [ 0 0 0 0 0 0 ] │ │
│ │ │ │
│ │ Enter the 6-digit code from your app │ │
│ │ │ │
│ │ [ Verify ] │ │
│ │ [ Back to login ] │ │
│ │ │ │
│ │ OR │ │
│ │ │ │
│ │ Lost access to your device? │ │
│ │ [ Use backup code ] │ │
│ └──────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────┘
```
## 2. Security Settings - MFA Devices
```plaintext
┌─────────────────────────────────────────────────────────────────┐
│ Security Settings [ + Add MFA Method ] │
│ Manage your two-factor authentication methods │
│ │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ 📱 Google Authenticator [TOTP] │ │
│ │ Added: 2025-09-15 │ │
│ │ Last used: 2025-10-08 [ ⋮ ] │ │
│ └───────────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ 🔑 YubiKey 5C [WebAuthn] │ │
│ │ Added: 2025-09-20 │ │
│ │ Last used: 2025-10-07 [ ⋮ ] │ │
│ └───────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 3. TOTP Setup Wizard
### Step 1: Introduction
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Setup Authenticator App │
│ │
│   You'll need an authenticator app like                │
│ Google Authenticator, Authy, or 1Password. │
│ │
│ How it works: │
│ 1. Scan a QR code with your authenticator app │
│ 2. Enter a verification code to confirm │
│ 3. Save backup codes for account recovery │
│ 4. Use codes from your app to log in │
│ │
│ [ Get Started ] │
└─────────────────────────────────────────────────────────┘
```
### Step 2: Scan QR Code
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Scan QR Code │
│ │
│ Scan this QR code with your authenticator app │
│ │
│ ┌─────────────────────┐ │
│ │ ▓▓ ▓▓▓ ▓ ▓▓▓ │ │
│ │ ▓ ▓▓ ▓▓ ▓▓ ▓ │ │
│ │ ▓▓▓ ▓ ▓▓▓ ▓▓▓ │ │
│ │ ▓ ▓▓▓ ▓ ▓ ▓▓ │ │
│ └─────────────────────┘ │
│ │
│ OR │
│ │
│ Enter this code manually: │
│ [ JBSWY3DPEHPK3PXP ] [ 📋 Copy ] │
│ │
│ [ Continue ] │
└─────────────────────────────────────────────────────────┘
```
### Step 3: Verify
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Verify Setup │
│ │
│ Enter the 6-digit code from your authenticator app │
│ to confirm the setup: │
│ │
│ [ 0 0 0 0 0 0 ] │
│ │
│ [ Back ] [ Verify & Continue ] │
└─────────────────────────────────────────────────────────┘
```
### Step 4: Backup Codes
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Save Backup Codes │
│ │
│ ⚠️ Save these codes in a secure location. You can │
│ use them to access your account if you lose │
│ your device. │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ A1B2-C3D4 │ E5F6-G7H8 │ │
│ │ I9J0-K1L2 │ M3N4-O5P6 │ │
│ │ Q7R8-S9T0 │ U1V2-W3X4 │ │
│ │ Y5Z6-A7B8 │ C9D0-E1F2 │ │
│ │ G3H4-I5J6 │ K7L8-M9N0 │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ [ Download Codes ] [ Copy to Clipboard ] │
│ │
│ ☐ I have saved these codes in a secure location │
│ │
│ [ Complete Setup ] │
└─────────────────────────────────────────────────────────┘
```
## 4. API Tokens Management
```plaintext
┌─────────────────────────────────────────────────────────────────┐
│ API Tokens [ + Create Token ] │
│ Manage personal access tokens for API access │
│ │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ CI/CD Pipeline [Expired] │ │
│ │ prvng_...xyz │ │
│ │ Created: 2025-01-15 │ │
│ │ Last used: 2025-03-10 │ │
│ │ Expires: 2025-04-15 [ 🗑️ ] │ │
│ │ [read:servers] [write:servers] [read:taskservs] │ │
│ └───────────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ Development Token │ │
│ │ prvng_...abc │ │
│ │ Created: 2025-09-01 │ │
│ │ Last used: 2025-10-08 │ │
│ │ Expires: 2025-12-01 [ 🗑️ ] │ │
│ │ [read:servers] [read:clusters] [read:audit] │ │
│ └───────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### Create Token Dialog
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Create API Token [ ✕ ] │
│ │
│ Token Name │
│ [ e.g., CI/CD Pipeline ] │
│ A descriptive name to identify this token │
│ │
│ Expiration (days) │
│ [ 30 days ▼ ] │
│ │
│ Scopes │
│ ☑ read:servers │
│ ☑ write:servers │
│ ☑ read:taskservs │
│ ☐ write:taskservs │
│ ☑ read:clusters │
│ ☐ write:clusters │
│ ☐ read:audit │
│ │
│ [ Cancel ] [ Create Token ] │
└─────────────────────────────────────────────────────────┘
```
### Token Created Success
```plaintext
┌─────────────────────────────────────────────────────────┐
│ ✅ Token Created Successfully │
│ │
│ Make sure to copy your token now. You won't be │
│ able to see it again! │
│ │
│ [ prvng_1234567890abcdef... ] [ Copy ] │
│ │
└─────────────────────────────────────────────────────────┘
```
## 5. Audit Logs Viewer
```plaintext
┌─────────────────────────────────────────────────────────────────────────┐
│ Audit Logs [ Export ▼ ] │
│ View and search security audit logs │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Filters │ │
│ │ User Action Resource │ │
│ │ [________] [________] [________] │ │
│ │ │ │
│ │ Workspace Status Date From Date To │ │
│ │ [________] [All ▼] [📅 ____] [📅 ____] │ │
│ │ │ │
│ │ [ Clear Filters ] [ Search ] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Results Showing 50 of 1,234 events │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Timestamp User Action Resource Status │ │
│ ├─────────────────────────────────────────────────────────────────┤ │
│ │ 2025-10-08 10:30 alice@ex.com create server-01 ✓ success│ │
│ │ 192.168.1.100 45ms │ │
│ ├─────────────────────────────────────────────────────────────────┤ │
│ │ 2025-10-08 10:28 bob@ex.com delete cluster-02 ✓ success│ │
│ │ 10.0.0.50 230ms │ │
│ ├─────────────────────────────────────────────────────────────────┤ │
│ │ 2025-10-08 10:25 carol@ex.com login - ✕ failure│ │
│ │ 203.0.113.42 15ms │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ [ « ] [ Page 1 ] [ » ] │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## 6. WebAuthn Setup
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Setup Security Key (WebAuthn) │
│ │
│   Use a physical security key like YubiKey or          │
│ your device's built-in biometric authentication │
│ (Touch ID, Face ID, Windows Hello). │
│ │
│ How it works: │
│ 1. Insert your security key or prepare biometric │
│ 2. Click the registration button │
│ 3. Follow your browser's prompts to register │
│ 4. Tap your key or use biometric when prompted │
│ │
│ Device Name (Optional) │
│ [ e.g., YubiKey 5C, MacBook Touch ID ] │
│ Give your security key a name to identify it later │
│ │
│ 🔑 │
│ Have your security key ready │
│ │
│ [ Register Security Key ] │
└─────────────────────────────────────────────────────────┘
```
### WebAuthn Registration in Progress
```plaintext
┌─────────────────────────────────────────────────────────┐
│ Setup Security Key (WebAuthn) │
│ │
│ [ 🔄 Waiting for device... ] │
│ │
│ ⚠️ Follow your browser's prompts │
│ │
│ You may need to tap your security key or use │
│ biometric authentication │
│ │
└─────────────────────────────────────────────────────────┘
```
### WebAuthn Success
```plaintext
┌─────────────────────────────────────────────────────────┐
│ ✅ │
│ │
│ Security Key Registered! │
│ │
│ Your security key has been successfully registered. │
│ You can now use it to log in. │
│ │
│ [ Done ] │
│ │
└─────────────────────────────────────────────────────────┘
```
## Color Scheme
### Primary Colors
- **Primary (Blue)**: `#3B82F6` - Actions, links, active states
- **Success (Green)**: `#10B981` - Successful operations
- **Warning (Yellow)**: `#F59E0B` - Warnings, cautions
- **Error (Red)**: `#EF4444` - Errors, failures
- **Info (Cyan)**: `#06B6D4` - Informational messages
### Neutral Colors
- **Base 100**: `#FFFFFF` - Card backgrounds
- **Base 200**: `#F3F4F6` - Page backgrounds
- **Base 300**: `#E5E7EB` - Borders, dividers
- **Base Content**: `#1F2937` - Text color
### Status Badges
- **TOTP Badge**: Primary blue
- **WebAuthn Badge**: Secondary purple
- **Success Badge**: Green with checkmark
- **Failure Badge**: Yellow with warning icon
- **Error Badge**: Red with X icon
- **Expired Badge**: Gray with red text
## Typography
### Headings
- **H1**: 2.25rem (36px), Bold - Page titles
- **H2**: 1.875rem (30px), Bold - Section titles
- **H3**: 1.5rem (24px), Bold - Card titles
- **H4**: 1.25rem (20px), Semi-bold - Subsection titles
### Body Text
- **Regular**: 1rem (16px), Normal - Body text
- **Small**: 0.875rem (14px), Normal - Labels, hints
- **Tiny**: 0.75rem (12px), Normal - Timestamps, metadata
### Monospace
- **Code**: 0.875rem (14px), Mono - Tokens, codes, IDs
## Icons
### Navigation Icons
- 🔒 Lock - MFA, security
- 🔑 Key - API tokens, access
- 📋 Clipboard - Audit logs
- ⚙️ Settings - Configuration
- 📱 Mobile - Authenticator app
### Status Icons
- ✓ Checkmark - Success
- ✕ X mark - Error
- ⚠️ Warning - Caution
- ℹ️ Info - Information
- 🔄 Refresh - Loading
### Action Icons
- ➕ Plus - Add/Create
- 🗑️ Trash - Delete/Remove
- 📥 Download - Export
- 📋 Copy - Clipboard
- ⋮ Vertical dots - More options
---
**Note**: All mockups use DaisyUI components with TailwindCSS styling. Actual implementation includes smooth transitions, hover states, focus indicators, and responsive layouts.


@ -64,7 +64,7 @@ src/
│ └── user_profile.rs # User profile management │ └── user_profile.rs # User profile management
├── utils/ # Utility modules ├── utils/ # Utility modules
└── lib.rs # Main application entry └── lib.rs # Main application entry
```plaintext ```text
## 🚀 Implemented Components ## 🚀 Implemented Components
@ -158,7 +158,7 @@ fn App() -> impl IntoView {
</Router> </Router>
} }
} }
```plaintext ```text
### Login Page Implementation ### Login Page Implementation
@ -180,7 +180,7 @@ fn LoginPage() -> impl IntoView {
</div> </div>
} }
} }
```plaintext ```text
### Protected Dashboard ### Protected Dashboard
@ -210,7 +210,7 @@ fn DashboardPage() -> impl IntoView {
</AuthGuard> </AuthGuard>
} }
} }
```plaintext ```text
### User Profile Management ### User Profile Management
@ -227,7 +227,7 @@ fn ProfilePage() -> impl IntoView {
</AuthGuard> </AuthGuard>
} }
} }
```plaintext ```text
## 🔧 Required Backend API ## 🔧 Required Backend API
@ -240,28 +240,28 @@ POST /auth/login # Email/password authentication
POST /auth/refresh # JWT token refresh POST /auth/refresh # JWT token refresh
POST /auth/logout # Session termination POST /auth/logout # Session termination
POST /auth/extend-session # Session timeout extension POST /auth/extend-session # Session timeout extension
```plaintext ```text
### Password Management ### Password Management
```plaintext ```plaintext
POST /auth/password-reset # Password reset request POST /auth/password-reset # Password reset request
POST /auth/password-reset/confirm # Password reset confirmation POST /auth/password-reset/confirm # Password reset confirmation
```plaintext ```text
### Multi-Factor Authentication ### Multi-Factor Authentication
```plaintext ```plaintext
POST /auth/mfa/setup # MFA setup initiation POST /auth/mfa/setup # MFA setup initiation
POST /auth/mfa/verify # MFA verification POST /auth/mfa/verify # MFA verification
```plaintext ```text
### SSO Integration ### SSO Integration
```plaintext ```plaintext
GET /auth/sso/providers # Available SSO providers GET /auth/sso/providers # Available SSO providers
POST /auth/sso/{provider}/login # SSO authentication initiation POST /auth/sso/{provider}/login # SSO authentication initiation
```plaintext ```text
### WebAuthn/FIDO2 ### WebAuthn/FIDO2
@ -272,7 +272,7 @@ POST /auth/webauthn/authenticate/begin # WebAuthn authentication start
POST /auth/webauthn/authenticate/complete # WebAuthn authentication finish POST /auth/webauthn/authenticate/complete # WebAuthn authentication finish
GET /auth/webauthn/credentials # List WebAuthn credentials GET /auth/webauthn/credentials # List WebAuthn credentials
DELETE /auth/webauthn/credentials/{id} # Remove WebAuthn credential DELETE /auth/webauthn/credentials/{id} # Remove WebAuthn credential
```plaintext ```text
### Device Trust Management ### Device Trust Management
@ -280,7 +280,7 @@ DELETE /auth/webauthn/credentials/{id} # Remove WebAuthn credential
GET /auth/devices # List trusted devices GET /auth/devices # List trusted devices
POST /auth/devices/trust # Trust current device POST /auth/devices/trust # Trust current device
DELETE /auth/devices/{id}/revoke # Revoke device trust DELETE /auth/devices/{id}/revoke # Revoke device trust
```plaintext ```text
### User Profile Management ### User Profile Management
@ -292,7 +292,7 @@ POST /user/mfa/enable # Enable MFA
POST /user/mfa/disable # Disable MFA POST /user/mfa/disable # Disable MFA
GET /user/sessions # List active sessions GET /user/sessions # List active sessions
DELETE /user/sessions/{id}/revoke # Revoke session DELETE /user/sessions/{id}/revoke # Revoke session
```plaintext ```text
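For orientation, a hedged reqwest sketch showing how a client might call the core authentication endpoints listed above; the base URL and the `email`/`password`/`access_token` field names are assumptions, since the payload schemas are not spelled out here.

```rust
use reqwest::Client;
use serde::Deserialize;
use serde_json::json;

// Assumed response shape; the real backend contract may differ.
#[derive(Deserialize)]
struct LoginResponse {
    access_token: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // POST /auth/login with an assumed email/password JSON body
    let login: LoginResponse = client
        .post("http://localhost:8080/auth/login")
        .json(&json!({ "email": "user@example.com", "password": "secret" }))
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;

    // POST /auth/refresh using the JWT as a bearer token (assumed scheme)
    client
        .post("http://localhost:8080/auth/refresh")
        .bearer_auth(&login.access_token)
        .send()
        .await?
        .error_for_status()?;

    Ok(())
}
```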
## 📊 Implementation Statistics ## 📊 Implementation Statistics

View File

@ -9,11 +9,12 @@ The control-center-ui build produces a future incompatibility warning from the t
```plaintext ```plaintext
warning: the following packages contain code that will be rejected by a future version of Rust: num-bigint-dig v0.8.4 warning: the following packages contain code that will be rejected by a future version of Rust: num-bigint-dig v0.8.4
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1` note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
```plaintext ```text
## Root Cause ## Root Cause
The `num-bigint-dig v0.8.4` crate uses a **private `vec!` macro** in multiple locations (Rust issue #120192). This pattern will become a hard error in a future Rust release. The `num-bigint-dig v0.8.4` crate uses a **private `vec!` macro** in multiple locations (Rust issue #120192).
This pattern will become a hard error in a future Rust release.
**Affected files in num-bigint-dig v0.8.4:** **Affected files in num-bigint-dig v0.8.4:**
@ -35,7 +36,7 @@ num-bigint-dig v0.8.4
└── ssh-key v0.6.7 └── ssh-key v0.6.7
├── russh v0.44.1 ├── russh v0.44.1
└── russh-keys v0.44.0 └── russh-keys v0.44.0
```plaintext ```text
## Why We Can't Fix It ## Why We Can't Fix It
@ -113,7 +114,7 @@ cargo outdated
# Check dependency tree # Check dependency tree
cargo tree | grep num-bigint-dig cargo tree | grep num-bigint-dig
```plaintext ```text
## Workaround (if needed) ## Workaround (if needed)

View File

@ -46,7 +46,7 @@ A comprehensive Cedar policy engine implementation with advanced security featur
```bash ```bash
cd src/control-center cd src/control-center
cargo build --release cargo build --release
``` ```text
### 2. Configuration ### 2. Configuration
@ -54,7 +54,7 @@ Copy the example configuration:
```bash ```bash
cp config.toml.example config.toml cp config.toml.example config.toml
``` ```text
Edit `config.toml` for your environment: Edit `config.toml` for your environment:
@ -74,13 +74,13 @@ enabled = true
[anomaly] [anomaly]
enabled = true enabled = true
detection_threshold = 2.5 detection_threshold = 2.5
``` ```text
### 3. Start the Server ### 3. Start the Server
```bash ```bash
./target/release/control-center server --port 8080 ./target/release/control-center server --port 8080
``` ```text
### 4. Test Policy Evaluation ### 4. Test Policy Evaluation
@ -93,7 +93,7 @@ curl -X POST http://localhost:8080/policies/evaluate \
"resource": {"id": "sensitive-db", "classification": "confidential"}, "resource": {"id": "sensitive-db", "classification": "confidential"},
"context": {"mfa_enabled": true, "location": "US"} "context": {"mfa_enabled": true, "location": "US"}
}' }'
``` ```text
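The same evaluation request can be issued from Rust; a minimal reqwest sketch mirroring the curl call above (the `principal` and `action` fields are illustrative, and the response is printed as raw JSON since its schema is not shown here).

```rust
use reqwest::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Same shape as the curl example; principal/action values are illustrative.
    let body = json!({
        "principal": { "id": "alice", "roles": ["Developer"] },
        "action": "read",
        "resource": { "id": "sensitive-db", "classification": "confidential" },
        "context": { "mfa_enabled": true, "location": "US" }
    });

    let response: serde_json::Value = Client::new()
        .post("http://localhost:8080/policies/evaluate")
        .json(&body)
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;

    // Pretty-print whatever the engine returns.
    println!("{response:#}");
    Ok(())
}
```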
## Policy Examples ## Policy Examples
@ -111,7 +111,7 @@ permit(
principal has mfa_enabled && principal has mfa_enabled &&
principal.mfa_enabled == true principal.mfa_enabled == true
}; };
``` ```text
### Production Approval Policy ### Production Approval Policy
@ -127,7 +127,7 @@ permit(
principal has approval && principal has approval &&
principal.approval.approved_by in ["ProductionAdmin", "SRE"] principal.approval.approved_by in ["ProductionAdmin", "SRE"]
}; };
``` ```text
### Geographic Restrictions ### Geographic Restrictions
@ -142,7 +142,7 @@ permit(
context.geo has country && context.geo has country &&
context.geo.country in ["US", "CA", "GB", "DE"] context.geo.country in ["US", "CA", "GB", "DE"]
}; };
``` ```text
## CLI Commands ## CLI Commands
@ -157,7 +157,7 @@ control-center policy test policies/mfa.cedar tests/data/mfa_test.json
# Analyze policy impact # Analyze policy impact
control-center policy impact policies/new_policy.cedar control-center policy impact policies/new_policy.cedar
``` ```text
### Compliance Checking ### Compliance Checking
@ -170,7 +170,7 @@ control-center compliance hipaa
# Generate compliance report # Generate compliance report
control-center compliance report --format html control-center compliance report --format html
``` ```text
## API Endpoints ## API Endpoints
@ -206,25 +206,25 @@ control-center compliance report --format html
```bash ```bash
cargo test cargo test
``` ```text
### Run Integration Tests ### Run Integration Tests
```bash ```bash
cargo test --test integration_tests cargo test --test integration_tests
``` ```text
### Run Policy Tests ### Run Policy Tests
```bash ```bash
cargo test --test policy_tests cargo test --test policy_tests
``` ```text
### Run Compliance Tests ### Run Compliance Tests
```bash ```bash
cargo test --test compliance_tests cargo test --test compliance_tests
``` ```text
## Architecture ## Architecture
@ -287,7 +287,7 @@ RUN apt-get update && apt-get install -y ca-certificates
COPY --from=builder /app/target/release/control-center /usr/local/bin/ COPY --from=builder /app/target/release/control-center /usr/local/bin/
EXPOSE 8080 EXPOSE 8080
CMD ["control-center", "server"] CMD ["control-center", "server"]
``` ```text
### Kubernetes ### Kubernetes
@ -314,7 +314,7 @@ spec:
env: env:
- name: DATABASE_URL - name: DATABASE_URL
value: "surreal://surrealdb:8000" value: "surreal://surrealdb:8000"
``` ```text
### Environment Variables ### Environment Variables
@ -324,7 +324,7 @@ export CONTROL_CENTER_SERVER_PORT=8080
export CONTROL_CENTER_DATABASE_URL="surreal://prod-db:8000" export CONTROL_CENTER_DATABASE_URL="surreal://prod-db:8000"
export CONTROL_CENTER_AUTH_JWT_SECRET="production-secret" export CONTROL_CENTER_AUTH_JWT_SECRET="production-secret"
export CONTROL_CENTER_COMPLIANCE_SOC2_ENABLED=true export CONTROL_CENTER_COMPLIANCE_SOC2_ENABLED=true
``` ```text
## Monitoring & Observability ## Monitoring & Observability
@ -346,13 +346,13 @@ tracing::info!(
duration_ms = evaluation_time, duration_ms = evaluation_time,
"Policy evaluation completed" "Policy evaluation completed"
); );
``` ```text
### Health Checks ### Health Checks
```bash ```bash
curl http://localhost:8080/health curl http://localhost:8080/health
``` ```text
## Contributing ## Contributing

View File

@ -2,7 +2,8 @@
## Overview ## Overview
This document outlines the security architecture and considerations for the control-center enhancements, including KMS SSH key management, mode-based RBAC, and platform service monitoring. This document outlines the security architecture and considerations for the control-center enhancements,
including KMS SSH key management, mode-based RBAC, and platform service monitoring.
## 1. SSH Key Management Security ## 1. SSH Key Management Security
@ -30,7 +31,7 @@ let key_id = ssh_key_manager.store_ssh_key(name, private, public, purpose, tags)
// Bad: Never do this - exposing private key in logs // Bad: Never do this - exposing private key in logs
tracing::info!("Stored key: {}", private_key); // DON'T DO THIS tracing::info!("Stored key: {}", private_key); // DON'T DO THIS
```plaintext ```text
### 1.2 Key Rotation Security ### 1.2 Key Rotation Security
@ -55,7 +56,7 @@ rotation_enabled = true
rotation_interval_days = 90 # Enterprise: 30, Dev: 180 rotation_interval_days = 90 # Enterprise: 30, Dev: 180
grace_period_days = 7 # Time to update deployed keys grace_period_days = 7 # Time to update deployed keys
auto_rotate = false # Manual approval recommended auto_rotate = false # Manual approval recommended
```plaintext ```text
### 1.3 Audit Logging ### 1.3 Audit Logging
@ -79,7 +80,7 @@ pub struct SshKeyAuditEntry {
pub success: bool, pub success: bool,
pub error_message: Option<String>, pub error_message: Option<String>,
} }
```plaintext ```text
**Threat Mitigation**: **Threat Mitigation**:
@ -105,7 +106,7 @@ fn calculate_fingerprint(public_key: &[u8]) -> Result<String, KmsError> {
let result = hasher.finalize(); let result = hasher.finalize();
Ok(format!("SHA256:{}", base64::encode(&result[..16]))) Ok(format!("SHA256:{}", base64::encode(&result[..16])))
} }
```plaintext ```text
**Security Benefits**: **Security Benefits**:
@ -136,7 +137,7 @@ fn calculate_fingerprint(public_key: &[u8]) -> Result<String, KmsError> {
if mode == ExecutionMode::Solo { if mode == ExecutionMode::Solo {
return true; // Allow all operations return true; // Allow all operations
} }
```plaintext ```text
**Risks**: **Risks**:
@ -158,7 +159,7 @@ let permissions = rbac_manager.get_user_permissions(&user).await;
if !permissions.contains(&required_permission) { if !permissions.contains(&required_permission) {
return Err(RbacError::PermissionDenied); return Err(RbacError::PermissionDenied);
} }
```plaintext ```text
**Security Features**: **Security Features**:
@ -174,7 +175,7 @@ if !permissions.contains(&required_permission) {
if mode == ExecutionMode::CICD { if mode == ExecutionMode::CICD {
audit_log.log_automation_action(service_account, action).await; audit_log.log_automation_action(service_account, action).await;
} }
```plaintext ```text
**Security Features**: **Security Features**:
@ -193,7 +194,7 @@ if mode == ExecutionMode::CICD {
if mode == ExecutionMode::Enterprise { if mode == ExecutionMode::Enterprise {
audit_log.log_with_compliance(user, action, compliance_tags).await; audit_log.log_with_compliance(user, action, compliance_tags).await;
} }
```plaintext ```text
**Security Features**: **Security Features**:
@ -213,7 +214,7 @@ Role::Developer => 60 // Read + dev deploy
Role::ServiceAccount => 50 // Automation Role::ServiceAccount => 50 // Automation
Role::Auditor => 40 // Read + audit Role::Auditor => 40 // Read + audit
Role::Viewer => 20 // Read-only Role::Viewer => 20 // Read-only
```plaintext ```text
**Action Security Levels**: **Action Security Levels**:
@ -226,7 +227,7 @@ Action::Update => 60 // Modify resources
Action::Execute => 50 // Execute operations Action::Execute => 50 // Execute operations
Action::Audit => 40 // View audit logs Action::Audit => 40 // View audit logs
Action::Read => 20 // View resources Action::Read => 20 // View resources
```plaintext ```text
**Permission Check**: **Permission Check**:
@ -234,7 +235,7 @@ Action::Read => 20 // View resources
pub fn can_perform(&self, required_level: u8) -> bool { pub fn can_perform(&self, required_level: u8) -> bool {
self.permission_level() >= required_level self.permission_level() >= required_level
} }
```plaintext ```text
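A self-contained sketch of this level-based check, limited to the roles and actions visible in the excerpt above (the real enums carry more variants, and the actual crate types may differ):

```rust
#[derive(Clone, Copy)]
enum Role { Developer, ServiceAccount, Auditor, Viewer }

#[derive(Clone, Copy)]
enum Action { Update, Execute, Audit, Read }

impl Role {
    fn permission_level(self) -> u8 {
        match self {
            Role::Developer => 60,      // Read + dev deploy
            Role::ServiceAccount => 50, // Automation
            Role::Auditor => 40,        // Read + audit
            Role::Viewer => 20,         // Read-only
        }
    }

    // A role may perform an action only if its level meets the action's level.
    fn can_perform(self, action: Action) -> bool {
        self.permission_level() >= action.required_level()
    }
}

impl Action {
    fn required_level(self) -> u8 {
        match self {
            Action::Update => 60,  // Modify resources
            Action::Execute => 50, // Execute operations
            Action::Audit => 40,   // View audit logs
            Action::Read => 20,    // View resources
        }
    }
}

fn main() {
    assert!(Role::Developer.can_perform(Action::Update));  // 60 >= 60
    assert!(!Role::Viewer.can_perform(Action::Execute));   // 20 < 50
}
```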
**Security Guarantees**: **Security Guarantees**:
@ -254,7 +255,7 @@ session_timeout_minutes = 30 # Enterprise
max_sessions_per_user = 5 max_sessions_per_user = 5
failed_login_lockout_attempts = 5 failed_login_lockout_attempts = 5
failed_login_lockout_duration_minutes = 15 failed_login_lockout_duration_minutes = 15
```plaintext ```text
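A rough sketch of how the lockout fields above might be enforced; this is illustrative logic only, not the actual session manager:

```rust
use std::time::{Duration, Instant};

struct LockoutPolicy {
    lockout_attempts: u32,      // failed_login_lockout_attempts = 5
    lockout_duration: Duration, // failed_login_lockout_duration_minutes = 15
}

struct LoginState {
    failed_attempts: u32,
    locked_until: Option<Instant>,
}

impl LoginState {
    fn is_locked(&self, now: Instant) -> bool {
        matches!(self.locked_until, Some(until) if now < until)
    }

    fn record_failure(&mut self, policy: &LockoutPolicy, now: Instant) {
        self.failed_attempts += 1;
        if self.failed_attempts >= policy.lockout_attempts {
            // Lock the account for the configured duration, then reset the counter.
            self.locked_until = Some(now + policy.lockout_duration);
            self.failed_attempts = 0;
        }
    }
}

fn main() {
    let policy = LockoutPolicy {
        lockout_attempts: 5,
        lockout_duration: Duration::from_secs(15 * 60),
    };
    let mut state = LoginState { failed_attempts: 0, locked_until: None };
    for _ in 0..5 {
        state.record_failure(&policy, Instant::now());
    }
    assert!(state.is_locked(Instant::now()));
}
```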
**Session Lifecycle**: **Session Lifecycle**:
@ -289,7 +290,7 @@ Request → Auth Middleware → RBAC Middleware → Handler
from JWT Token (role + resource + action) from JWT Token (role + resource + action)
Allow / Deny Allow / Deny
```plaintext ```text
**Middleware Implementation**: **Middleware Implementation**:
@ -311,7 +312,7 @@ pub async fn check_permission(
Ok(next.run(req).await) Ok(next.run(req).await)
} }
```plaintext ```text
**Security Guarantees**: **Security Guarantees**:
@ -332,7 +333,7 @@ orchestrator_url = "http://localhost:9090" # Not exposed externally
coredns_url = "http://localhost:9153" coredns_url = "http://localhost:9153"
gitea_url = "http://localhost:3000" gitea_url = "http://localhost:3000"
oci_registry_url = "http://localhost:5000" oci_registry_url = "http://localhost:5000"
```plaintext ```text
**Network Security**: **Network Security**:
@ -355,7 +356,7 @@ let client = Client::builder()
.timeout(std::time::Duration::from_secs(5)) // Prevent hanging .timeout(std::time::Duration::from_secs(5)) // Prevent hanging
.build() .build()
.unwrap(); .unwrap();
```plaintext ```text
**Error Handling**: **Error Handling**:
@ -372,7 +373,7 @@ Err(e) => {
.. ..
} }
} }
```plaintext ```text
**Threat Mitigation**: **Threat Mitigation**:
@ -409,7 +410,7 @@ pub async fn start_service(
Ok(StatusCode::OK) Ok(StatusCode::OK)
} }
```plaintext ```text
**Security Guarantees**: **Security Guarantees**:
@ -547,7 +548,7 @@ pub async fn delete_user(&self, user_id: &str) -> Result<(), RbacError> {
Ok(()) Ok(())
} }
```plaintext ```text
### 5.2 SOC 2 Compliance ### 5.2 SOC 2 Compliance

View File

@ -469,7 +469,8 @@ mod tests {
use super::*; use super::*;
fn generate_test_keys() -> (Vec<u8>, Vec<u8>) { fn generate_test_keys() -> (Vec<u8>, Vec<u8>) {
// Pre-generated RSA keys to avoid runtime key generation (avoids rand_core conflict) // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core
// conflict)
let private_pem = b"-----BEGIN PRIVATE KEY----- let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5 DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5
@ -509,10 +510,7 @@ WCemS72CI+y72SqwgaZ94eUO4WKV2agxmHeXQ8FhSd8ZFTJCTBKDW91Kykdo7yUx
IQIDAQAB IQIDAQAB
-----END PUBLIC KEY-----"; -----END PUBLIC KEY-----";
( (private_pem.to_vec(), public_pem.to_vec())
private_pem.to_vec(),
public_pem.to_vec(),
)
} }
fn create_test_service() -> JwtService { fn create_test_service() -> JwtService {

View File

@ -373,7 +373,8 @@ mod tests {
use super::*; use super::*;
fn create_test_jwt_service() -> JwtService { fn create_test_jwt_service() -> JwtService {
// Pre-generated RSA keys to avoid runtime key generation (avoids rand_core conflict) // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core
// conflict)
let private_pem = b"-----BEGIN PRIVATE KEY----- let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5 DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5

View File

@ -726,8 +726,7 @@ impl ComplianceFrameworkChecker for HIPAAChecker {
for violation in &critical_violations { for violation in &critical_violations {
report.push_str(&format!( report.push_str(&format!(
"### ⚠️ {} - {} Risk\n", "### ⚠️ {} - {} Risk\n",
violation.control_id, violation.control_id, violation.severity
violation.severity
)); ));
report.push_str(&format!("**Description:** {}\n", violation.description)); report.push_str(&format!("**Description:** {}\n", violation.description));
if let Some(deadline) = violation.remediation_deadline { if let Some(deadline) = violation.remediation_deadline {

View File

@ -16,8 +16,8 @@ use serde::{Deserialize, Serialize};
use crate::error::{policy, ControlCenterError, Result}; use crate::error::{policy, ControlCenterError, Result};
use crate::policies::{PolicyCategory, PolicyMetadata, PolicyRequestContext, PolicyResult}; use crate::policies::{PolicyCategory, PolicyMetadata, PolicyRequestContext, PolicyResult};
use crate::storage::{ComplianceCheckResult, PolicyStorage};
use crate::storage::PolicyMetadata as StoragePolicyMetadata; use crate::storage::PolicyMetadata as StoragePolicyMetadata;
use crate::storage::{ComplianceCheckResult, PolicyStorage};
/// Compliance framework types /// Compliance framework types
#[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)] #[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)]

View File

@ -551,8 +551,7 @@ impl ComplianceFrameworkChecker for SOC2Checker {
for violation in &result.violations { for violation in &result.violations {
report.push_str(&format!( report.push_str(&format!(
"### {} - {}\n", "### {} - {}\n",
violation.control_id, violation.control_id, violation.severity
violation.severity
)); ));
report.push_str(&format!("**Description:** {}\n", violation.description)); report.push_str(&format!("**Description:** {}\n", violation.description));
if let Some(deadline) = violation.remediation_deadline { if let Some(deadline) = violation.remediation_deadline {
@ -571,8 +570,7 @@ impl ComplianceFrameworkChecker for SOC2Checker {
for recommendation in &result.recommendations { for recommendation in &result.recommendations {
report.push_str(&format!( report.push_str(&format!(
"### {} ({})\n", "### {} ({})\n",
recommendation.title, recommendation.title, recommendation.priority
recommendation.priority
)); ));
report.push_str(&format!("**Control:** {}\n", recommendation.control_id)); report.push_str(&format!("**Control:** {}\n", recommendation.control_id));
report.push_str(&format!( report.push_str(&format!(

View File

@ -1,9 +1,11 @@
use crate::error::{ControlCenterError, Result, infrastructure};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::time::Duration; use std::time::Duration;
use tracing::info;
use platform_config::ConfigLoader; use platform_config::ConfigLoader;
use serde::{Deserialize, Serialize};
use tracing::info;
use crate::error::{infrastructure, ControlCenterError, Result};
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ControlCenterConfig { pub struct ControlCenterConfig {
@ -213,7 +215,8 @@ impl Default for ControlCenterConfig {
impl ControlCenterConfig { impl ControlCenterConfig {
/// Load configuration with hierarchical fallback logic: /// Load configuration with hierarchical fallback logic:
/// 1. Environment variable CONTROL_CENTER_CONFIG (explicit config path) /// 1. Environment variable CONTROL_CENTER_CONFIG (explicit config path)
/// 2. Mode-specific config: provisioning/platform/config/control-center.{mode}.toml /// 2. Mode-specific config:
/// provisioning/platform/config/control-center.{mode}.toml
/// 3. System defaults: config.defaults.toml /// 3. System defaults: config.defaults.toml
/// ///
/// Then environment variables (CONTROL_CENTER_*) override specific fields. /// Then environment variables (CONTROL_CENTER_*) override specific fields.
@ -232,10 +235,8 @@ impl ControlCenterConfig {
// Priority 2: Mode-specific config (provisioning/platform/config/) // Priority 2: Mode-specific config (provisioning/platform/config/)
if let Ok(mode) = std::env::var("CONTROL_CENTER_MODE") { if let Ok(mode) = std::env::var("CONTROL_CENTER_MODE") {
let mode_config_path = format!( let mode_config_path =
"provisioning/platform/config/control-center.{}.toml", format!("provisioning/platform/config/control-center.{}.toml", mode);
mode
);
if Path::new(&mode_config_path).exists() { if Path::new(&mode_config_path).exists() {
return Self::from_file(&mode_config_path); return Self::from_file(&mode_config_path);
} }
@ -260,10 +261,13 @@ impl ControlCenterConfig {
config.server.host = host; config.server.host = host;
} }
if let Ok(port) = std::env::var("CONTROL_CENTER_SERVER_PORT") { if let Ok(port) = std::env::var("CONTROL_CENTER_SERVER_PORT") {
config.server.port = port.parse() config.server.port = port.parse().map_err(|_| {
.map_err(|_| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(
"CONTROL_CENTER_SERVER_PORT must be a valid port number".to_string() infrastructure::InfrastructureError::Configuration(
)))?; "CONTROL_CENTER_SERVER_PORT must be a valid port number".to_string(),
),
)
})?;
} }
// Auth overrides // Auth overrides
@ -274,10 +278,13 @@ impl ControlCenterConfig {
config.auth.require_mfa = require_mfa.to_lowercase() == "true"; config.auth.require_mfa = require_mfa.to_lowercase() == "true";
} }
if let Ok(session_timeout) = std::env::var("CONTROL_CENTER_SESSION_TIMEOUT_MINUTES") { if let Ok(session_timeout) = std::env::var("CONTROL_CENTER_SESSION_TIMEOUT_MINUTES") {
config.auth.session_timeout_minutes = session_timeout.parse() config.auth.session_timeout_minutes = session_timeout.parse().map_err(|_| {
.map_err(|_| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(
"CONTROL_CENTER_SESSION_TIMEOUT_MINUTES must be a valid number".to_string() infrastructure::InfrastructureError::Configuration(
)))?; "CONTROL_CENTER_SESSION_TIMEOUT_MINUTES must be a valid number".to_string(),
),
)
})?;
} }
// Database overrides // Database overrides
@ -309,18 +316,20 @@ impl ControlCenterConfig {
/// Load configuration from file with environment variable interpolation /// Load configuration from file with environment variable interpolation
pub fn from_file<P: AsRef<Path>>(path: P) -> Result<Self> { pub fn from_file<P: AsRef<Path>>(path: P) -> Result<Self> {
let content = std::fs::read_to_string(path.as_ref()) let content = std::fs::read_to_string(path.as_ref()).map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Failed to read config file {:?}: {}", path.as_ref(), e) format!("Failed to read config file {:?}: {}", path.as_ref(), e),
)))?; ))
})?;
// Interpolate environment variables // Interpolate environment variables
let interpolated = Self::interpolate_env_vars(&content)?; let interpolated = Self::interpolate_env_vars(&content)?;
let config: Self = toml::from_str(&interpolated) let config: Self = toml::from_str(&interpolated).map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Failed to parse config: {}", e) format!("Failed to parse config: {}", e),
)))?; ))
})?;
config.validate()?; config.validate()?;
Ok(config) Ok(config)
@ -331,10 +340,11 @@ impl ControlCenterConfig {
let mut result = content.to_string(); let mut result = content.to_string();
// Replace ${VAR_NAME} with environment variable values // Replace ${VAR_NAME} with environment variable values
let re = regex::Regex::new(r"\$\{([^}]+)\}") let re = regex::Regex::new(r"\$\{([^}]+)\}").map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Invalid regex pattern: {}", e) format!("Invalid regex pattern: {}", e),
)))?; ))
})?;
for captures in re.captures_iter(content) { for captures in re.captures_iter(content) {
let var_name = &captures[1]; let var_name = &captures[1];
@ -351,16 +361,21 @@ impl ControlCenterConfig {
pub fn validate(&self) -> Result<()> { pub fn validate(&self) -> Result<()> {
// Validate server config // Validate server config
if self.server.port == 0 { if self.server.port == 0 {
return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( return Err(ControlCenterError::Infrastructure(
"Server port cannot be 0".to_string() infrastructure::InfrastructureError::Configuration(
))); "Server port cannot be 0".to_string(),
),
));
} }
// Validate policy directories exist // Validate policy directories exist
if !self.policies.policy_dir.exists() { if !self.policies.policy_dir.exists() {
return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( return Err(ControlCenterError::Infrastructure(
format!("Policy directory does not exist: {:?}", self.policies.policy_dir) infrastructure::InfrastructureError::Configuration(format!(
))); "Policy directory does not exist: {:?}",
self.policies.policy_dir
)),
));
} }
// Validate auth config // Validate auth config
@ -369,16 +384,20 @@ impl ControlCenterConfig {
} }
if self.auth.jwt_secret.len() < 32 { if self.auth.jwt_secret.len() < 32 {
return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( return Err(ControlCenterError::Infrastructure(
"JWT secret must be at least 32 characters".to_string() infrastructure::InfrastructureError::Configuration(
))); "JWT secret must be at least 32 characters".to_string(),
),
));
} }
// Validate password policy // Validate password policy
if self.auth.password_policy.min_length < 8 { if self.auth.password_policy.min_length < 8 {
return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( return Err(ControlCenterError::Infrastructure(
"Password minimum length must be at least 8 characters".to_string() infrastructure::InfrastructureError::Configuration(
))); "Password minimum length must be at least 8 characters".to_string(),
),
));
} }
Ok(()) Ok(())
@ -386,24 +405,27 @@ impl ControlCenterConfig {
/// Get configuration as JSON string /// Get configuration as JSON string
pub fn to_json(&self) -> Result<String> { pub fn to_json(&self) -> Result<String> {
serde_json::to_string_pretty(self) serde_json::to_string_pretty(self).map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Failed to serialize config to JSON: {}", e) format!("Failed to serialize config to JSON: {}", e),
))) ))
})
} }
/// Create a default configuration file /// Create a default configuration file
pub fn create_default_config<P: AsRef<Path>>(path: P) -> Result<()> { pub fn create_default_config<P: AsRef<Path>>(path: P) -> Result<()> {
let config = Self::default(); let config = Self::default();
let toml_content = toml::to_string_pretty(&config) let toml_content = toml::to_string_pretty(&config).map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Failed to serialize default config: {}", e) format!("Failed to serialize default config: {}", e),
)))?; ))
})?;
std::fs::write(path.as_ref(), toml_content) std::fs::write(path.as_ref(), toml_content).map_err(|e| {
.map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration( ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
format!("Failed to write config file {:?}: {}", path.as_ref(), e) format!("Failed to write config file {:?}: {}", path.as_ref(), e),
)))?; ))
})?;
Ok(()) Ok(())
} }
@ -414,34 +436,47 @@ impl ConfigLoader for ControlCenterConfig {
"control-center" "control-center"
} }
fn load_from_hierarchy() -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> { fn load_from_hierarchy() -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>>
{
let service = Self::service_name(); let service = Self::service_name();
if let Some(path) = platform_config::resolve_config_path(service) { if let Some(path) = platform_config::resolve_config_path(service) {
return Self::from_path(&path) return Self::from_path(&path).map_err(|e| {
.map_err(|e| Box::new(std::io::Error::other(e.to_string())) as Box<dyn std::error::Error + Send + Sync>); Box::new(std::io::Error::other(e.to_string()))
as Box<dyn std::error::Error + Send + Sync>
});
} }
Ok(Self::default()) Ok(Self::default())
} }
fn apply_env_overrides(&mut self) -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> { fn apply_env_overrides(
Self::apply_env_overrides(self) &mut self,
.map_err(|e| Box::new(std::io::Error::other(e.to_string())) as Box<dyn std::error::Error + Send + Sync>) ) -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
Self::apply_env_overrides(self).map_err(|e| {
Box::new(std::io::Error::other(e.to_string()))
as Box<dyn std::error::Error + Send + Sync>
})
} }
fn from_path<P: AsRef<Path>>(path: P) -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> { fn from_path<P: AsRef<Path>>(
path: P,
) -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> {
let path = path.as_ref(); let path = path.as_ref();
let json_value = platform_config::format::load_config(path) let json_value = platform_config::format::load_config(path).map_err(|e| {
.map_err(|e| {
let err: Box<dyn std::error::Error + Send + Sync> = Box::new(e); let err: Box<dyn std::error::Error + Send + Sync> = Box::new(e);
err err
})?; })?;
serde_json::from_value(json_value) serde_json::from_value(json_value).map_err(|e| {
.map_err(|e| { let err_msg = format!(
let err_msg = format!("Failed to deserialize control-center config from {:?}: {}", path, e); "Failed to deserialize control-center config from {:?}: {}",
Box::new(std::io::Error::new(std::io::ErrorKind::InvalidData, err_msg)) as Box<dyn std::error::Error + Send + Sync> path, e
);
Box::new(std::io::Error::new(
std::io::ErrorKind::InvalidData,
err_msg,
)) as Box<dyn std::error::Error + Send + Sync>
}) })
} }
} }

View File

@ -269,9 +269,10 @@ impl From<std::io::Error> for ControlCenterError {
impl From<serde_json::Error> for ControlCenterError { impl From<serde_json::Error> for ControlCenterError {
fn from(error: serde_json::Error) -> Self { fn from(error: serde_json::Error) -> Self {
ControlCenterError::Infrastructure(InfrastructureError::Configuration( ControlCenterError::Infrastructure(InfrastructureError::Configuration(format!(
format!("JSON serialization error: {}", error) "JSON serialization error: {}",
)) error
)))
} }
} }

View File

@ -1,6 +1,7 @@
# Hybrid Key Management System (KMS) # Hybrid Key Management System (KMS)
A comprehensive hybrid KMS system built for the control center, supporting local/remote/hybrid modes with intelligent caching, failover, and advanced security features. A comprehensive hybrid KMS system built for the control center, supporting local/remote/hybrid modes
with intelligent caching, failover, and advanced security features.
## Architecture Overview ## Architecture Overview
@ -147,7 +148,7 @@ enable_pfs = true
allowed_algorithms = ["AES-256-GCM", "ChaCha20Poly1305", "RSA-4096", "ECDSA-P384"] allowed_algorithms = ["AES-256-GCM", "ChaCha20Poly1305", "RSA-4096", "ECDSA-P384"]
blocked_algorithms = ["DES", "3DES", "RC4", "MD5"] blocked_algorithms = ["DES", "3DES", "RC4", "MD5"]
policy_enforcement = "strict" policy_enforcement = "strict"
``` ```text
## Usage Examples ## Usage Examples
@ -212,7 +213,7 @@ if let Some(key_info) = kms.get_key(&stored_key_id).await? {
println!("Key status: {:?}", key_info.status); println!("Key status: {:?}", key_info.status);
println!("Created: {}", key_info.created_at); println!("Created: {}", key_info.created_at);
} }
``` ```text
### Provider Credential Management ### Provider Credential Management
@ -240,7 +241,7 @@ if let Some(creds) = kms.get_provider_credentials("aws").await? {
// Credentials are automatically injected into environment variables // Credentials are automatically injected into environment variables
// or configuration files based on the injection configuration // or configuration files based on the injection configuration
} }
``` ```text
### Health Monitoring ### Health Monitoring
@ -256,7 +257,7 @@ println!("Credentials Status: {}", health.credentials.healthy);
let cache_stats = kms.cache.stats().await; let cache_stats = kms.cache.stats().await;
println!("Cache hit rate: {:.2}%", cache_stats.hit_rate() * 100.0); println!("Cache hit rate: {:.2}%", cache_stats.hit_rate() * 100.0);
println!("Cache entries: {}", cache_stats.entry_count); println!("Cache entries: {}", cache_stats.entry_count);
``` ```text
## Integration with Existing System ## Integration with Existing System
@ -270,7 +271,7 @@ export PROVISIONING_KMS_MODE=hybrid
export PROVISIONING_KMS_LOCAL_DATABASE_PATH=/var/lib/provisioning/kms.db export PROVISIONING_KMS_LOCAL_DATABASE_PATH=/var/lib/provisioning/kms.db
export PROVISIONING_KMS_REMOTE_SERVER_URL=https://kms.example.com:9998 export PROVISIONING_KMS_REMOTE_SERVER_URL=https://kms.example.com:9998
export PROVISIONING_KMS_CACHE_ENABLED=true export PROVISIONING_KMS_CACHE_ENABLED=true
``` ```text
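As an illustration of consuming these variables, a hedged sketch that selects the KMS mode from `PROVISIONING_KMS_MODE`; the enum name and the hybrid default are assumptions:

```rust
use std::env;

#[derive(Debug)]
enum KmsMode {
    Local,
    Remote,
    Hybrid,
}

fn kms_mode_from_env() -> KmsMode {
    match env::var("PROVISIONING_KMS_MODE").as_deref() {
        Ok("local") => KmsMode::Local,
        Ok("remote") => KmsMode::Remote,
        // Default to hybrid, matching the example configuration above (assumed).
        _ => KmsMode::Hybrid,
    }
}

fn main() {
    println!("KMS mode: {:?}", kms_mode_from_env());
}
```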
### TOML Configuration Integration ### TOML Configuration Integration
@ -284,7 +285,7 @@ local.database_path = "{{paths.base}}/data/kms.db"
cache.enabled = true cache.enabled = true
cache.local_dir = "{{paths.base}}/cache/kms" cache.local_dir = "{{paths.base}}/cache/kms"
audit.enabled = true audit.enabled = true
``` ```text
### Nushell Integration ### Nushell Integration
@ -304,7 +305,7 @@ def kms_health [] {
def kms_keys [] { def kms_keys [] {
http get http://localhost:8080/kms/keys | from json http get http://localhost:8080/kms/keys | from json
} }
``` ```text
## Security Considerations ## Security Considerations
@ -417,7 +418,7 @@ export PROVISIONING_LOG_LEVEL=debug
# Run with verbose output # Run with verbose output
./control-center --debug ./control-center --debug
``` ```text
### Health Checks ### Health Checks
@ -429,7 +430,7 @@ curl http://localhost:8080/kms/health
curl http://localhost:8080/kms/health/backend curl http://localhost:8080/kms/health/backend
curl http://localhost:8080/kms/health/cache curl http://localhost:8080/kms/health/cache
curl http://localhost:8080/kms/health/rotation curl http://localhost:8080/kms/health/rotation
``` ```text
## Future Enhancements ## Future Enhancements
@ -448,4 +449,5 @@ curl http://localhost:8080/kms/health/rotation
3. **Federated KMS**: Multi-organization key sharing 3. **Federated KMS**: Multi-organization key sharing
4. **Blockchain Integration**: Immutable audit trails 4. **Blockchain Integration**: Immutable audit trails
This hybrid KMS system provides a solid foundation for secure key management in the control center architecture, with room for future enhancements and customization based on specific requirements. This hybrid KMS system provides a solid foundation for secure key management in the control center architecture,
with room for future enhancements and customization based on specific requirements.

View File

@ -162,9 +162,7 @@ impl PolicyEngine {
match Policy::parse(Some(PolicyId::new(metadata.id.clone())), &policy_content) { match Policy::parse(Some(PolicyId::new(metadata.id.clone())), &policy_content) {
Ok(policy) => { Ok(policy) => {
policy_set.add(policy).map_err(|e| { policy_set.add(policy).map_err(|e| {
ControlCenterError::Policy(policy::PolicyError::Parsing( ControlCenterError::Policy(policy::PolicyError::Parsing(e.to_string()))
e.to_string(),
))
})?; })?;
loaded_count += 1; loaded_count += 1;
} }
@ -195,9 +193,7 @@ impl PolicyEngine {
if path.is_dir() { if path.is_dir() {
// Use Box::pin for recursive async calls to avoid infinite future size // Use Box::pin for recursive async calls to avoid infinite future size
count += Box::pin( count += Box::pin(self.load_policies_from_directory(policy_set, &path)).await?;
self.load_policies_from_directory(policy_set, &path)
).await?;
} else if path.extension().and_then(|s| s.to_str()) == Some("cedar") { } else if path.extension().and_then(|s| s.to_str()) == Some("cedar") {
count += self.load_policy_file(policy_set, &path).await?; count += self.load_policy_file(policy_set, &path).await?;
} }
@ -279,7 +275,8 @@ impl PolicyEngine {
.build_context(&context.environment) .build_context(&context.environment)
.await?; .await?;
let context = Context::from_json_value(serde_json::Value::Object(context_map), None).map_err(|e| { let context = Context::from_json_value(serde_json::Value::Object(context_map), None)
.map_err(|e| {
ControlCenterError::Policy(policy::PolicyError::Evaluation(format!( ControlCenterError::Policy(policy::PolicyError::Evaluation(format!(
"Context build error: {}", "Context build error: {}",
e e
@ -287,13 +284,7 @@ impl PolicyEngine {
})?; })?;
let schema = self.build_schema().await?; let schema = self.build_schema().await?;
Request::new( Request::new(principal, action, resource, context, Some(&schema)).map_err(|e| {
principal,
action,
resource,
context,
Some(&schema),
).map_err(|e| {
ControlCenterError::Policy(policy::PolicyError::Evaluation(format!( ControlCenterError::Policy(policy::PolicyError::Evaluation(format!(
"Request validation error: {}", "Request validation error: {}",
e e
@ -314,7 +305,8 @@ impl PolicyEngine {
let schema_path = &self.config.policies.schema_dir; let schema_path = &self.config.policies.schema_dir;
if schema_path.exists() { if schema_path.exists() {
// Implementation would load actual schema files from the directory // Implementation would load actual schema files from the directory
// For Cedar 4.8, we'd need to parse JSON schema files and construct Schema // For Cedar 4.8, we'd need to parse JSON schema files and construct
// Schema
} }
// Cedar 4.8 schema construction via JSON // Cedar 4.8 schema construction via JSON

View File

@ -283,7 +283,11 @@ impl PolicyTemplateManager {
} }
/// Render policy from template /// Render policy from template
pub fn render_policy(&mut self, template_id: &str, context: &TemplateContext) -> Result<String> { pub fn render_policy(
&mut self,
template_id: &str,
context: &TemplateContext,
) -> Result<String> {
let template = self.templates.get(template_id).ok_or_else(|| { let template = self.templates.get(template_id).ok_or_else(|| {
ControlCenterError::Policy(policy::PolicyError::Parsing(format!( ControlCenterError::Policy(policy::PolicyError::Parsing(format!(
"Template not found: {}", "Template not found: {}",

View File

@ -450,11 +450,7 @@ impl PolicyValidator {
if !result.errors.is_empty() { if !result.errors.is_empty() {
report.push_str("ERRORS:\n"); report.push_str("ERRORS:\n");
for error in &result.errors { for error in &result.errors {
report.push_str(&format!( report.push_str(&format!(" - {}: {}\n", error.error_type, error.message));
" - {}: {}\n",
error.error_type,
error.message
));
} }
report.push('\n'); report.push('\n');
} }
@ -464,9 +460,7 @@ impl PolicyValidator {
for warning in &result.warnings { for warning in &result.warnings {
report.push_str(&format!( report.push_str(&format!(
" - [{}] {}: {}\n", " - [{}] {}: {}\n",
warning.severity, warning.severity, warning.warning_type, warning.message
warning.warning_type,
warning.message
)); ));
} }
report.push('\n'); report.push('\n');
@ -477,8 +471,7 @@ impl PolicyValidator {
for suggestion in &result.suggestions { for suggestion in &result.suggestions {
report.push_str(&format!( report.push_str(&format!(
" - {}: {}\n", " - {}: {}\n",
suggestion.suggestion_type, suggestion.suggestion_type, suggestion.message
suggestion.message
)); ));
} }
} }

View File

@ -12,7 +12,9 @@ use uuid::Uuid;
use crate::error::{policy, ControlCenterError, Result}; use crate::error::{policy, ControlCenterError, Result};
use crate::policies::PolicyMetadata; use crate::policies::PolicyMetadata;
use crate::storage::{PolicyStorage, PolicyVersion as StoragePolicyVersion, RollbackResult as StorageRollbackResult}; use crate::storage::{
PolicyStorage, PolicyVersion as StoragePolicyVersion, RollbackResult as StorageRollbackResult,
};
/// Policy version information /// Policy version information
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
@ -83,7 +85,10 @@ pub struct RollbackResult {
} }
/// Convert storage PolicyVersion to versioning PolicyVersion /// Convert storage PolicyVersion to versioning PolicyVersion
fn convert_storage_version(storage_version: StoragePolicyVersion, metadata: Option<PolicyMetadata>) -> PolicyVersion { fn convert_storage_version(
storage_version: StoragePolicyVersion,
metadata: Option<PolicyMetadata>,
) -> PolicyVersion {
let metadata = metadata.unwrap_or_else(|| PolicyMetadata { let metadata = metadata.unwrap_or_else(|| PolicyMetadata {
id: storage_version.policy_id.clone(), id: storage_version.policy_id.clone(),
name: String::new(), name: String::new(),
@ -225,7 +230,8 @@ impl PolicyVersionManager {
policy_id: &str, policy_id: &str,
version_number: u32, version_number: u32,
) -> Result<Option<PolicyVersion>> { ) -> Result<Option<PolicyVersion>> {
let storage_version = self.storage let storage_version = self
.storage
.get_policy_version(policy_id, version_number) .get_policy_version(policy_id, version_number)
.await?; .await?;
Ok(storage_version.map(|v| convert_storage_version(v, None))) Ok(storage_version.map(|v| convert_storage_version(v, None)))
@ -240,7 +246,10 @@ impl PolicyVersionManager {
/// List all versions for a policy /// List all versions for a policy
pub async fn list_versions(&self, policy_id: &str) -> Result<Vec<PolicyVersion>> { pub async fn list_versions(&self, policy_id: &str) -> Result<Vec<PolicyVersion>> {
let storage_versions = self.storage.list_policy_versions(policy_id).await?; let storage_versions = self.storage.list_policy_versions(policy_id).await?;
Ok(storage_versions.into_iter().map(|v| convert_storage_version(v, None)).collect()) Ok(storage_versions
.into_iter()
.map(|v| convert_storage_version(v, None))
.collect())
} }
/// Get version history with changes /// Get version history with changes
@ -268,7 +277,9 @@ impl PolicyVersionManager {
summary: version.change_summary.clone(), summary: version.change_summary.clone(),
details: self.extract_change_details( details: self.extract_change_details(
&version.content, &version.content,
versions.get(i.saturating_sub(1)).map(|v| v.content.as_str()), versions
.get(i.saturating_sub(1))
.map(|v| v.content.as_str()),
), ),
impact_analysis, impact_analysis,
}); });
@ -401,10 +412,14 @@ impl PolicyVersionManager {
policy_id: &str, policy_id: &str,
tag: &str, tag: &str,
) -> Result<Vec<PolicyVersion>> { ) -> Result<Vec<PolicyVersion>> {
let storage_versions = self.storage let storage_versions = self
.storage
.get_policy_versions_by_tag(policy_id, tag) .get_policy_versions_by_tag(policy_id, tag)
.await?; .await?;
Ok(storage_versions.into_iter().map(|v| convert_storage_version(v, None)).collect()) Ok(storage_versions
.into_iter()
.map(|v| convert_storage_version(v, None))
.collect())
} }
/// Get latest version number /// Get latest version number

View File

@ -18,8 +18,8 @@ use serde::{Deserialize, Serialize};
// Import from surrealdb_storage (temporary placeholders) // Import from surrealdb_storage (temporary placeholders)
pub use surrealdb_storage::{PolicyMetadata, PolicyVersion, RollbackResult}; pub use surrealdb_storage::{PolicyMetadata, PolicyVersion, RollbackResult};
use crate::simple_config::Config;
use crate::error::{infrastructure, ControlCenterError, Result}; use crate::error::{infrastructure, ControlCenterError, Result};
use crate::simple_config::Config;
/// Policy storage trait /// Policy storage trait
#[async_trait] #[async_trait]

View File

@ -13,9 +13,9 @@ use serde::{Deserialize, Serialize};
use super::{ use super::{
ComplianceCheckResult, PolicyEvaluationEvent, PolicyMetrics, PolicySearchQuery, PolicyStorage, ComplianceCheckResult, PolicyEvaluationEvent, PolicyMetrics, PolicySearchQuery, PolicyStorage,
}; };
use crate::simple_config::Config;
use crate::error::{auth, policy, ControlCenterError, Result}; use crate::error::{auth, policy, ControlCenterError, Result};
use crate::services::secrets::SecretType; use crate::services::secrets::SecretType;
use crate::simple_config::Config;
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PolicyMetadata { pub struct PolicyMetadata {

View File

@ -20,7 +20,8 @@ use control_center::auth::{
AuthService, AuthService,
}; };
/// Generate RSA key pair for testing (pre-generated to avoid rand_core conflict) /// Generate RSA key pair for testing (pre-generated to avoid rand_core
/// conflict)
fn generate_test_keys() -> (Vec<u8>, Vec<u8>) { fn generate_test_keys() -> (Vec<u8>, Vec<u8>) {
let private_pem = b"-----BEGIN PRIVATE KEY----- let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
@ -61,10 +62,7 @@ WCemS72CI+y72SqwgaZ94eUO4WKV2agxmHeXQ8FhSd8ZFTJCTBKDW91Kykdo7yUx
IQIDAQAB IQIDAQAB
-----END PUBLIC KEY-----"; -----END PUBLIC KEY-----";
( (private_pem.to_vec(), public_pem.to_vec())
private_pem.to_vec(),
public_pem.to_vec(),
)
} }
/// Create JWT service for testing /// Create JWT service for testing

View File

@ -38,7 +38,7 @@ React/TypeScript frontend for the Control Center vault secrets management.
```bash ```bash
cd provisioning/platform/control-center/web cd provisioning/platform/control-center/web
npm install npm install
```plaintext ```text
## Development ## Development
@ -57,7 +57,7 @@ npm run lint
# Format code # Format code
npm run format npm run format
```plaintext ```text
## Environment Variables ## Environment Variables
@ -65,7 +65,7 @@ Create a `.env` file in the web directory:
```bash ```bash
REACT_APP_API_URL=http://localhost:8080 REACT_APP_API_URL=http://localhost:8080
```plaintext ```text
## Usage ## Usage
@ -81,7 +81,7 @@ function App() {
</div> </div>
); );
} }
```plaintext ```text
### API Client ### API Client
@ -119,7 +119,7 @@ const history = await secretsApi.getSecretHistory('database/prod/password');
// Restore version // Restore version
await secretsApi.restoreSecretVersion('database/prod/password', 2); await secretsApi.restoreSecretVersion('database/prod/password', 2);
```plaintext ```text
## Architecture ## Architecture
@ -137,7 +137,7 @@ SecretsManager (Orchestrator)
KMS Service (Encryption) KMS Service (Encryption)
RustyVault (Storage) RustyVault (Storage)
```plaintext ```text
## Security ## Security

View File

@ -13,7 +13,8 @@ Base URL: `http://localhost:8082/api/v1`
## Authentication ## Authentication
The Extension Registry API does not require authentication for read operations. Backend authentication (Gitea/OCI) is handled server-side via configuration. The Extension Registry API does not require authentication for read operations. Backend authentication (Gitea/OCI) is handled server-side via
configuration.
## Extension Endpoints ## Extension Endpoints
@ -36,7 +37,7 @@ Retrieve a list of available extensions with optional filtering and pagination.
```bash ```bash
curl "http://localhost:8082/api/v1/extensions?type=provider&limit=10" curl "http://localhost:8082/api/v1/extensions?type=provider&limit=10"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -69,7 +70,7 @@ curl "http://localhost:8082/api/v1/extensions?type=provider&limit=10"
"size": 890000 "size": 890000
} }
] ]
```plaintext ```text
--- ---
@ -90,7 +91,7 @@ Retrieve detailed metadata for a specific extension.
```bash ```bash
curl "http://localhost:8082/api/v1/extensions/provider/aws" curl "http://localhost:8082/api/v1/extensions/provider/aws"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -109,7 +110,7 @@ curl "http://localhost:8082/api/v1/extensions/provider/aws"
"size": 1024000, "size": 1024000,
"tags": ["cloud", "aws", "infrastructure"] "tags": ["cloud", "aws", "infrastructure"]
} }
```plaintext ```text
**Error Response** (404 Not Found): **Error Response** (404 Not Found):
@ -118,7 +119,7 @@ curl "http://localhost:8082/api/v1/extensions/provider/aws"
"error": "not_found", "error": "not_found",
"message": "Extension provider/nonexistent not found" "message": "Extension provider/nonexistent not found"
} }
```plaintext ```text
--- ---
@ -139,7 +140,7 @@ Get all available versions for a specific extension.
```bash ```bash
curl "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/versions" curl "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/versions"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -166,7 +167,7 @@ curl "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/versions"
"size": 1950000 "size": 1950000
} }
] ]
```plaintext ```text
--- ---
@ -188,7 +189,7 @@ Download a specific version of an extension.
```bash ```bash
curl -OJ "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0" curl -OJ "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0"
```plaintext ```text
**Response**: **Response**:
@ -202,7 +203,7 @@ curl -OJ "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0"
"error": "not_found", "error": "not_found",
"message": "Extension provider/aws version 1.2.0 not found" "message": "Extension provider/aws version 1.2.0 not found"
} }
```plaintext ```text
--- ---
@ -224,7 +225,7 @@ Search for extensions by name or description.
```bash ```bash
curl "http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv&limit=5" curl "http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv&limit=5"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -249,7 +250,7 @@ curl "http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv&
"published_at": "2025-09-20T14:30:00Z" "published_at": "2025-09-20T14:30:00Z"
} }
] ]
```plaintext ```text
--- ---
@ -265,7 +266,7 @@ Check service health and backend status.
```bash ```bash
curl "http://localhost:8082/api/v1/health" curl "http://localhost:8082/api/v1/health"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -286,7 +287,7 @@ curl "http://localhost:8082/api/v1/health"
} }
} }
} }
```plaintext ```text
**Degraded Status** (200 OK): **Degraded Status** (200 OK):
@ -307,7 +308,7 @@ curl "http://localhost:8082/api/v1/health"
} }
} }
} }
```plaintext ```text
--- ---
@ -321,7 +322,7 @@ Get Prometheus-formatted metrics.
```bash ```bash
curl "http://localhost:8082/api/v1/metrics" curl "http://localhost:8082/api/v1/metrics"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -349,7 +350,7 @@ cache_misses_total 247
# HELP extensions_total Total extensions # HELP extensions_total Total extensions
# TYPE extensions_total gauge # TYPE extensions_total gauge
extensions_total 45 extensions_total 45
```plaintext ```text
--- ---
@ -363,7 +364,7 @@ Get cache performance statistics.
```bash ```bash
curl "http://localhost:8082/api/v1/cache/stats" curl "http://localhost:8082/api/v1/cache/stats"
```plaintext ```text
**Example Response** (200 OK): **Example Response** (200 OK):
@ -374,7 +375,7 @@ curl "http://localhost:8082/api/v1/cache/stats"
"version_entries": 80, "version_entries": 80,
"total_entries": 245 "total_entries": 245
} }
```plaintext ```text
--- ---
@ -388,7 +389,7 @@ All error responses follow this format:
"message": "Human-readable error message", "message": "Human-readable error message",
"details": "Optional additional details" "details": "Optional additional details"
} }
```plaintext ```text
### HTTP Status Codes ### HTTP Status Codes
@ -435,19 +436,19 @@ interface Extension {
size?: number; // Size in bytes size?: number; // Size in bytes
tags?: string[]; // Tags tags?: string[]; // Tags
} }
```plaintext ```text
### ExtensionType ### ExtensionType
```typescript ```typescript
type ExtensionType = "provider" | "taskserv" | "cluster"; type ExtensionType = "provider" | "taskserv" | "cluster";
```plaintext ```text
### ExtensionSource ### ExtensionSource
```typescript ```typescript
type ExtensionSource = "gitea" | "oci"; type ExtensionSource = "gitea" | "oci";
```plaintext ```text
### ExtensionVersion ### ExtensionVersion
@ -459,7 +460,7 @@ interface ExtensionVersion {
checksum?: string; // Checksum checksum?: string; // Checksum
size?: number; // Size in bytes size?: number; // Size in bytes
} }
```plaintext ```text
### HealthResponse ### HealthResponse
@ -470,7 +471,7 @@ interface HealthResponse {
uptime: number; // Uptime in seconds uptime: number; // Uptime in seconds
backends: BackendHealth; // Backend health status backends: BackendHealth; // Backend health status
} }
```plaintext ```text
### BackendHealth ### BackendHealth
@ -479,7 +480,7 @@ interface BackendHealth {
gitea: BackendStatus; gitea: BackendStatus;
oci: BackendStatus; oci: BackendStatus;
} }
```plaintext ```text
### BackendStatus ### BackendStatus
@ -489,7 +490,7 @@ interface BackendStatus {
healthy: boolean; // Backend healthy healthy: boolean; // Backend healthy
error?: string; // Error message if unhealthy error?: string; // Error message if unhealthy
} }
```plaintext ```text
--- ---
@ -550,7 +551,7 @@ curl -OJ "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/1.28.0"
# 5. Verify checksum (if provided) # 5. Verify checksum (if provided)
sha256sum kubernetes_taskserv.tar.gz sha256sum kubernetes_taskserv.tar.gz
```plaintext ```text
### Pagination ### Pagination
@ -563,7 +564,7 @@ curl "http://localhost:8082/api/v1/extensions?limit=10&offset=10"
# Get third page # Get third page
curl "http://localhost:8082/api/v1/extensions?limit=10&offset=20" curl "http://localhost:8082/api/v1/extensions?limit=10&offset=20"
```plaintext ```text
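For clients that need every page, a hedged Rust sketch that walks the `limit`/`offset` parameters until an empty page comes back (the array response shape follows the examples above):

```rust
use reqwest::Client;
use serde_json::Value;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let limit: u64 = 10;
    let mut offset: u64 = 0;
    let mut all: Vec<Value> = Vec::new();

    loop {
        // GET /api/v1/extensions?limit=10&offset=N
        let page: Vec<Value> = client
            .get("http://localhost:8082/api/v1/extensions")
            .query(&[("limit", limit), ("offset", offset)])
            .send()
            .await?
            .error_for_status()?
            .json()
            .await?;

        if page.is_empty() {
            break;
        }
        offset += page.len() as u64;
        all.extend(page);
    }

    println!("fetched {} extensions", all.len());
    Ok(())
}
```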
### Filtering ### Filtering
@ -576,7 +577,7 @@ curl "http://localhost:8082/api/v1/extensions?type=taskserv&source=oci"
# All clusters # All clusters
curl "http://localhost:8082/api/v1/extensions?type=cluster" curl "http://localhost:8082/api/v1/extensions?type=cluster"
```plaintext ```text
--- ---

View File

@ -1,6 +1,7 @@
# Extension Registry Service # Extension Registry Service
A high-performance Rust microservice that provides a unified REST API for extension discovery, versioning, and download from multiple sources (Gitea releases and OCI registries). A high-performance Rust microservice that provides a unified REST API for extension discovery, versioning,
and download from multiple sources (Gitea releases and OCI registries).
## Features ## Features
@ -16,24 +17,29 @@ A high-performance Rust microservice that provides a unified REST API for extens
## Architecture ## Architecture
```plaintext ```plaintext
┌─────────────────────────────────────────────────────────────┐ ┌─────────────────────────────────────────────────────────────┐
│ Extension Registry API │ │ Extension Registry API │
│ (axum) │ │ (axum) │
├─────────────────────────────────────────────────────────────┤ ├─────────────────────────────────────────────────────────────┤
│ │ │ │
│  ┌────────────────┐  ┌────────────────┐  ┌──────────────┐   │ │  ┌────────────────┐  ┌────────────────┐  ┌──────────────┐   │
│ │ Gitea Client │ │ OCI Client │ │ LRU Cache │ │ │ │ Gitea Client │ │ OCI Client │ │ LRU Cache │ │
│ │ (reqwest) │ │ (reqwest) │ │ (parking) │ │ │ │ (reqwest) │ │ (reqwest) │ │ (parking) │ │
│  └────────────────┘  └────────────────┘  └──────────────┘   │ │  └────────────────┘  └────────────────┘  └──────────────┘   │
│ │ │ │ │ │ │ │ │ │
└─────────┼────────────────────┼────────────────────┼─────────┘ └─────────┼────────────────────┼────────────────────┼─────────┘
│ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Gitea │ │ OCI │ │ Memory │ │ Gitea │ │ OCI │ │ Memory │
│ Releases │ │ Registry │ │ │ │ Releases │ │ Registry │ │ │
└──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘
```plaintext ```text
## Installation ## Installation
@ -42,19 +48,19 @@ A high-performance Rust microservice that provides a unified REST API for extens
```bash ```bash
cd provisioning/platform/extension-registry cd provisioning/platform/extension-registry
cargo build --release cargo build --release
```plaintext ```text
### Docker Build ### Docker Build
```bash ```bash
docker build -t extension-registry:latest . docker build -t extension-registry:latest .
```plaintext ```text
### Running with Cargo ### Running with Cargo
```bash ```bash
cargo run -- --config config.toml --port 8082 cargo run -- --config config.toml --port 8082
```plaintext ```text
### Running with Docker ### Running with Docker
@ -64,7 +70,7 @@ docker run -d \
-v $(pwd)/config.toml:/app/config.toml:ro \ -v $(pwd)/config.toml:/app/config.toml:ro \
-v $(pwd)/tokens:/app/tokens:ro \ -v $(pwd)/tokens:/app/tokens:ro \
extension-registry:latest extension-registry:latest
```plaintext ```text
## Configuration ## Configuration
@ -100,7 +106,7 @@ capacity = 1000
ttl_seconds = 300 ttl_seconds = 300
enable_metadata_cache = true enable_metadata_cache = true
enable_list_cache = true enable_list_cache = true
```plaintext ```text
**Note**: At least one backend (Gitea or OCI) must be configured. **Note**: At least one backend (Gitea or OCI) must be configured.
@ -112,7 +118,7 @@ enable_list_cache = true
```bash ```bash
GET /api/v1/extensions GET /api/v1/extensions
```plaintext ```text
Query parameters: Query parameters:
@ -125,7 +131,7 @@ Example:
```bash ```bash
curl http://localhost:8082/api/v1/extensions?type=provider&limit=10 curl http://localhost:8082/api/v1/extensions?type=provider&limit=10
```plaintext ```text
Response: Response:
@ -144,31 +150,31 @@ Response:
"size": 1024000 "size": 1024000
} }
] ]
```plaintext ```text
#### Get Extension #### Get Extension
```bash ```bash
GET /api/v1/extensions/{type}/{name} GET /api/v1/extensions/{type}/{name}
```plaintext ```text
Example: Example:
```bash ```bash
curl http://localhost:8082/api/v1/extensions/provider/aws curl http://localhost:8082/api/v1/extensions/provider/aws
```plaintext ```text
#### List Versions #### List Versions
```bash ```bash
GET /api/v1/extensions/{type}/{name}/versions GET /api/v1/extensions/{type}/{name}/versions
```plaintext ```text
Example: Example:
```bash ```bash
curl http://localhost:8082/api/v1/extensions/provider/aws/versions curl http://localhost:8082/api/v1/extensions/provider/aws/versions
```plaintext ```text
Response: Response:
@ -187,19 +193,19 @@ Response:
"size": 980000 "size": 980000
} }
] ]
```plaintext ```text
#### Download Extension #### Download Extension
```bash ```bash
GET /api/v1/extensions/{type}/{name}/{version} GET /api/v1/extensions/{type}/{name}/{version}
```plaintext ```text
Example: Example:
```bash ```bash
curl -O http://localhost:8082/api/v1/extensions/provider/aws/1.2.0 curl -O http://localhost:8082/api/v1/extensions/provider/aws/1.2.0
```plaintext ```text
Returns binary data with `Content-Type: application/octet-stream`. Returns binary data with `Content-Type: application/octet-stream`.
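A hedged example of consuming this endpoint from Rust, assuming the registry listens on `localhost:8082` and that the `reqwest` and `tokio` (full features) crates are available; the output filename and archive format are illustrative assumptions:

```rust
// Minimal sketch: download an extension archive and write it to disk.
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let url = "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0";
    // The endpoint returns raw archive bytes (application/octet-stream).
    let bytes = reqwest::get(url).await?.error_for_status()?.bytes().await?;
    std::fs::write("aws-1.2.0.tar.gz", &bytes)?; // filename is an assumption
    println!("downloaded {} bytes", bytes.len());
    Ok(())
}
```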
@ -207,7 +213,7 @@ Returns binary data with `Content-Type: application/octet-stream`.
```bash ```bash
GET /api/v1/extensions/search?q={query} GET /api/v1/extensions/search?q={query}
```plaintext ```text
Query parameters: Query parameters:
@ -219,7 +225,7 @@ Example:
```bash ```bash
curl http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv curl http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv
```plaintext ```text
### System Endpoints ### System Endpoints
@ -227,13 +233,13 @@ curl http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv
```bash ```bash
GET /api/v1/health GET /api/v1/health
```plaintext ```text
Example: Example:
```bash ```bash
curl http://localhost:8082/api/v1/health curl http://localhost:8082/api/v1/health
```plaintext ```text
Response: Response:
@ -253,13 +259,13 @@ Response:
} }
} }
} }
```plaintext ```text
#### Metrics #### Metrics
```bash ```bash
GET /api/v1/metrics GET /api/v1/metrics
```plaintext ```text
Returns Prometheus-formatted metrics: Returns Prometheus-formatted metrics:
@ -275,13 +281,13 @@ cache_hits_total 567
# HELP cache_misses_total Total cache misses # HELP cache_misses_total Total cache misses
# TYPE cache_misses_total counter # TYPE cache_misses_total counter
cache_misses_total 123 cache_misses_total 123
```plaintext ```text
#### Cache Statistics #### Cache Statistics
```bash ```bash
GET /api/v1/cache/stats GET /api/v1/cache/stats
```plaintext ```text
Response: Response:
@ -292,7 +298,7 @@ Response:
"version_entries": 80, "version_entries": 80,
"total_entries": 245 "total_entries": 245
} }
```plaintext ```text
## Extension Naming Conventions ## Extension Naming Conventions
@ -351,7 +357,7 @@ Error response format:
"error": "not_found", "error": "not_found",
"message": "Extension provider/nonexistent not found" "message": "Extension provider/nonexistent not found"
} }
```plaintext ```text
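Clients can map this shape onto a small struct; a sketch using `serde`/`serde_json` (assumed dependencies), based only on the two fields shown above:

```rust
// Minimal sketch: deserialize the error payload shown above.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ApiError {
    error: String,   // machine-readable error kind, e.g. "not_found"
    message: String, // human-readable description
}

fn main() -> serde_json::Result<()> {
    let body = r#"{"error":"not_found","message":"Extension provider/nonexistent not found"}"#;
    let err: ApiError = serde_json::from_str(body)?;
    println!("{}: {}", err.error, err.message);
    Ok(())
}
```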
## Metrics and Monitoring ## Metrics and Monitoring
@ -403,7 +409,7 @@ extension-registry/
│ └── integration_test.rs # Integration tests │ └── integration_test.rs # Integration tests
├── Dockerfile # Docker build ├── Dockerfile # Docker build
└── README.md # This file └── README.md # This file
```plaintext ```text
### Running Tests ### Running Tests
@ -416,7 +422,7 @@ cargo test -- --nocapture
# Run specific test # Run specific test
cargo test test_health_check cargo test test_health_check
```plaintext ```text
### Code Quality ### Code Quality
@ -429,7 +435,7 @@ cargo clippy
# Check for security vulnerabilities # Check for security vulnerabilities
cargo audit cargo audit
```plaintext ```text
## Deployment ## Deployment
@ -452,7 +458,7 @@ RestartSec=5s
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
```plaintext ```text
Enable and start: Enable and start:
@ -460,7 +466,7 @@ Enable and start:
sudo systemctl enable extension-registry sudo systemctl enable extension-registry
sudo systemctl start extension-registry sudo systemctl start extension-registry
sudo systemctl status extension-registry sudo systemctl status extension-registry
```plaintext ```text
### Docker Compose ### Docker Compose
@ -482,7 +488,7 @@ services:
timeout: 3s timeout: 3s
retries: 3 retries: 3
start_period: 5s start_period: 5s
```plaintext ```text
### Kubernetes Deployment ### Kubernetes Deployment
@ -543,7 +549,7 @@ spec:
- port: 8082 - port: 8082
targetPort: 8082 targetPort: 8082
type: ClusterIP type: ClusterIP
```plaintext ```text
## Security ## Security
@ -608,13 +614,13 @@ Enable debug logging:
```bash ```bash
extension-registry --log-level debug extension-registry --log-level debug
```plaintext ```text
Enable JSON logging for structured logs: Enable JSON logging for structured logs:
```bash ```bash
extension-registry --json-log extension-registry --json-log
```plaintext ```text
## License ## License

View File

@ -4,14 +4,13 @@ use std::sync::Arc;
use tracing::info; use tracing::info;
use super::traits::{DistributionClient, SourceClient};
use super::{ForgejoClient, GitHubClient};
use crate::config::Config; use crate::config::Config;
use crate::error::{RegistryError, Result}; use crate::error::{RegistryError, Result};
use crate::gitea::GiteaClient as GiteaClientImpl; use crate::gitea::GiteaClient as GiteaClientImpl;
use crate::oci::OciClient as OciClientImpl; use crate::oci::OciClient as OciClientImpl;
use super::traits::{DistributionClient, SourceClient};
use super::{ForgejoClient, GitHubClient};
/// Factory for creating extension clients /// Factory for creating extension clients
pub struct ClientFactory; pub struct ClientFactory;
@ -19,10 +18,7 @@ impl ClientFactory {
/// Create all configured clients from configuration /// Create all configured clients from configuration
pub fn create_from_config( pub fn create_from_config(
config: &Config, config: &Config,
) -> Result<( ) -> Result<(Vec<Arc<dyn SourceClient>>, Vec<Arc<dyn DistributionClient>>)> {
Vec<Arc<dyn SourceClient>>,
Vec<Arc<dyn DistributionClient>>,
)> {
let mut source_clients: Vec<Arc<dyn SourceClient>> = Vec::new(); let mut source_clients: Vec<Arc<dyn SourceClient>> = Vec::new();
let mut distribution_clients: Vec<Arc<dyn DistributionClient>> = Vec::new(); let mut distribution_clients: Vec<Arc<dyn DistributionClient>> = Vec::new();

View File

@ -8,11 +8,10 @@
use async_trait::async_trait; use async_trait::async_trait;
use bytes::Bytes; use bytes::Bytes;
use super::traits::{BackendType, ExtensionClient, ReleaseInfo, SourceClient};
use crate::error::Result; use crate::error::Result;
use crate::models::{Extension, ExtensionType, ExtensionVersion}; use crate::models::{Extension, ExtensionType, ExtensionVersion};
use super::traits::{BackendType, ExtensionClient, ReleaseInfo, SourceClient};
/// Forgejo client (wraps GiteaClient since Forgejo API is Gitea-compatible) /// Forgejo client (wraps GiteaClient since Forgejo API is Gitea-compatible)
pub struct ForgejoClient { pub struct ForgejoClient {
backend_id: String, backend_id: String,
@ -44,7 +43,10 @@ impl ExtensionClient for ForgejoClient {
BackendType::Forgejo BackendType::Forgejo
} }
async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> { async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
self.inner.list_extensions(extension_type).await self.inner.list_extensions(extension_type).await
} }
@ -66,7 +68,9 @@ impl ExtensionClient for ForgejoClient {
name: &str, name: &str,
version: &str, version: &str,
) -> Result<Bytes> { ) -> Result<Bytes> {
self.inner.download_extension(extension_type, name, version).await self.inner
.download_extension(extension_type, name, version)
.await
} }
async fn health_check(&self) -> Result<()> { async fn health_check(&self) -> Result<()> {
@ -77,7 +81,11 @@ impl ExtensionClient for ForgejoClient {
/// SourceClient trait implementation for Forgejo /// SourceClient trait implementation for Forgejo
#[async_trait] #[async_trait]
impl SourceClient for ForgejoClient { impl SourceClient for ForgejoClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> { async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
self.inner.get_repository_url(extension_type, name).await self.inner.get_repository_url(extension_type, name).await
} }

View File

@ -1,19 +1,20 @@
//! GitHub client implementation //! GitHub client implementation
//! //!
//! Integrates with GitHub Releases API to fetch provisioning extensions. //! Integrates with GitHub Releases API to fetch provisioning extensions.
//! Extensions are identified as releases within repositories under a GitHub organization. //! Extensions are identified as releases within repositories under a GitHub
//! organization.
use std::time::Duration;
use async_trait::async_trait; use async_trait::async_trait;
use bytes::Bytes; use bytes::Bytes;
use reqwest::Client; use reqwest::Client;
use std::time::Duration;
use tracing::debug; use tracing::debug;
use super::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};
use crate::error::{RegistryError, Result}; use crate::error::{RegistryError, Result};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion}; use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};
use super::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};
/// GitHub configuration (reused GiteaConfig for compatibility) /// GitHub configuration (reused GiteaConfig for compatibility)
pub struct GitHubClient { pub struct GitHubClient {
backend_id: String, backend_id: String,
@ -121,7 +122,10 @@ impl ExtensionClient for GitHubClient {
BackendType::GitHub BackendType::GitHub
} }
async fn list_extensions(&self, _extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> { async fn list_extensions(
&self,
_extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
debug!( debug!(
"Fetching repositories for GitHub organization: {}", "Fetching repositories for GitHub organization: {}",
self.organization self.organization
@ -135,9 +139,14 @@ impl ExtensionClient for GitHubClient {
async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension> { async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension> {
let repo_name = self.format_repo_name(extension_type, name); let repo_name = self.format_repo_name(extension_type, name);
debug!("Fetching GitHub extension: {}/{}", self.organization, repo_name); debug!(
"Fetching GitHub extension: {}/{}",
self.organization, repo_name
);
let releases = self.list_releases_internal(&self.organization, &repo_name).await?; let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;
let latest = releases let latest = releases
.iter() .iter()
@ -181,9 +190,14 @@ impl ExtensionClient for GitHubClient {
name: &str, name: &str,
) -> Result<Vec<ExtensionVersion>> { ) -> Result<Vec<ExtensionVersion>> {
let repo_name = self.format_repo_name(extension_type, name); let repo_name = self.format_repo_name(extension_type, name);
debug!("Fetching GitHub versions for: {}/{}", self.organization, repo_name); debug!(
"Fetching GitHub versions for: {}/{}",
self.organization, repo_name
);
let releases = self.list_releases_internal(&self.organization, &repo_name).await?; let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;
Ok(releases Ok(releases
.into_iter() .into_iter()
@ -210,7 +224,9 @@ impl ExtensionClient for GitHubClient {
self.organization, repo_name, version self.organization, repo_name, version
); );
let releases = self.list_releases_internal(&self.organization, &repo_name).await?; let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;
let release = releases let release = releases
.iter() .iter()
@ -274,7 +290,11 @@ impl ExtensionClient for GitHubClient {
/// SourceClient trait implementation for GitHub /// SourceClient trait implementation for GitHub
#[async_trait] #[async_trait]
impl SourceClient for GitHubClient { impl SourceClient for GitHubClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> { async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
let repo_name = self.format_repo_name(extension_type, name); let repo_name = self.format_repo_name(extension_type, name);
Ok(format!( Ok(format!(
"https://github.com/{}/{}", "https://github.com/{}/{}",
@ -283,7 +303,9 @@ impl SourceClient for GitHubClient {
} }
async fn list_releases(&self, repo_name: &str) -> Result<Vec<ReleaseInfo>> { async fn list_releases(&self, repo_name: &str) -> Result<Vec<ReleaseInfo>> {
let releases = self.list_releases_internal(&self.organization, repo_name).await?; let releases = self
.list_releases_internal(&self.organization, repo_name)
.await?;
Ok(releases Ok(releases
.into_iter() .into_iter()
.map(|r| ReleaseInfo { .map(|r| ReleaseInfo {
@ -305,7 +327,9 @@ impl SourceClient for GitHubClient {
} }
async fn get_release_notes(&self, repo_name: &str, version: &str) -> Result<String> { async fn get_release_notes(&self, repo_name: &str, version: &str) -> Result<String> {
let releases = self.list_releases_internal(&self.organization, repo_name).await?; let releases = self
.list_releases_internal(&self.organization, repo_name)
.await?;
let release = releases let release = releases
.iter() .iter()
.find(|r| r.tag_name == version) .find(|r| r.tag_name == version)

View File

@ -33,7 +33,10 @@ pub trait ExtensionClient: Send + Sync {
fn backend_type(&self) -> BackendType; fn backend_type(&self) -> BackendType;
/// List all extensions available from this backend /// List all extensions available from this backend
async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>>; async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>>;
/// Get specific extension metadata /// Get specific extension metadata
async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension>; async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension>;
@ -65,7 +68,8 @@ pub trait ExtensionClient: Send + Sync {
#[async_trait] #[async_trait]
pub trait SourceClient: ExtensionClient { pub trait SourceClient: ExtensionClient {
/// Get repository URL for an extension /// Get repository URL for an extension
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String>; async fn get_repository_url(&self, extension_type: ExtensionType, name: &str)
-> Result<String>;
/// List all releases for a repository /// List all releases for a repository
/// ///

View File

@ -6,11 +6,13 @@ use reqwest::Client;
use tracing::debug; use tracing::debug;
use url::Url; use url::Url;
use crate::client::traits::{
BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient,
};
use crate::config::GiteaConfig; use crate::config::GiteaConfig;
use crate::error::{RegistryError, Result}; use crate::error::{RegistryError, Result};
use crate::gitea::models::{GiteaRelease, GiteaRepository}; use crate::gitea::models::{GiteaRelease, GiteaRepository};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion}; use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};
use crate::client::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};
/// Gitea API client /// Gitea API client
pub struct GiteaClient { pub struct GiteaClient {
@ -432,7 +434,10 @@ impl ExtensionClient for GiteaClient {
BackendType::Gitea BackendType::Gitea
} }
async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> { async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
GiteaClient::list_extensions(self, extension_type).await GiteaClient::list_extensions(self, extension_type).await
} }
@ -465,7 +470,11 @@ impl ExtensionClient for GiteaClient {
/// SourceClient trait implementation for Gitea /// SourceClient trait implementation for Gitea
#[async_trait] #[async_trait]
impl SourceClient for GiteaClient { impl SourceClient for GiteaClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> { async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
let repo_name = self.format_repo_name(extension_type, name); let repo_name = self.format_repo_name(extension_type, name);
let repo = self.get_repository(&repo_name).await?; let repo = self.get_repository(&repo_name).await?;
Ok(repo.html_url) Ok(repo.html_url)

View File

@ -6,11 +6,13 @@ use chrono::Utc;
use reqwest::Client; use reqwest::Client;
use tracing::debug; use tracing::debug;
use crate::client::traits::{
BackendType, DistributionClient, ExtensionClient, LayerInfo, ManifestInfo,
};
use crate::config::OciConfig; use crate::config::OciConfig;
use crate::error::{RegistryError, Result}; use crate::error::{RegistryError, Result};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion}; use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};
use crate::oci::models::{OciCatalog, OciManifest, OciTagsList}; use crate::oci::models::{OciCatalog, OciManifest, OciTagsList};
use crate::client::traits::{BackendType, DistributionClient, ExtensionClient, LayerInfo, ManifestInfo};
/// OCI registry client /// OCI registry client
pub struct OciClient { pub struct OciClient {
@ -416,7 +418,10 @@ impl ExtensionClient for OciClient {
BackendType::Oci BackendType::Oci
} }
async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> { async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
OciClient::list_extensions(self, extension_type).await OciClient::list_extensions(self, extension_type).await
} }
@ -451,8 +456,8 @@ impl ExtensionClient for OciClient {
impl DistributionClient for OciClient { impl DistributionClient for OciClient {
async fn get_manifest(&self, repo_name: &str, tag: &str) -> Result<ManifestInfo> { async fn get_manifest(&self, repo_name: &str, tag: &str) -> Result<ManifestInfo> {
let manifest = self.get_manifest(repo_name, tag).await?; let manifest = self.get_manifest(repo_name, tag).await?;
let total_size: u64 = manifest.layers.iter().map(|l| l.size).sum::<u64>() let total_size: u64 =
+ manifest.config.size; manifest.layers.iter().map(|l| l.size).sum::<u64>() + manifest.config.size;
Ok(ManifestInfo { Ok(ManifestInfo {
config_digest: manifest.config.digest, config_digest: manifest.config.digest,

View File

@ -1,8 +1,8 @@
use axum::body::Body; use axum::body::Body;
use axum::http::{Request, StatusCode}; use axum::http::{Request, StatusCode};
use extension_registry::{build_routes, AppState, Config}; use extension_registry::{build_routes, AppState, Config};
use tower::ServiceExt;
use http_body_util::BodyExt; use http_body_util::BodyExt;
use tower::ServiceExt;
#[tokio::test] #[tokio::test]
async fn test_health_check() { async fn test_health_check() {

View File

@ -2,7 +2,9 @@
## Overview ## Overview
A **Rust-native Model Context Protocol (MCP) server** for infrastructure automation and AI-assisted DevOps operations. This replaces the Python implementation, providing significant performance improvements and maintaining philosophical consistency with the Rust ecosystem approach.
## ✅ Project Status: **PROOF OF CONCEPT COMPLETE** ## ✅ Project Status: **PROOF OF CONCEPT COMPLETE**
@ -37,7 +39,7 @@ A **Rust-native Model Context Protocol (MCP) server** for infrastructure automat
• Configuration access: Microsecond latency • Configuration access: Microsecond latency
• Memory efficient: Small struct footprint • Memory efficient: Small struct footprint
• Zero-copy string operations where possible • Zero-copy string operations where possible
```plaintext ```text
### 🏗️ Architecture ### 🏗️ Architecture
@ -51,7 +53,7 @@ src/
├── tools.rs # AI-powered parsing tools ├── tools.rs # AI-powered parsing tools
├── errors.rs # Error handling ├── errors.rs # Error handling
└── performance_test.rs # Performance benchmarking └── performance_test.rs # Performance benchmarking
```plaintext ```text
### 🎲 Key Features ### 🎲 Key Features
@ -85,7 +87,7 @@ cargo test
# Run benchmarks # Run benchmarks
cargo run --bin provisioning-mcp-server --release cargo run --bin provisioning-mcp-server --release
```plaintext ```text
### 🔧 Configuration ### 🔧 Configuration
@ -96,7 +98,7 @@ export PROVISIONING_PATH=/path/to/provisioning
export PROVISIONING_AI_PROVIDER=openai export PROVISIONING_AI_PROVIDER=openai
export OPENAI_API_KEY=your-key export OPENAI_API_KEY=your-key
export PROVISIONING_DEBUG=true export PROVISIONING_DEBUG=true
```plaintext ```text
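A minimal sketch of how a process might read these variables at startup; the fallback values are illustrative assumptions, not project defaults:

```rust
// Minimal sketch: read the configuration variables listed above.
use std::env;

fn main() {
    let provisioning_path =
        env::var("PROVISIONING_PATH").unwrap_or_else(|_| "/opt/provisioning".into());
    let ai_provider =
        env::var("PROVISIONING_AI_PROVIDER").unwrap_or_else(|_| "openai".into());
    let debug = env::var("PROVISIONING_DEBUG").map(|v| v == "true").unwrap_or(false);
    // The API key may be absent at startup; tools that need it can fail later.
    let api_key = env::var("OPENAI_API_KEY").ok();

    println!("path={provisioning_path} provider={ai_provider} debug={debug}");
    println!("api key configured: {}", api_key.is_some());
}
```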
### 📈 Integration Benefits ### 📈 Integration Benefits

View File

@ -1,6 +1,7 @@
# Provisioning Orchestrator # Provisioning Orchestrator
A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration tools.
## Architecture ## Architecture
@ -36,7 +37,7 @@ The orchestrator implements a hybrid multi-storage approach:
cd src/orchestrator cd src/orchestrator
cargo build --release cargo build --release
cargo run -- --port 8080 --data-dir ./data cargo run -- --port 8080 --data-dir ./data
```plaintext ```text
**With SurrealDB Support**: **With SurrealDB Support**:
@ -51,7 +52,7 @@ cargo run --features surrealdb -- --storage-type surrealdb-embedded --data-dir .
cargo run --features surrealdb -- --storage-type surrealdb-server \ cargo run --features surrealdb -- --storage-type surrealdb-server \
--surrealdb-url ws://localhost:8000 \ --surrealdb-url ws://localhost:8000 \
--surrealdb-username admin --surrealdb-password secret --surrealdb-username admin --surrealdb-password secret
```plaintext ```text
### Submit a Server Creation Workflow ### Submit a Server Creation Workflow
@ -65,19 +66,19 @@ curl -X POST http://localhost:8080/workflows/servers/create \
"check_mode": false, "check_mode": false,
"wait": true "wait": true
}' }'
```plaintext ```text
### Check Task Status ### Check Task Status
```bash ```bash
curl http://localhost:8080/tasks/{task_id} curl http://localhost:8080/tasks/{task_id}
```plaintext ```text
### List All Tasks ### List All Tasks
```bash ```bash
curl http://localhost:8080/tasks curl http://localhost:8080/tasks
```plaintext ```text
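For scripting against these endpoints from Rust instead of `curl`, a hedged polling sketch (assumes the `reqwest` and `tokio` (full features) crates; the response body is printed raw rather than assuming a schema):

```rust
// Minimal sketch: poll a task by id and print whatever JSON the orchestrator
// returns, using only the endpoint path shown above.
use std::{error::Error, time::Duration};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let task_id = std::env::args().nth(1).expect("usage: poll <task_id>");
    let url = format!("http://localhost:8080/tasks/{task_id}");

    for _ in 0..5 {
        let body = reqwest::get(url.as_str()).await?.error_for_status()?.text().await?;
        println!("{body}");
        tokio::time::sleep(Duration::from_secs(2)).await;
    }
    Ok(())
}
```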
## API Endpoints ## API Endpoints
@ -107,7 +108,8 @@ curl http://localhost:8080/tasks
## Test Environment Service ## Test Environment Service
The orchestrator includes a comprehensive test environment service for automated containerized testing of taskservs, complete servers, and multi-node clusters.
### Overview ### Overview
@ -142,7 +144,7 @@ curl -X POST http://localhost:8080/test/environments/create \
"auto_start": true, "auto_start": true,
"auto_cleanup": false "auto_cleanup": false
}' }'
```plaintext ```text
#### 2. Server Simulation #### 2. Server Simulation
@ -161,7 +163,7 @@ curl -X POST http://localhost:8080/test/environments/create \
"infra": "prod-stack", "infra": "prod-stack",
"auto_start": true "auto_start": true
}' }'
```plaintext ```text
#### 3. Cluster Topology #### 3. Cluster Topology
@ -202,7 +204,7 @@ curl -X POST http://localhost:8080/test/environments/create \
}, },
"auto_start": true "auto_start": true
}' }'
```plaintext ```text
### Nushell CLI Integration ### Nushell CLI Integration
@ -232,7 +234,7 @@ provisioning test env logs <env-id>
# Cleanup # Cleanup
provisioning test env cleanup <env-id> provisioning test env cleanup <env-id>
```plaintext ```text
### Topology Templates ### Topology Templates
@ -274,7 +276,7 @@ Isolated Test Containers
• Resource limits • Resource limits
• Volume mounts • Volume mounts
• Multi-node support • Multi-node support
```plaintext ```text
### Key Components ### Key Components
@ -311,7 +313,7 @@ test-infrastructure:
- provisioning test quick kubernetes - provisioning test quick kubernetes
- provisioning test quick postgres - provisioning test quick postgres
- provisioning test quick redis - provisioning test quick redis
```plaintext ```text
### Documentation ### Documentation
@ -364,7 +366,7 @@ workflow status $task_id
# List all workflows # List all workflows
workflow list workflow list
```plaintext ```text
## Task States ## Task States
@ -415,7 +417,7 @@ Seamless migration between storage backends:
# Validate migration setup # Validate migration setup
./scripts/migrate-storage.nu validate --from filesystem --to surrealdb-server ./scripts/migrate-storage.nu validate --from filesystem --to surrealdb-server
```plaintext ```text
## Error Handling ## Error Handling
@ -473,7 +475,7 @@ cargo test --features surrealdb
cargo test --test storage_integration cargo test --test storage_integration
cargo test --test migration_tests cargo test --test migration_tests
cargo test --test factory_tests cargo test --test factory_tests
```plaintext ```text
**Performance Benchmarks**: **Performance Benchmarks**:
@ -487,7 +489,7 @@ cargo bench --bench migration_benchmarks
# Generate HTML reports # Generate HTML reports
cargo bench --features surrealdb cargo bench --features surrealdb
open target/criterion/reports/index.html open target/criterion/reports/index.html
```plaintext ```text
**Test Configuration**: **Test Configuration**:
@ -498,7 +500,7 @@ TEST_STORAGE=surrealdb-embedded cargo test --features surrealdb
# Verbose testing # Verbose testing
cargo test -- --nocapture cargo test -- --nocapture
```plaintext ```text
## Migration from Deep Call Stack Issues ## Migration from Deep Call Stack Issues
@ -509,4 +511,5 @@ This orchestrator solves the Nushell deep call stack limitations by:
3. Managing parallel execution externally 3. Managing parallel execution externally
4. Preserving all existing business logic in Nushell 4. Preserving all existing business logic in Nushell
The existing `on_create_servers` function can be replaced with `on_create_servers_workflow` for orchestrated execution while maintaining full compatibility.

View File

@ -32,7 +32,7 @@ The DNS integration module provides automatic DNS registration and management fo
│ CoreDNS │ │ CoreDNS │
│ Service │ │ Service │
└─────────────────┘ └─────────────────┘
```plaintext ```text
## Features ## Features
@ -44,7 +44,7 @@ When a server is created, the orchestrator automatically registers its DNS recor
// In server creation workflow // In server creation workflow
let ip = server.get_ip_address(); let ip = server.get_ip_address();
state.dns_manager.register_server_dns(&hostname, ip).await?; state.dns_manager.register_server_dns(&hostname, ip).await?;
```plaintext ```text
### 2. DNS Record Types ### 2. DNS Record Types
@ -64,7 +64,7 @@ let verified = state.dns_manager.verify_dns_resolution("server.example.com").awa
if verified { if verified {
info!("DNS resolution verified"); info!("DNS resolution verified");
} }
```plaintext ```text
### 4. Automatic Cleanup ### 4. Automatic Cleanup
@ -72,7 +72,7 @@ When a server is deleted, DNS records are automatically removed:
```rust ```rust
state.dns_manager.unregister_server_dns(&hostname).await?; state.dns_manager.unregister_server_dns(&hostname).await?;
```plaintext ```text
## Configuration ## Configuration
@ -83,7 +83,7 @@ DNS settings in `config.defaults.toml`:
coredns_url = "http://localhost:53" coredns_url = "http://localhost:53"
auto_register = true auto_register = true
ttl = 300 ttl = 300
```plaintext ```text
### Configuration Options ### Configuration Options
@ -97,7 +97,7 @@ ttl = 300
```http ```http
GET /api/v1/dns/records GET /api/v1/dns/records
```plaintext ```text
**Response:** **Response:**
@ -113,7 +113,7 @@ GET /api/v1/dns/records
} }
] ]
} }
```plaintext ```text
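A hedged Rust sketch of calling this endpoint and pretty-printing the payload without assuming a concrete schema (assumes the orchestrator API listens on `localhost:8080`, as in the workflow examples, and that `reqwest`, `tokio`, and `serde_json` are available):

```rust
// Hedged sketch: fetch the DNS record list and pretty-print it.
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let url = "http://localhost:8080/api/v1/dns/records"; // base URL is an assumption
    let body = reqwest::get(url).await?.error_for_status()?.text().await?;
    let value: serde_json::Value = serde_json::from_str(&body)?;
    println!("{}", serde_json::to_string_pretty(&value)?);
    Ok(())
}
```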
## Usage Examples ## Usage Examples
@ -124,20 +124,20 @@ use std::net::IpAddr;
let ip: IpAddr = "192.168.1.10".parse()?; let ip: IpAddr = "192.168.1.10".parse()?;
dns_manager.register_server_dns("web-01.example.com", ip).await?; dns_manager.register_server_dns("web-01.example.com", ip).await?;
```plaintext ```text
### Unregister Server DNS ### Unregister Server DNS
```rust ```rust
dns_manager.unregister_server_dns("web-01.example.com").await?; dns_manager.unregister_server_dns("web-01.example.com").await?;
```plaintext ```text
### Update DNS Record ### Update DNS Record
```rust ```rust
let new_ip: IpAddr = "192.168.1.20".parse()?; let new_ip: IpAddr = "192.168.1.20".parse()?;
dns_manager.update_dns_record("web-01.example.com", new_ip).await?; dns_manager.update_dns_record("web-01.example.com", new_ip).await?;
```plaintext ```text
### List All Records ### List All Records
@ -146,7 +146,7 @@ let records = dns_manager.list_records().await?;
for record in records { for record in records {
println!("{} -> {} ({})", record.name, record.value, record.record_type); println!("{} -> {} ({})", record.name, record.value, record.record_type);
} }
```plaintext ```text
## Integration with Workflows ## Integration with Workflows
@ -180,7 +180,7 @@ Run DNS integration tests:
```bash ```bash
cd provisioning/platform/orchestrator cd provisioning/platform/orchestrator
cargo test test_dns_integration cargo test test_dns_integration
```plaintext ```text
## Troubleshooting ## Troubleshooting

View File

@ -32,7 +32,7 @@ The extension loading module provides dynamic loading of providers, taskservs, a
│ Nushell Scripts │ │ Nushell Scripts │
│ (module load) │ │ (module load) │
└──────────────────┘ └──────────────────┘
```plaintext ```text
## Extension Types ## Extension Types
@ -46,7 +46,7 @@ let provider = extension_manager.load_extension(
"aws".to_string(), "aws".to_string(),
Some("2.0.0".to_string()) Some("2.0.0".to_string())
).await?; ).await?;
```plaintext ```text
### 2. Taskservs ### 2. Taskservs
@ -58,7 +58,7 @@ let taskserv = extension_manager.load_extension(
"kubernetes".to_string(), "kubernetes".to_string(),
None // Load latest version None // Load latest version
).await?; ).await?;
```plaintext ```text
### 3. Clusters ### 3. Clusters
@ -70,7 +70,7 @@ let cluster = extension_manager.load_extension(
"buildkit".to_string(), "buildkit".to_string(),
Some("1.0.0".to_string()) Some("1.0.0".to_string())
).await?; ).await?;
```plaintext ```text
## Features ## Features
@ -102,7 +102,7 @@ pub struct ExtensionMetadata {
pub author: Option<String>, pub author: Option<String>,
pub repository: Option<String>, pub repository: Option<String>,
} }
```plaintext ```text
### Version Management ### Version Management
@ -122,7 +122,7 @@ let ext = extension_manager.load_extension(
"kubernetes".to_string(), "kubernetes".to_string(),
None None
).await?; ).await?;
```plaintext ```text
## Configuration ## Configuration
@ -132,7 +132,7 @@ Extension settings in `config.defaults.toml`:
[orchestrator.extensions] [orchestrator.extensions]
auto_load = true auto_load = true
cache_dir = "{{orchestrator.paths.data_dir}}/extensions" cache_dir = "{{orchestrator.paths.data_dir}}/extensions"
```plaintext ```text
### Configuration Options ### Configuration Options
@ -145,7 +145,7 @@ cache_dir = "{{orchestrator.paths.data_dir}}/extensions"
```http ```http
GET /api/v1/extensions/loaded GET /api/v1/extensions/loaded
```plaintext ```text
**Response:** **Response:**
@ -168,7 +168,7 @@ GET /api/v1/extensions/loaded
} }
] ]
} }
```plaintext ```text
### Reload Extension ### Reload Extension
@ -180,7 +180,7 @@ Content-Type: application/json
"extension_type": "taskserv", "extension_type": "taskserv",
"name": "kubernetes" "name": "kubernetes"
} }
```plaintext ```text
**Response:** **Response:**
@ -189,7 +189,7 @@ Content-Type: application/json
"success": true, "success": true,
"data": "Extension kubernetes reloaded" "data": "Extension kubernetes reloaded"
} }
```plaintext ```text
## Usage Examples ## Usage Examples
@ -210,7 +210,7 @@ let extension = manager.load_extension(
).await?; ).await?;
println!("Loaded: {} v{}", extension.metadata.name, extension.metadata.version); println!("Loaded: {} v{}", extension.metadata.name, extension.metadata.version);
```plaintext ```text
### List Loaded Extensions ### List Loaded Extensions
@ -223,7 +223,7 @@ for ext in extensions {
ext.loaded_at ext.loaded_at
); );
} }
```plaintext ```text
### Reload Extension ### Reload Extension
@ -232,7 +232,7 @@ let extension = manager.reload_extension(
ExtensionType::Taskserv, ExtensionType::Taskserv,
"kubernetes".to_string() "kubernetes".to_string()
).await?; ).await?;
```plaintext ```text
### Check if Loaded ### Check if Loaded
@ -250,13 +250,13 @@ if !is_loaded {
None None
).await?; ).await?;
} }
```plaintext ```text
### Clear Cache ### Clear Cache
```rust ```rust
manager.clear_cache().await; manager.clear_cache().await;
```plaintext ```text
## Integration with Workflows ## Integration with Workflows
@ -282,7 +282,7 @@ for dep in &extension.metadata.dependencies {
} }
// Continue with installation... // Continue with installation...
```plaintext ```text
## Nushell Integration ## Nushell Integration
@ -297,7 +297,7 @@ provisioning module discover taskserv --output json
# Get extension metadata # Get extension metadata
provisioning module discover taskserv --name kubernetes --output json provisioning module discover taskserv --name kubernetes --output json
```plaintext ```text
## Error Handling ## Error Handling
@ -315,7 +315,7 @@ Run extension loading tests:
```bash ```bash
cd provisioning/platform/orchestrator cd provisioning/platform/orchestrator
cargo test test_extension_loading cargo test test_extension_loading
```plaintext ```text
## Troubleshooting ## Troubleshooting
@ -357,7 +357,7 @@ let total_loads = metrics.total_extension_loads;
let cache_hits = metrics.cache_hits; let cache_hits = metrics.cache_hits;
let hit_ratio = cache_hits as f64 / total_loads as f64; let hit_ratio = cache_hits as f64 / total_loads as f64;
println!("Cache hit ratio: {:.2}%", hit_ratio * 100.0); println!("Cache hit ratio: {:.2}%", hit_ratio * 100.0);
```plaintext ```text
### Loading Time ### Loading Time

View File

@ -32,7 +32,7 @@ The OCI integration module provides OCI Distribution Spec v2 compliant registry
│ OCI Registry │ │ OCI Registry │
│ (HTTP API v2) │ │ (HTTP API v2) │
└──────────────────┘ └──────────────────┘
```plaintext ```text
## Features ## Features
@ -45,7 +45,7 @@ let package_path = oci_manager.pull_kcl_package(
"provisioning-core", "provisioning-core",
"1.0.0" "1.0.0"
).await?; ).await?;
```plaintext ```text
### 2. Extension Artifacts ### 2. Extension Artifacts
@ -57,7 +57,7 @@ let artifact_path = oci_manager.pull_extension_artifact(
"kubernetes", // Extension name "kubernetes", // Extension name
"1.28.0" // Version "1.28.0" // Version
).await?; ).await?;
```plaintext ```text
### 3. Manifest Caching ### 3. Manifest Caching
@ -76,7 +76,7 @@ let artifacts = oci_manager.list_oci_artifacts("kcl").await?;
for artifact in artifacts { for artifact in artifacts {
println!("{} v{} ({})", artifact.name, artifact.version, artifact.size); println!("{} v{} ({})", artifact.name, artifact.version, artifact.size);
} }
```plaintext ```text
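Under the hood, artifact listing maps onto standard OCI Distribution Spec v2 calls. As an illustration, a hedged sketch that queries a registry's tag list directly with `reqwest`, `tokio`, `serde`, and `serde_json`; the registry address matches the default configuration shown in the next section, while the repository path is an assumption:

```rust
// Hedged sketch: list tags for one repository via the OCI Distribution Spec v2 API.
use serde::Deserialize;
use std::error::Error;

#[derive(Debug, Deserialize)]
struct TagsList {
    name: String,
    tags: Option<Vec<String>>, // null when the repository has no tags
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Registry address from the default config; repository path is illustrative.
    let url = "http://localhost:5000/v2/provisioning-extensions/kubernetes/tags/list";
    let body = reqwest::get(url).await?.error_for_status()?.text().await?;
    let tags: TagsList = serde_json::from_str(&body)?;
    println!("{}: {:?}", tags.name, tags.tags.unwrap_or_default());
    Ok(())
}
```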
## OCI Distribution Spec v2 ## OCI Distribution Spec v2
@ -96,7 +96,7 @@ OCI settings in `config.defaults.toml`:
registry_url = "http://localhost:5000" registry_url = "http://localhost:5000"
namespace = "provisioning-extensions" namespace = "provisioning-extensions"
cache_dir = "{{orchestrator.paths.data_dir}}/oci-cache" cache_dir = "{{orchestrator.paths.data_dir}}/oci-cache"
```plaintext ```text
### Configuration Options ### Configuration Options
@ -115,7 +115,7 @@ Content-Type: application/json
{ {
"namespace": "kcl" "namespace": "kcl"
} }
```plaintext ```text
**Response:** **Response:**
@ -133,7 +133,7 @@ Content-Type: application/json
} }
] ]
} }
```plaintext ```text
## Usage Examples ## Usage Examples
@ -159,7 +159,7 @@ println!("Package downloaded to: {}", package_path.display());
// Extract package // Extract package
// tar -xzf package_path // tar -xzf package_path
```plaintext ```text
### Pull Extension Artifact ### Pull Extension Artifact
@ -173,7 +173,7 @@ let artifact_path = oci_manager.pull_extension_artifact(
// Extract and install // Extract and install
// tar -xzf artifact_path -C /target/path // tar -xzf artifact_path -C /target/path
```plaintext ```text
### List Artifacts ### List Artifacts
@ -186,7 +186,7 @@ for artifact in artifacts {
println!(" Digest: {}", artifact.digest); println!(" Digest: {}", artifact.digest);
println!(); println!();
} }
```plaintext ```text
### Check Artifact Exists ### Check Artifact Exists
@ -201,7 +201,7 @@ if exists {
} else { } else {
println!("Artifact not found"); println!("Artifact not found");
} }
```plaintext ```text
### Get Manifest (with caching) ### Get Manifest (with caching)
@ -214,13 +214,13 @@ let manifest = oci_manager.get_manifest(
println!("Schema version: {}", manifest.schema_version); println!("Schema version: {}", manifest.schema_version);
println!("Media type: {}", manifest.media_type); println!("Media type: {}", manifest.media_type);
println!("Layers: {}", manifest.layers.len()); println!("Layers: {}", manifest.layers.len());
```plaintext ```text
### Clear Manifest Cache ### Clear Manifest Cache
```rust ```rust
oci_manager.clear_cache().await; oci_manager.clear_cache().await;
```plaintext ```text
## OCI Artifact Structure ## OCI Artifact Structure
@ -247,7 +247,7 @@ oci_manager.clear_cache().await;
"org.opencontainers.image.version": "1.0.0" "org.opencontainers.image.version": "1.0.0"
} }
} }
```plaintext ```text
## Integration with Workflows ## Integration with Workflows
@ -284,7 +284,7 @@ async fn install_taskserv_from_oci(
Ok(()) Ok(())
} }
```plaintext ```text
## Cache Management ## Cache Management
@ -305,7 +305,7 @@ async fn install_taskserv_from_oci(
│ └── aws/ │ └── aws/
│ └── 2.0.0/ │ └── 2.0.0/
│ └── artifact.tar.gz │ └── artifact.tar.gz
```plaintext ```text
### Cache Cleanup ### Cache Cleanup
@ -330,7 +330,7 @@ async fn cleanup_old_artifacts(cache_dir: &Path, max_age_days: u64) -> Result<()
Ok(()) Ok(())
} }
```plaintext ```text
## Error Handling ## Error Handling
@ -348,7 +348,7 @@ Run OCI integration tests:
```bash ```bash
cd provisioning/platform/orchestrator cd provisioning/platform/orchestrator
cargo test test_oci_integration cargo test test_oci_integration
```plaintext ```text
## Troubleshooting ## Troubleshooting

View File

@ -32,7 +32,7 @@ The service orchestration module manages platform services with dependency-based
│ Platform Services │ │ Platform Services │
│ (CoreDNS, OCI, etc) │ │ (CoreDNS, OCI, etc) │
└──────────────────────┘ └──────────────────────┘
```plaintext ```text
## Features ## Features
@ -46,7 +46,7 @@ let order = service_orchestrator.resolve_startup_order(&[
]).await?; ]).await?;
// Returns: ["service-a", "service-b", "service-c"] // Returns: ["service-a", "service-b", "service-c"]
```plaintext ```text
### 2. Automatic Dependency Startup ### 2. Automatic Dependency Startup
@ -57,7 +57,7 @@ When enabled, dependencies are started automatically:
service_orchestrator.start_service("web-app").await?; service_orchestrator.start_service("web-app").await?;
// Automatically starts: database -> cache -> web-app // Automatically starts: database -> cache -> web-app
```plaintext ```text
### 3. Health Checking ### 3. Health Checking
@ -69,7 +69,7 @@ let health = service_orchestrator.check_service_health("web-app").await?;
if health.healthy { if health.healthy {
println!("Service is healthy: {}", health.message); println!("Service is healthy: {}", health.message);
} }
```plaintext ```text
### 4. Service Status ### 4. Service Status
@ -84,7 +84,7 @@ match status {
ServiceStatus::Failed => println!("Service has failed"), ServiceStatus::Failed => println!("Service has failed"),
ServiceStatus::Unknown => println!("Service status unknown"), ServiceStatus::Unknown => println!("Service status unknown"),
} }
```plaintext ```text
## Service Definition ## Service Definition
@ -99,7 +99,7 @@ pub struct Service {
pub stop_command: String, pub stop_command: String,
pub health_check_endpoint: Option<String>, pub health_check_endpoint: Option<String>,
} }
```plaintext ```text
### Example Service Definition ### Example Service Definition
@ -112,7 +112,7 @@ let coredns_service = Service {
stop_command: "systemctl stop coredns".to_string(), stop_command: "systemctl stop coredns".to_string(),
health_check_endpoint: Some("http://localhost:53/health".to_string()), health_check_endpoint: Some("http://localhost:53/health".to_string()),
}; };
```plaintext ```text
### Service with Dependencies ### Service with Dependencies
@ -125,7 +125,7 @@ let oci_registry = Service {
stop_command: "systemctl stop oci-registry".to_string(), stop_command: "systemctl stop oci-registry".to_string(),
health_check_endpoint: Some("http://localhost:5000/v2/".to_string()), health_check_endpoint: Some("http://localhost:5000/v2/".to_string()),
}; };
```plaintext ```text
## Configuration ## Configuration
@ -135,7 +135,7 @@ Service orchestration settings in `config.defaults.toml`:
[orchestrator.services] [orchestrator.services]
manager_enabled = true manager_enabled = true
auto_start_dependencies = true auto_start_dependencies = true
```plaintext ```text
### Configuration Options ### Configuration Options
@ -148,7 +148,7 @@ auto_start_dependencies = true
```http ```http
GET /api/v1/services/list GET /api/v1/services/list
```plaintext ```text
**Response:** **Response:**
@ -166,13 +166,13 @@ GET /api/v1/services/list
} }
] ]
} }
```plaintext ```text
### Get Services Status ### Get Services Status
```http ```http
GET /api/v1/services/status GET /api/v1/services/status
```plaintext ```text
**Response:** **Response:**
@ -190,7 +190,7 @@ GET /api/v1/services/status
} }
] ]
} }
```plaintext ```text
## Usage Examples ## Usage Examples
@ -228,14 +228,14 @@ let oci = Service {
}; };
orchestrator.register_service(oci).await; orchestrator.register_service(oci).await;
```plaintext ```text
### Start Service with Dependencies ### Start Service with Dependencies
```rust ```rust
// This will automatically start coredns first, then oci-registry // This will automatically start coredns first, then oci-registry
orchestrator.start_service("oci-registry").await?; orchestrator.start_service("oci-registry").await?;
```plaintext ```text
### Resolve Startup Order ### Resolve Startup Order
@ -251,7 +251,7 @@ println!("Startup order:");
for (i, service) in order.iter().enumerate() { for (i, service) in order.iter().enumerate() {
println!("{}. {}", i + 1, service); println!("{}. {}", i + 1, service);
} }
```plaintext ```text
### Start All Services ### Start All Services
@ -262,7 +262,7 @@ println!("Started {} services:", started.len());
for service in started { for service in started {
println!(" ✓ {}", service); println!(" ✓ {}", service);
} }
```plaintext ```text
### Check Service Health ### Check Service Health
@ -277,7 +277,7 @@ if health.healthy {
println!("✗ {} is unhealthy", "coredns"); println!("✗ {} is unhealthy", "coredns");
println!(" Message: {}", health.message); println!(" Message: {}", health.message);
} }
```plaintext ```text
## Dependency Graph Examples ## Dependency Graph Examples
@ -285,7 +285,7 @@ if health.healthy {
```plaintext ```plaintext
A -> B -> C A -> B -> C
```plaintext ```text
Startup order: A, B, C Startup order: A, B, C
@ -293,7 +293,7 @@ Startup order: A, B, C
let a = Service { name: "a".to_string(), dependencies: vec![], /* ... */ }; let a = Service { name: "a".to_string(), dependencies: vec![], /* ... */ };
let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ }; let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /* ... */ }; let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /* ... */ };
```plaintext ```text
### Diamond Dependency ### Diamond Dependency
@ -303,7 +303,7 @@ let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /*
B C B C
\ / \ /
D D
```plaintext ```text
Startup order: A, B, C, D (B and C can start in parallel) Startup order: A, B, C, D (B and C can start in parallel)
@ -312,7 +312,7 @@ let a = Service { name: "a".to_string(), dependencies: vec![], /* ... */ };
let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ }; let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let c = Service { name: "c".to_string(), dependencies: vec!["a".to_string()], /* ... */ }; let c = Service { name: "c".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let d = Service { name: "d".to_string(), dependencies: vec!["b".to_string(), "c".to_string()], /* ... */ }; let d = Service { name: "d".to_string(), dependencies: vec!["b".to_string(), "c".to_string()], /* ... */ };
```plaintext ```text
### Complex Dependency ### Complex Dependency
@ -326,7 +326,7 @@ let d = Service { name: "d".to_string(), dependencies: vec!["b".to_string(), "c"
E F E F
\ / \ /
G G
```plaintext ```text
Startup order: A, B, C, D, E, F, G Startup order: A, B, C, D, E, F, G
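The startup-order resolution illustrated above is essentially a topological sort of the dependency graph. A self-contained sketch using Kahn's algorithm (an illustration of the idea, not the orchestrator's actual `resolve_startup_order` implementation):

```rust
// Illustrative dependency-ordered startup via Kahn's algorithm (std only).
use std::collections::{HashMap, VecDeque};

fn startup_order(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, String> {
    // in-degree = number of not-yet-started dependencies for each service
    let mut indegree: HashMap<&str, usize> =
        deps.iter().map(|(&svc, ds)| (svc, ds.len())).collect();
    // reverse edges: dependency -> services waiting on it
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&svc, ds) in deps {
        for &dep in ds {
            dependents.entry(dep).or_default().push(svc);
        }
    }
    // services with no dependencies can start immediately
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|(_, d)| **d == 0)
        .map(|(s, _)| *s)
        .collect();
    let mut order = Vec::new();
    while let Some(svc) = ready.pop_front() {
        order.push(svc.to_string());
        for &waiting in dependents.get(svc).map(Vec::as_slice).unwrap_or(&[]) {
            let remaining = indegree.get_mut(waiting).expect("known service");
            *remaining -= 1;
            if *remaining == 0 {
                ready.push_back(waiting);
            }
        }
    }
    if order.len() == deps.len() {
        Ok(order)
    } else {
        Err("circular dependency detected".into())
    }
}

fn main() {
    let deps = HashMap::from([
        ("coredns", vec![]),
        ("oci-registry", vec!["coredns"]),
        ("orchestrator", vec!["coredns", "oci-registry"]),
    ]);
    // One valid order: coredns, oci-registry, orchestrator
    println!("{:?}", startup_order(&deps).unwrap());
}
```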
@ -343,7 +343,7 @@ let coredns = Service {
stop_command: "systemctl stop coredns".to_string(), stop_command: "systemctl stop coredns".to_string(),
health_check_endpoint: Some("http://localhost:53/health".to_string()), health_check_endpoint: Some("http://localhost:53/health".to_string()),
}; };
```plaintext ```text
### OCI Registry Service ### OCI Registry Service
@ -356,7 +356,7 @@ let oci_registry = Service {
stop_command: "systemctl stop oci-registry".to_string(), stop_command: "systemctl stop oci-registry".to_string(),
health_check_endpoint: Some("http://localhost:5000/v2/".to_string()), health_check_endpoint: Some("http://localhost:5000/v2/".to_string()),
}; };
```plaintext ```text
### Orchestrator Service ### Orchestrator Service
@ -369,7 +369,7 @@ let orchestrator = Service {
stop_command: "./scripts/start-orchestrator.nu --stop".to_string(), stop_command: "./scripts/start-orchestrator.nu --stop".to_string(),
health_check_endpoint: Some("http://localhost:9090/health".to_string()), health_check_endpoint: Some("http://localhost:9090/health".to_string()),
}; };
```plaintext ```text
## Error Handling ## Error Handling
@ -391,7 +391,7 @@ let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /*
// Error: Circular dependency detected // Error: Circular dependency detected
let result = orchestrator.resolve_startup_order(&["a".to_string()]).await; let result = orchestrator.resolve_startup_order(&["a".to_string()]).await;
assert!(result.is_err()); assert!(result.is_err());
```plaintext ```text
## Testing ## Testing
@ -400,7 +400,7 @@ Run service orchestration tests:
```bash ```bash
cd provisioning/platform/orchestrator cd provisioning/platform/orchestrator
cargo test test_service_orchestration cargo test test_service_orchestration
```plaintext ```text
## Troubleshooting ## Troubleshooting

View File

@ -2,7 +2,8 @@
## Overview ## Overview
The SSH Temporal Key Management System provides automated generation, deployment, and cleanup of short-lived SSH keys for secure server access. It eliminates the need for static SSH keys by generating keys on-demand with automatic expiration.
## Features ## Features
@ -30,25 +31,30 @@ The SSH Temporal Key Management System provides automated generation, deployment
## Architecture ## Architecture
```plaintext ```plaintext
┌─────────────────────────────────────────────────────────────┐
│                       SSH Key Manager                        │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │ Key Generator│  │ Key Deployer │  │   Temporal   │       │
│  │  (Ed25519)   │  │ (SSH Deploy) │  │   Manager    │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
│                                                              │
│  ┌──────────────┐  ┌──────────────┐                         │
│  │    Vault     │  │  Authorized  │                         │
│  │  SSH Engine  │  │ Keys Manager │                         │
│  └──────────────┘  └──────────────┘                         │
│                                                              │
└─────────────────────────────────────────────────────────────┘
        │                  │                    │
        ▼                  ▼                    ▼
    REST API          Nushell CLI       Background Tasks
```plaintext ```text
## Key Types ## Key Types
@ -62,7 +68,7 @@ Generated on-demand Ed25519 keys that are automatically deployed and cleaned up.
```bash ```bash
ssh generate-key server.example.com --user root --ttl 30min ssh generate-key server.example.com --user root --ttl 30min
```plaintext ```text
### 2. Vault OTP (One-Time Password) ### 2. Vault OTP (One-Time Password)
@ -76,7 +82,7 @@ Vault generates a one-time password for SSH authentication.
```bash ```bash
ssh generate-key server.example.com --type otp --ip 192.168.1.100 ssh generate-key server.example.com --type otp --ip 192.168.1.100
```plaintext ```text
### 3. Vault CA-Signed Certificates ### 3. Vault CA-Signed Certificates
@ -90,7 +96,7 @@ Vault acts as SSH CA, signing user public keys with short TTL.
```bash ```bash
ssh generate-key server.example.com --type ca --principal admin --ttl 1hr ssh generate-key server.example.com --type ca --principal admin --ttl 1hr
```plaintext ```text
## REST API Endpoints ## REST API Endpoints
@ -126,7 +132,7 @@ Response:
"deployed": false "deployed": false
} }
} }
```plaintext ```text
### Deploy SSH Key ### Deploy SSH Key
@ -143,7 +149,7 @@ Response:
"deployed_at": "2024-01-01T00:00:00Z" "deployed_at": "2024-01-01T00:00:00Z"
} }
} }
```plaintext ```text
### List SSH Keys ### List SSH Keys
@ -164,7 +170,7 @@ Response:
} }
] ]
} }
```plaintext ```text
### Revoke SSH Key ### Revoke SSH Key
@ -176,7 +182,7 @@ Response:
"success": true, "success": true,
"data": "Key uuid revoked successfully" "data": "Key uuid revoked successfully"
} }
```plaintext ```text
### Get SSH Key ### Get SSH Key
@ -192,7 +198,7 @@ Response:
... ...
} }
} }
```plaintext ```text
### Cleanup Expired Keys ### Cleanup Expired Keys
@ -207,7 +213,7 @@ Response:
"cleaned_key_ids": ["uuid1", "uuid2", ...] "cleaned_key_ids": ["uuid1", "uuid2", ...]
} }
} }
```plaintext ```text
### Get Statistics ### Get Statistics
@ -230,7 +236,7 @@ Response:
"last_cleanup_at": "2024-01-01T00:00:00Z" "last_cleanup_at": "2024-01-01T00:00:00Z"
} }
} }
```plaintext ```text
## Nushell CLI Commands ## Nushell CLI Commands
@ -250,7 +256,7 @@ Examples:
ssh generate-key server.example.com ssh generate-key server.example.com
ssh generate-key server.example.com --user deploy --ttl 30min ssh generate-key server.example.com --user deploy --ttl 30min
ssh generate-key server.example.com --type ca --principal admin ssh generate-key server.example.com --type ca --principal admin
```plaintext ```text
### Deploy Key ### Deploy Key
@ -259,7 +265,7 @@ ssh deploy-key <key_id>
Example: Example:
ssh deploy-key abc-123-def-456 ssh deploy-key abc-123-def-456
```plaintext ```text
### List Keys ### List Keys
@ -269,7 +275,7 @@ ssh list-keys [--expired]
Example: Example:
ssh list-keys ssh list-keys
ssh list-keys | where deployed == true ssh list-keys | where deployed == true
```plaintext ```text
### Revoke Key ### Revoke Key
@ -278,7 +284,7 @@ ssh revoke-key <key_id>
Example: Example:
ssh revoke-key abc-123-def-456 ssh revoke-key abc-123-def-456
```plaintext ```text
### Connect with Auto-Generated Key ### Connect with Auto-Generated Key
@ -293,7 +299,7 @@ Options:
Example: Example:
ssh connect server.example.com --user deploy ssh connect server.example.com --user deploy
```plaintext ```text
This command: This command:
@ -320,7 +326,7 @@ Example output:
Last cleanup: 2024-01-01T00:00:00Z Last cleanup: 2024-01-01T00:00:00Z
Cleaned keys: 5 Cleaned keys: 5
```plaintext ```text
### Manual Cleanup ### Manual Cleanup
@ -333,7 +339,7 @@ Example output:
- abc-123 - abc-123
- def-456 - def-456
... ...
```plaintext ```text
## Configuration ## Configuration
@ -361,7 +367,7 @@ let ssh_manager = Arc::new(SshKeyManager::new(ssh_config).await?);
// Start background cleanup task // Start background cleanup task
Arc::clone(&ssh_manager).start_cleanup_task().await; Arc::clone(&ssh_manager).start_cleanup_task().await;
```plaintext ```text
### Vault SSH Configuration ### Vault SSH Configuration
@ -376,7 +382,7 @@ vault write ssh/roles/otp_key_role \
key_type=otp \ key_type=otp \
default_user=root \ default_user=root \
cidr_list=0.0.0.0/0 cidr_list=0.0.0.0/0
```plaintext ```text
#### CA Mode #### CA Mode
@ -398,13 +404,13 @@ vault write ssh/roles/default \
# Get CA public key (add to servers' /etc/ssh/trusted-user-ca-keys.pem) # Get CA public key (add to servers' /etc/ssh/trusted-user-ca-keys.pem)
vault read -field=public_key ssh/config/ca vault read -field=public_key ssh/config/ca
```plaintext ```text
Server configuration (`/etc/ssh/sshd_config`): Server configuration (`/etc/ssh/sshd_config`):
```plaintext ```plaintext
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
```plaintext ```text
## Deployment ## Deployment
@ -423,7 +429,7 @@ export VAULT_TOKEN=your-vault-token
# Provisioning SSH key path # Provisioning SSH key path
export PROVISIONING_SSH_KEY=/path/to/provisioning/key export PROVISIONING_SSH_KEY=/path/to/provisioning/key
```plaintext ```text
### Integration with Workflows ### Integration with Workflows
@ -439,7 +445,7 @@ ssh root@$server "install-kubernetes.sh"
# Auto-revoke after workflow # Auto-revoke after workflow
ssh revoke-key $ssh_key.id ssh revoke-key $ssh_key.id
```plaintext ```text
## Security Considerations ## Security Considerations
@ -458,13 +464,13 @@ Check SSH connectivity:
```bash ```bash
ssh -i /path/to/provisioning/key root@server.example.com ssh -i /path/to/provisioning/key root@server.example.com
```plaintext ```text
Verify SSH daemon is running: Verify SSH daemon is running:
```bash ```bash
systemctl status sshd systemctl status sshd
```plaintext ```text
### Cleanup Not Working ### Cleanup Not Working
@ -472,13 +478,13 @@ Check orchestrator logs:
```bash ```bash
tail -f ./data/orchestrator.log | grep SSH tail -f ./data/orchestrator.log | grep SSH
```plaintext ```text
Manual cleanup: Manual cleanup:
```bash ```bash
ssh cleanup ssh cleanup
```plaintext ```text
### Vault Integration Issues ### Vault Integration Issues
@ -487,14 +493,14 @@ Test Vault connectivity:
```bash ```bash
vault status vault status
vault token lookup vault token lookup
```plaintext ```text
Check SSH secrets engine: Check SSH secrets engine:
```bash ```bash
vault secrets list vault secrets list
vault read ssh/config/ca vault read ssh/config/ca
```plaintext ```text
## Performance ## Performance

View File

@@ -45,7 +45,7 @@ The default storage backend using JSON files for task persistence. Ideal for dev
 # Custom data directory
 ./orchestrator --storage-type filesystem --data-dir /var/lib/orchestrator
-```plaintext
+```text
 ### File Structure
@@ -60,7 +60,7 @@ data/
 ├── uuid1.json # Queue entries with priority
 ├── uuid2.json
 └── ...
-```plaintext
+```text
 ### Features
@@ -93,7 +93,7 @@ cargo build --features surrealdb
 # Run with embedded SurrealDB
 ./orchestrator --storage-type surrealdb-embedded --data-dir ./data
-```plaintext
+```text
 ### Database Schema
@@ -126,7 +126,7 @@ cargo build --features surrealdb
   --data-dir ./data \
   --surrealdb-namespace production \
   --surrealdb-database orchestrator
-```plaintext
+```text
 ### Best Use Cases
@@ -162,7 +162,7 @@ surreal start --log trace --user root --pass root file:orchestrator.db
 # Or with TiKV (distributed)
 surreal start --log trace --user root --pass root tikv://localhost:2379
-```plaintext
+```text
 ### Configuration
@@ -180,7 +180,7 @@ surreal start --log trace --user root --pass root tikv://localhost:2379
   --surrealdb-database orchestrator \
   --surrealdb-username orchestrator-service \
   --surrealdb-password "$SURREALDB_PASSWORD"
-```plaintext
+```text
 ### Features
@@ -223,7 +223,7 @@ Use the migration script to move data between any backend combination:
 # Validation and dry-run
 ./scripts/migrate-storage.nu validate --from filesystem --to surrealdb-embedded
 ./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded --dry-run
-```plaintext
+```text
 ### Migration Features
@@ -242,7 +242,7 @@ Use the migration script to move data between any backend combination:
 ./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded \
   --source-dir ./dev-data --target-dir ./prod-data \
   --batch-size 100 --verify
-```plaintext
+```text
 #### Scaling Up
@@ -253,7 +253,7 @@ Use the migration script to move data between any backend combination:
   --surrealdb-url ws://production-surreal:8000 \
   --username orchestrator --password "$PROD_PASSWORD" \
   --namespace production --database main
-```plaintext
+```text
 #### Disaster Recovery
@@ -263,7 +263,7 @@ Use the migration script to move data between any backend combination:
   --surrealdb-url ws://failing-server:8000 \
   --username admin --password "$PASSWORD" \
   --target-dir ./emergency-backup
-```plaintext
+```text
 ## Performance Considerations
@@ -319,7 +325,7 @@ chmod -R 755 ./data
 # Corrupted JSON files
 rm ./data/queue.rkvs/tasks/corrupted-file.json
-```plaintext
+```text
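Before deleting anything, a couple of non-destructive checks (assuming the default paths above and that `jq` is installed) show how many task files remain and which ones no longer parse:

```bash
# Count persisted tasks and flag any file that is not valid JSON
ls ./data/queue.rkvs/tasks/*.json | wc -l
for f in ./data/queue.rkvs/tasks/*.json; do
  jq empty "$f" >/dev/null 2>&1 || echo "corrupted: $f"
done
```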
 #### SurrealDB Embedded
@@ -330,7 +336,7 @@ rm -rf ./data/orchestrator.db
 # Permission issues
 sudo chown -R $USER:$USER ./data
-```plaintext
+```text
 #### SurrealDB Server
@@ -341,7 +347,7 @@ telnet surreal-server 8000
 # Authentication failures
 # Verify credentials and user permissions
-```plaintext
+```text
 ### Debugging Commands
@@ -357,7 +363,7 @@ telnet surreal-server 8000
 # Monitor migration progress
 ./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded --verbose
-```plaintext
+```text
 ## Recommendations

---

@ -1,53 +0,0 @@
Immediate Testing Opportunities
1. Test the Original Problem:
cd klab/wuji
provisioning -c c server --orchestrated
This should now work through the orchestrator, bypassing the deep call stack issue.
2. Test Workflow Management:
# List workflows
nu -c "use ../core/nulib/workflows/management.nu; workflow list"
# Check orchestrator status
nu -c "use ../core/nulib/workflows/management.nu; workflow orchestrator"
# Submit a test server workflow
nu -c "use ../core/nulib/workflows/server_create.nu; server_create_workflow 'test-infra' 'config.user.toml' ['server1']"
Production Readiness Steps
1. Integration Testing:
- Test server creation workflows with real infrastructure
- Verify taskserv deployment through orchestrator
- Test cluster operations via REST API
2. Performance Validation:
- Confirm parallel processing works (the original foreach issue)
- Monitor orchestrator logs during heavy workloads
- Test with your target scale (10-12 servers, ~12 taskservs each)
3. Production Deployment:
- Move orchestrator to permanent location
- Set up systemd service or similar for auto-start
- Configure proper logging and monitoring
Next Development Phase
1. Enhanced Features:
- Dependency management between workflows
- Workflow rollback capabilities
- Real-time progress streaming
- Workflow templates and presets
The hybrid architecture is now complete and ready to solve your deep call stack limitations while preserving all existing Nushell business logic. Would you like to test the original failing command first?

---

@@ -3,39 +3,27 @@ use std::sync::Arc;
 use anyhow::{Context, Result};
 use axum::{
     extract::{Path, State},
-    http::StatusCode,
     response::Json,
     routing::{get, post},
+    http::StatusCode,
     Router,
 };
 use clap::Parser;
 // Use types from the library
 use provisioning_orchestrator::{
-    audit::{
-        AuditEvent, AuditFilter, AuditQuery, RetentionPolicy, SiemFormat,
-    },
-    batch::{
-        BatchOperationRequest, BatchOperationResult,
-    },
+    audit::{AuditEvent, AuditFilter, AuditQuery, RetentionPolicy, SiemFormat},
+    batch::{BatchOperationRequest, BatchOperationResult},
     compliance_routes,
-    monitor::{
-        MonitoringEvent, MonitoringEventType,
-        SystemHealthStatus,
-    },
-    rollback::{
-        Checkpoint, RollbackResult, RollbackStatistics,
-    },
-    state::{
-        ProgressInfo, StateManagerStatistics, StateSnapshot,
-        SystemMetrics,
-    },
+    monitor::{MonitoringEvent, MonitoringEventType, SystemHealthStatus},
+    rollback::{Checkpoint, RollbackResult, RollbackStatistics},
+    state::{ProgressInfo, StateManagerStatistics, StateSnapshot, SystemMetrics},
+    test_environment::{
+        CreateTestEnvironmentRequest, RunTestRequest, TestEnvironment, TestEnvironmentResponse,
+        TestResult,
     },
     workflow::WorkflowExecutionState,
-    test_environment::{
-        CreateTestEnvironmentRequest, RunTestRequest, TestEnvironmentResponse,
-        TestEnvironment, TestResult,
-    },
-    AppState, Args, ClusterWorkflow, CreateServerWorkflow, SharedState,
-    TaskStatus, TaskservWorkflow, WorkflowTask,
+    AppState, Args, ClusterWorkflow, CreateServerWorkflow, SharedState, TaskStatus,
+    TaskservWorkflow, WorkflowTask,
 };
 use serde::{Deserialize, Serialize};
 use tower_http::cors::CorsLayer;

---

@@ -1,6 +1,7 @@
 # Testing Guide for Multi-Storage Orchestrator
-This document provides comprehensive guidance for testing the multi-storage orchestrator system, including unit tests, integration tests, benchmarks, and performance analysis.
+This document provides comprehensive guidance for testing the multi-storage orchestrator system,
+including unit tests, integration tests, benchmarks, and performance analysis.
 ## Overview
@@ -29,7 +30,7 @@ src/
 ├── storage/ # Unit tests embedded in modules
 ├── migration/tests.rs # Migration unit tests
 └── main.rs # Application integration tests
-```plaintext
+```text
 ## Running Tests
@@ -49,7 +50,7 @@ cargo test --test factory_tests
 # Run unit tests only
 cargo test --lib
-```plaintext
+```text
 ### Using Cargo Aliases
@@ -70,7 +71,7 @@ cargo test-integration
 cargo test-migration
 cargo test-factory
 cargo test-unit
-```plaintext
+```text
 ## Test Features and Backends
@@ -90,7 +91,7 @@ Tests automatically adapt to available features:
 async fn test_surrealdb_specific_feature() {
     // This test only runs when SurrealDB feature is enabled
 }
-```plaintext
+```text
 ## Integration Tests
@@ -108,7 +109,7 @@ test_all_backends!(test_basic_crud_operations, |storage, gen| async move {
     // ... test implementation
     Ok(())
 });
-```plaintext
+```text
 **Key Test Scenarios:**
@@ -134,7 +135,7 @@ cargo test --features surrealdb --test migration_tests
 # Test specific migration scenarios
 cargo test --features surrealdb test_filesystem_to_embedded_migration
 cargo test --features surrealdb test_large_dataset_migration_performance
-```plaintext
+```text
 **Migration Test Coverage:**
@@ -157,7 +158,7 @@ cargo test --test factory_tests
 # Test configuration validation
 cargo test test_storage_config_validation_failures
-```plaintext
+```text
 ## Benchmarks
@@ -177,7 +178,7 @@ cargo bench-surrealdb # Requires --features surrealdb
 cargo bench -- single_enqueue
 cargo bench -- batch_operations
 cargo bench -- concurrent_operations
-```plaintext
+```text
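For before/after comparisons, baselines are handy; this sketch assumes the benchmarks use the Criterion harness (suggested by the named benchmark groups above), which accepts these flags after `--`:

```bash
# Record a baseline, apply changes, then compare against it
cargo bench -- --save-baseline before
# ...make changes...
cargo bench -- --baseline before
```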
 **Benchmark Categories:**
@@ -198,7 +199,7 @@ cargo bench-migration
 # Test migration performance
 cargo bench -- basic_migration
 cargo bench -- migration_batch_sizes
-```plaintext
+```text
 **Migration Benchmarks:**
@@ -220,7 +221,7 @@ use crate::helpers::TestDataGenerator;
 let gen = TestDataGenerator::new();
 let task = gen.workflow_task();
 let batch = gen.workflow_tasks_batch(10);
-```plaintext
+```text
 ### StorageTestRunner
@@ -231,7 +232,7 @@ use crate::helpers::StorageTestRunner;
 let mut runner = StorageTestRunner::new();
 runner.run_against_all_backends(test_function).await;
-```plaintext
+```text
 ### MockStorage
@@ -242,7 +243,7 @@ use crate::helpers::MockStorage;
 let mock = MockStorage::new();
 mock.set_health(false); // Simulate failure
-```plaintext
+```text
 ## Performance Testing
@@ -284,7 +285,7 @@ strategy:
   rust:
     - stable
     - beta
-```plaintext
+```text
 ### Test Commands for CI
@@ -298,7 +299,7 @@ cargo test --doc --all-features
 # Benchmark regression tests
 cargo bench --all-features -- --test
-```plaintext
+```text
 ## Debugging and Troubleshooting
@@ -313,7 +314,7 @@ cargo test -- --nocapture
 # Run single test with full output
 cargo test test_name -- --exact --nocapture
-```plaintext
+```text
 ### Common Issues
@@ -338,7 +339,7 @@ cargo test-coverage
 # View coverage report
 open target/tarpaulin-report.html
-```plaintext
+```text
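The report path suggests the coverage alias wraps cargo-tarpaulin; if the alias is unavailable, a direct invocation along these lines (an assumption, not taken from the repo) produces the same HTML report:

```bash
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out Html --output-dir target
open target/tarpaulin-report.html
```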
 ## Performance Profiling
@@ -352,7 +353,7 @@ cargo bench --bench migration_benchmarks -- --profile-time=10
 # Generate flame graphs
 cargo install flamegraph
 cargo flamegraph --bench storage_benchmarks
-```plaintext
+```text
 ## Best Practices

---

@@ -1,6 +1,7 @@
 # KMS Service - Key Management Service
-A unified Key Management Service for the Provisioning platform with support for multiple backends: **Age** (development), **Cosmian KMS** (privacy-preserving), **RustyVault** (self-hosted), **AWS KMS** (cloud-native), and **HashiCorp Vault** (enterprise).
+A unified Key Management Service for the Provisioning platform with support for multiple backends: **Age** (development),
+**Cosmian KMS** (privacy-preserving), **RustyVault** (self-hosted), **AWS KMS** (cloud-native), and **HashiCorp Vault** (enterprise).
 ## Features
@@ -43,22 +44,26 @@ A unified Key Management Service for the Provisioning platform with support for
 ## Architecture
 ```plaintext
 KMS Service
 ├─ REST API (Axum)
 │  ├─ /api/v1/kms/encrypt        POST
 │  ├─ /api/v1/kms/decrypt        POST
 │  ├─ /api/v1/kms/generate-key   POST (Cosmian only)
 │  ├─ /api/v1/kms/status         GET
 │  └─ /api/v1/kms/health         GET
 ├─ Unified KMS Service Interface
 │  ├─ encrypt(plaintext, context) -> ciphertext
 │  ├─ decrypt(ciphertext, context) -> plaintext
 │  ├─ generate_data_key(spec) -> DataKey
 │  └─ health_check() -> bool
 └─ Backend Implementations
    ├─ Age Client
    │  ├─ X25519 encryption
@@ -68,8 +73,9 @@ A unified Key Management Service for the Provisioning platform with support for
    ├─ REST API integration
    ├─ Zero-knowledge encryption
    └─ Confidential computing
-```plaintext
+```text
 ## Installation
@@ -87,7 +93,7 @@ cd provisioning/platform/kms-service
 cargo build --release
 # Binary will be at: target/release/kms-service
-```plaintext
+```text
 ## Configuration
@@ -110,7 +116,7 @@ server_url = "${COSMIAN_KMS_URL:-https://kms.example.com}"
 api_key = "${COSMIAN_API_KEY}"
 default_key_id = "provisioning-master-key"
 tls_verify = true
-```plaintext
+```text
 ### Environment Variables
@@ -123,7 +129,7 @@ export PROVISIONING_ENV=dev
 export PROVISIONING_ENV=prod
 export COSMIAN_KMS_URL="https://kms.example.com"
 export COSMIAN_API_KEY="your-api-key"
-```plaintext
+```text
 ## Quick Start
@@ -140,7 +146,7 @@ export PROVISIONING_ENV=dev
 # 3. Start KMS service
 cargo run --bin kms-service
-```plaintext
+```text
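Once the service is up, a quick smoke test against the endpoints listed in the architecture section confirms the Age backend is wired correctly (port 8082 is the default used throughout this document):

```bash
curl http://localhost:8082/api/v1/kms/health
curl http://localhost:8082/api/v1/kms/status
```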
 ### Production Setup (Cosmian)
@@ -157,7 +163,7 @@ export COSMIAN_API_KEY=your-api-key-here
 # 4. Start KMS service
 cargo run --bin kms-service
-```plaintext
+```text
 ## Usage
@@ -172,7 +178,7 @@ curl -X POST http://localhost:8082/api/v1/kms/encrypt \
     "plaintext": "SGVsbG8sIFdvcmxkIQ==",
     "context": "env=prod,service=api"
   }'
-```plaintext
+```text
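The `plaintext` field is base64 (the value above decodes to `Hello, World!`); a small sketch of building the request from a raw string, using a hypothetical secret value:

```bash
PLAINTEXT=$(printf 'my secret value' | base64)
curl -X POST http://localhost:8082/api/v1/kms/encrypt \
  -H "Content-Type: application/json" \
  -d "{\"plaintext\": \"$PLAINTEXT\", \"context\": \"env=prod,service=api\"}"
```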
 #### Decrypt Data
@@ -183,7 +189,7 @@ curl -X POST http://localhost:8082/api/v1/kms/decrypt \
     "ciphertext": "...",
     "context": "env=prod,service=api"
   }'
-```plaintext
+```text
 #### Generate Data Key (Cosmian only)
@@ -193,13 +199,13 @@ curl -X POST http://localhost:8082/api/v1/kms/generate-key \
   -d '{
     "key_spec": "AES_256"
   }'
-```plaintext
+```text
 #### Health Check
 ```bash
 curl http://localhost:8082/api/v1/kms/health
-```plaintext
+```text
 ### Nushell CLI Integration
@@ -232,7 +238,7 @@ kms encrypt-file secrets.json --output secrets.enc --context "env=prod"
 kms decrypt-file config.yaml.enc
 kms decrypt-file secrets.enc --output secrets.json --context "env=prod"
-```plaintext
+```text
 ## Backend Comparison
@@ -260,14 +266,14 @@ kms encrypt-file workspace/config/secrets.yaml
 # SOPS can use KMS for key encryption
 # Configure in .sops.yaml to use KMS endpoint
-```plaintext
+```text
 ### 2. Dynamic Secrets (Provider API Keys)
 ```rust
 // Rust orchestrator can call KMS API
 let encrypted_key = kms_client.encrypt(api_key.as_bytes(), &context).await?;
-```plaintext
+```text
 ### 3. SSH Key Management
@@ -275,7 +281,7 @@ let encrypted_key = kms_client.encrypt(api_key.as_bytes(), &context).await?;
 # Generate and encrypt temporal SSH keys
 ssh-keygen -t ed25519 -f temp_key -N ""
 kms encrypt-file temp_key --context "infra=prod,purpose=deployment"
-```plaintext
+```text
 ### 4. Orchestrator (Workflow Data)
@@ -284,7 +290,7 @@ kms encrypt-file temp_key --context "infra=prod,purpose=deployment"
 let encrypted_params = kms_service
     .encrypt(params_json.as_bytes(), &workflow_context)
     .await?;
-```plaintext
+```text
 ### 5. Control Center (Audit Logs)
@@ -298,7 +304,7 @@ let encrypted_params = kms_service
 ```bash
 cargo test
-```plaintext
+```text
 ### Integration Tests
@@ -310,7 +316,7 @@ cargo test age
 export COSMIAN_KMS_URL=http://localhost:9999
 export COSMIAN_API_KEY=test-key
 cargo test cosmian -- --ignored
-```plaintext
+```text
 ## Deployment
@@ -328,7 +334,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 COPY --from=builder /app/target/release/kms-service /usr/local/bin/
 ENTRYPOINT ["kms-service"]
-```plaintext
+```text
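A possible way to build and run the image defined above; the image name is illustrative and the environment variables come from the configuration section:

```bash
docker build -t kms-service .
docker run -p 8082:8082 \
  -e PROVISIONING_ENV=prod \
  -e COSMIAN_KMS_URL=https://kms.example.com \
  -e COSMIAN_API_KEY=your-api-key \
  kms-service
```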
 ### Kubernetes (Production with Cosmian)
@@ -356,7 +362,7 @@ spec:
               key: api-key
         ports:
         - containerPort: 8082
-```plaintext
+```text
 ### systemd Service
@@ -376,7 +382,7 @@ Restart=always
 [Install]
 WantedBy=multi-user.target
-```plaintext
+```text
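Assuming the unit above is installed as /etc/systemd/system/kms-service.service, enabling it follows the usual systemd steps:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now kms-service
journalctl -u kms-service -f
```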
 ## Security Best Practices
@@ -402,7 +408,7 @@ curl http://localhost:8082/api/v1/kms/status
 # Health check
 curl http://localhost:8082/api/v1/kms/health
-```plaintext
+```text
 ### Logs
@@ -412,7 +418,7 @@ export RUST_LOG="kms_service=debug,tower_http=debug"
 # View logs
 journalctl -u kms-service -f
-```plaintext
+```text
 ## Troubleshooting
@@ -429,7 +435,7 @@ cat ~/.config/provisioning/age/public_key.txt
 # Test encryption manually
 echo "test" | age -r $(cat ~/.config/provisioning/age/public_key.txt) > test.enc
 age -d -i ~/.config/provisioning/age/private_key.txt test.enc
-```plaintext
+```text
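If the key files are missing entirely, they can be regenerated with `age-keygen` (a sketch assuming the default key locations used above; note that regenerating keys makes previously encrypted data unrecoverable):

```bash
mkdir -p ~/.config/provisioning/age
age-keygen -o ~/.config/provisioning/age/private_key.txt
# Derive the matching public key from the private key
age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt
```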
 ### Cosmian KMS Issues
@@ -447,7 +453,7 @@ curl -X POST https://kms.example.com/api/v1/encrypt \
   -H "X-API-Key: $COSMIAN_API_KEY" \
   -H "Content-Type: application/json" \
   -d '{"keyId":"master-key","data":"SGVsbG8="}'
-```plaintext
+```text
 ## License