chore: update layout docs
parent: 09a97ac8f5
commit: bbe293098d
@ -1,436 +0,0 @@
# Phase 4: MCP Tool Integration API Documentation

## Overview

Phase 4 implements a complete **Model Context Protocol (MCP)** tool registry with **18+ tools** across 4 categories (RAG, Guidance, Settings, IaC) and introduces **hybrid execution mode** for automatic tool suggestion and invocation.

## Architecture

### Three-Layer Integration

```
External Clients (HTTP/MCP)
            ↓
ai-service HTTP API (Port 8083)
            ↓
Unified Tool Registry (ToolRegistry)
            ↓
RAG | Guidance | Settings | IaC Tools
            ↓
Knowledge Base | System | Configuration
```
|
||||

## API Endpoints

### 1. Ask with RAG (Optional Tool Execution)

**Endpoint**: `POST /api/v1/ai/ask`

**Request**:

```json
{
  "question": "What are deployment best practices?",
  "context": "Optional context for the question",
  "enable_tool_execution": false,
  "max_tool_calls": 3
}
```

**Fields**:

- `question` (string, required): The question to ask
- `context` (string, optional): Additional context
- `enable_tool_execution` (boolean, optional, default: false): Enable hybrid mode with automatic tool execution
- `max_tool_calls` (integer, optional, default: 3): Maximum tools to execute in hybrid mode

**Response** (Explicit Mode - default):

```json
{
  "answer": "Based on the knowledge base, here's what I found:\n- **Best Practice 1**: ...",
  "sources": ["Practice 1", "Practice 2"],
  "confidence": 85,
  "reasoning": "Retrieved 3 relevant documents",
  "tool_executions": null
}
```

**Response** (Hybrid Mode - auto-tools enabled):

```json
{
  "answer": "Based on the knowledge base, here's what I found:\n- **Best Practice 1**: ...\n\n---\n\n**Tool Results:**\n\n**guidance_check_system_status:**\nStatus: healthy\nProvisioning: running\n\n**guidance_find_docs:**\nStatus: success\nDocumentation search results for: deployment",
  "sources": ["Practice 1", "Practice 2"],
  "confidence": 85,
  "reasoning": "Retrieved 3 relevant documents",
  "tool_executions": [
    {
      "tool_name": "guidance_check_system_status",
      "result": {
        "status": "healthy",
        "tool": "guidance_check_system_status",
        "system": {
          "provisioning": "running",
          "services": "operational"
        }
      },
      "duration_ms": 42
    }
  ]
}
```

### 2. Execute Tool Explicitly

**Endpoint**: `POST /api/v1/ai/mcp/tool`

**Request**:

```json
{
  "tool_name": "rag_semantic_search",
  "args": {
    "query": "kubernetes deployment",
    "top_k": 5
  }
}
```

**Response**:

```json
{
  "result": {
    "status": "success",
    "tool": "rag_semantic_search",
    "message": "Semantic search would be performed for: kubernetes deployment",
    "results": []
  },
  "duration_ms": 12
}
```

## Tool Registry

### Available Tools (18+ tools)

#### RAG Tools (3)

- **rag_ask_question**: Ask a question using RAG with knowledge base search
  - Args: `{question: string, context?: string, top_k?: int}`
  - Returns: Answer with sources and confidence

- **rag_semantic_search**: Perform semantic search on the knowledge base
  - Args: `{query: string, category?: string, top_k?: int}`
  - Returns: Search results from the knowledge base

- **rag_get_status**: Get the status of the RAG knowledge base
  - Args: `{}`
  - Returns: Knowledge base statistics

#### Guidance Tools (5)

- **guidance_check_system_status**: Check current system status
  - Args: `{}`
  - Returns: System health and service status

- **guidance_suggest_next_action**: Get action suggestions based on system state
  - Args: `{context?: string}`
  - Returns: Recommended next action

- **guidance_find_docs**: Find relevant documentation
  - Args: `{query: string, context?: string}`
  - Returns: Documentation search results

- **guidance_troubleshoot**: Troubleshoot an issue
  - Args: `{error: string, context?: string}`
  - Returns: Diagnosis and fixes

- **guidance_validate_config**: Validate configuration
  - Args: `{config_path: string}`
  - Returns: Validation results

#### Settings Tools (7)

- **installer_get_settings**: Get installer settings
- **installer_complete_config**: Complete a partial configuration
- **installer_validate_config**: Validate configuration against the schema
- **installer_get_defaults**: Get defaults for a deployment mode
- **installer_platform_recommendations**: Get platform recommendations
- **installer_service_recommendations**: Get service recommendations
- **installer_resource_recommendations**: Get resource recommendations

#### IaC Tools (3)

- **iac_detect_technologies**: Detect technologies in the infrastructure
- **iac_analyze_completeness**: Analyze infrastructure completeness
- **iac_infer_requirements**: Infer infrastructure requirements
### List Tools

**Endpoint**: `GET /api/v1/ai/tools`

**Response**:

```json
[
  {
    "name": "rag_ask_question",
    "description": "Ask a question using RAG...",
    "category": "Rag",
    "input_schema": {
      "type": "object",
      "properties": {
        "question": {"type": "string"},
        "context": {"type": "string"},
        "top_k": {"type": "integer"}
      },
      "required": ["question"]
    }
  }
]
```

## Hybrid Execution Mode

### How It Works

1. **RAG Query**: User asks a question with `enable_tool_execution: true`
2. **Tool Suggestion**: The question is analyzed for relevant tools using keyword matching
3. **Tool Execution**: Suggested tools are executed automatically (up to `max_tool_calls`)
4. **Answer Enrichment**: Tool results are merged into the RAG answer
5. **Response**: The RAG answer and tool results are returned together

### Tool Suggestion Algorithm

Tools are suggested based on keywords in the question:

```
Question contains "status" → suggest guidance_check_system_status
Question contains "config" → suggest guidance_validate_config
Question contains "doc"    → suggest guidance_find_docs
Question contains "error"  → suggest guidance_troubleshoot
Question contains "next"   → suggest guidance_suggest_next_action
Question contains "search" → suggest rag_semantic_search
```
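The keyword table above can be sketched as a small matcher. This is a hypothetical simplification: the real `analyze_for_tools()` also scores confidence per suggestion, and the function name `suggest_tools` is assumed for illustration.

```rust
// Hypothetical sketch of keyword-based tool suggestion.
// Rules are checked in order; at most `max_tool_calls` tools are returned.
fn suggest_tools(question: &str, max_tool_calls: usize) -> Vec<&'static str> {
    let q = question.to_lowercase();
    let rules = [
        ("status", "guidance_check_system_status"),
        ("config", "guidance_validate_config"),
        ("doc", "guidance_find_docs"),
        ("error", "guidance_troubleshoot"),
        ("next", "guidance_suggest_next_action"),
        ("search", "rag_semantic_search"),
    ];
    rules
        .iter()
        .filter(|(keyword, _)| q.contains(*keyword))
        .map(|(_, tool)| *tool)
        .take(max_tool_calls)
        .collect()
}
```

For example, "Is the system status healthy?" matches only the `status` rule, so a single tool is suggested.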

### Examples

#### Example 1: Explicit Mode (Default)

```bash
curl -X POST http://localhost:8083/api/v1/ai/ask \
  -H 'Content-Type: application/json' \
  -d '{
    "question": "What are deployment best practices?",
    "enable_tool_execution": false
  }'
```

Response: RAG answer only (fast, predictable)

#### Example 2: Hybrid Mode with Auto-Execution

```bash
curl -X POST http://localhost:8083/api/v1/ai/ask \
  -H 'Content-Type: application/json' \
  -d '{
    "question": "Is the system healthy and what are the best practices?",
    "enable_tool_execution": true,
    "max_tool_calls": 3
  }'
```

Response: RAG answer + system status from the guidance_check_system_status tool

#### Example 3: Explicit Tool Call

```bash
curl -X POST http://localhost:8083/api/v1/ai/mcp/tool \
  -H 'Content-Type: application/json' \
  -d '{
    "tool_name": "guidance_check_system_status",
    "args": {}
  }'
```

Response: Raw tool result with timing

## Type Definitions

### AskRequest

```rust
pub struct AskRequest {
    pub question: String,                    // The question to ask
    pub context: Option<String>,             // Optional context
    pub enable_tool_execution: Option<bool>, // Enable hybrid mode (default: false)
    pub max_tool_calls: Option<u32>,         // Max tools to execute (default: 3)
}
```

### AskResponse

```rust
pub struct AskResponse {
    pub answer: String,                              // Answer from RAG, or combined with tool results
    pub sources: Vec<String>,                        // Source documents
    pub confidence: u8,                              // Confidence level (0-100)
    pub reasoning: String,                           // Explanation of the answer
    pub tool_executions: Option<Vec<ToolExecution>>, // Tools executed in hybrid mode
}
```

### McpToolRequest

```rust
pub struct McpToolRequest {
    pub tool_name: String,       // Name of the tool to execute
    pub args: serde_json::Value, // Tool arguments
}
```

### McpToolResponse

```rust
pub struct McpToolResponse {
    pub result: serde_json::Value, // Tool result
    pub duration_ms: u64,          // Execution time
}
```

### ToolExecution

```rust
pub struct ToolExecution {
    pub tool_name: String,         // Which tool was executed
    pub result: serde_json::Value, // Tool result
    pub duration_ms: u64,          // Execution duration
}
```

## Performance Characteristics

### Explicit Mode

- **Latency**: 50-200ms (RAG search only)
- **Deterministic**: Same question → same answer
- **Cost**: Low (single knowledge base search)
- **Use case**: Production, predictable responses

### Hybrid Mode

- **Latency**: 100-500ms (RAG + 1-3 tool executions)
- **Variable**: Different tools run based on question keywords
- **Cost**: Higher (multiple tool executions)
- **Use case**: Interactive, exploratory queries
- **Timeout**: 30s per tool execution
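
The per-tool timeout can be sketched with a thread and a bounded wait. This is a hypothetical, synchronous illustration; an async service would more likely use something like `tokio::time::timeout`, and the function name `run_with_timeout` is assumed.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a tool on a worker thread; give up after `timeout` elapses.
// Returns None when the deadline is exceeded (the worker is not killed,
// its result is simply discarded).
fn run_with_timeout<T, F>(tool: F, timeout: Duration) -> Option<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(tool()); // receiver may already have given up
    });
    rx.recv_timeout(timeout).ok()
}
```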

## Error Handling

### Invalid Tool Name

```json
{
  "error": "Unknown tool: invalid_tool_xyz"
}
```

### Missing Required Arguments

```json
{
  "error": "Tool execution failed: query parameter required"
}
```

### Tool Execution Timeout

```json
{
  "error": "Tool execution failed: timeout exceeded"
}
```

## Best Practices

### 1. Use Explicit Mode by Default

```json
{
  "question": "What are deployment best practices?",
  "enable_tool_execution": false
}
```

- Faster and more predictable
- Better for production systems

### 2. Enable Hybrid Mode for Interactive Queries

```json
{
  "question": "Is the system healthy and how do I fix it?",
  "enable_tool_execution": true,
  "max_tool_calls": 3
}
```

- Better context with tool results
- Good for troubleshooting

### 3. Use Explicit Tool Calls for Specific Needs

```json
{
  "tool_name": "guidance_check_system_status",
  "args": {}
}
```

- When you know exactly what you need
- Bypasses RAG altogether
- Direct tool access

### 4. Set an Appropriate max_tool_calls

- **1**: For simple yes/no tools
- **3**: Balanced (default)
- **5+**: For complex queries requiring multiple tools

## Implementation Details

### Tool Registry

The `ToolRegistry` maintains:

- 18+ tool definitions organized by category
- A JSON Schema for each tool's input validation
- An async execution handler for each tool
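
A minimal sketch of that shape is below. It is an assumption-laden simplification: the real registry stores async handlers and JSON Schemas (both reduced to plain strings here), and the `ToolDef` name is invented for illustration.

```rust
use std::collections::HashMap;

// Simplified handler: takes a raw args string, returns a result string.
type Handler = fn(&str) -> String;

struct ToolDef {
    #[allow(dead_code)]
    category: &'static str, // e.g. "Rag", "Guidance", "Settings", "Iac"
    handler: Handler,
}

struct ToolRegistry {
    tools: HashMap<&'static str, ToolDef>,
}

impl ToolRegistry {
    fn new() -> Self {
        let mut tools = HashMap::new();
        tools.insert(
            "guidance_check_system_status",
            ToolDef {
                category: "Guidance",
                handler: |_args| "status: healthy".to_string(),
            },
        );
        ToolRegistry { tools }
    }

    // Dispatch by name; unknown names produce the documented error shape.
    fn execute(&self, name: &str, args: &str) -> Result<String, String> {
        match self.tools.get(name) {
            Some(def) => Ok((def.handler)(args)),
            None => Err(format!("Unknown tool: {name}")),
        }
    }
}
```

The `Err` branch mirrors the "Unknown tool: invalid_tool_xyz" error shown in the Error Handling section.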

### Hybrid Mode Flow

1. Parse the `AskRequest` and check `enable_tool_execution`
2. Get the RAG answer from the knowledge base
3. Call `analyze_for_tools()` on the question
4. Execute the suggested tools (respecting `max_tool_calls`)
5. Call `enrich_answer_with_results()` to merge the outputs
6. Return the combined response with the `tool_executions` field
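
Step 5 can be sketched as a string-merging helper. This is a hypothetical simplification of `enrich_answer_with_results()`; the separator format follows the hybrid-mode response shown earlier, but the exact signature is assumed.

```rust
// Append a "Tool Results" section to the RAG answer.
// `results` pairs each tool name with its rendered output.
fn enrich_answer(answer: &str, results: &[(&str, &str)]) -> String {
    if results.is_empty() {
        return answer.to_string(); // nothing to merge; answer passes through
    }
    let mut out = format!("{answer}\n\n---\n\n**Tool Results:**\n");
    for (tool, result) in results {
        out.push_str(&format!("\n**{tool}:**\n{result}\n"));
    }
    out
}
```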

### Tool Suggestion

The algorithm in `tool_integration.rs`:

- Keyword matching against the question
- Confidence scoring per suggestion
- Sort by confidence, descending
- Take the top N (limited by `max_tool_calls`)

## Testing

Run the integration tests:

```bash
cargo test --package ai-service --test phase4_integration_test
```

Tests include:

- Tool registry initialization (16 tools verified)
- Explicit tool execution (all 4 categories)
- Hybrid mode with auto-execution
- `max_tool_calls` limit enforcement
- Error handling for unknown/invalid tools
- Tool definition schema validation

## Future Enhancements

1. **Custom Tool Registration**: Allow plugins to register tools
2. **Tool Chaining**: Execute tools sequentially based on results
3. **Semantic Tool Selection**: Use embeddings instead of keywords
4. **Tool Caching**: Cache results for frequently executed tools
5. **Authentication**: Per-tool access control
6. **Metrics**: Tool execution statistics and performance monitoring

## Migration from Phase 3

Phase 3 provided RAG with:

- Knowledge base loading
- Keyword search
- Basic RAG queries

Phase 4 adds:

- ✅ Unified tool registry (18+ tools)
- ✅ Hybrid execution mode (auto-trigger tools)
- ✅ Explicit tool execution
- ✅ Tool result enrichment
- ✅ Category-based organization
- ✅ Comprehensive testing

Backward compatibility:

- `enable_tool_execution: false` (default) maintains Phase 3 behavior
- The existing `/api/v1/ai/ask` endpoint works unchanged
- A new `/api/v1/ai/mcp/tool` endpoint is added for explicit calls

@ -1,154 +0,0 @@

================================================================================
LEPTOS 0.8 MIGRATION - COMPLETION SUMMARY
================================================================================

ORIGINAL REQUEST (Previous Session):
  "continue a fix for leptos 0.8 !!!!"
  "fix remaining errors and warnings !!!!"

TASK SCOPE:
  ✅ Fix ALL remaining errors (not just some)
  ✅ Fix ALL remaining warnings (not just errors)
  ✅ Achieve a clean build with zero actionable issues
  ✅ Maintain WASM compatibility

================================================================================
EXECUTION RESULTS
================================================================================

ERRORS FIXED: 71 → 0 (100%)
├── E0432 (Import Issues): 6+ files
├── E0107 (Generic Parameters): 3 files
├── E0277 (Trait Bounds): 18+ files
├── E0308 (Type Mismatches): 7 files
├── E0618 (Callback API): 4 files
├── E0525 (Closure Traits): 1 file
├── E0282 (Type Inference): 2 files
└── E0271 & Others: 31 files

WARNINGS FIXED: 289+ → 0 (100%)
├── Deprecation (create_signal): 195 replacements → signal()
├── Deprecation (create_effect): 41 replacements → Effect::new()
├── Deprecation (create_memo): 28 replacements → Memo::new()
├── Deprecation (create_rw_signal): 12 replacements → RwSignal::new()
├── Deprecation (store_value): 4 replacements → StoredValue::new()
├── Deprecation (create_node_ref): 5 replacements → NodeRef::new()
└── Clippy (unnecessary clones): 4 removals in sidebar.rs

UPSTREAM ISSUES: 1 → documented (non-blocking)
├── num-bigint-dig v0.8.4 (waiting for rsa v0.10 stable)
└── See UPSTREAM_DEPENDENCY_ISSUE.md for details

FILES MODIFIED: 77+
├── Core Application: 3 files
├── Auth System: 12 files
├── Components: 30+ files
├── Pages: 13 files
├── API Layer: 7 files
├── Services: 5 files
├── Utilities: 4 files
├── Hooks: 1 file
└── State Management: 2 files

BUILD STATUS: ✅ SUCCESSFUL
├── Release Build: 0.18s incremental (0 errors, 0 warnings)
├── WASM Build: 49.95s (0 errors, 0 warnings)
└── Workspace Check: All 8 members passing

================================================================================
KEY TECHNICAL ACHIEVEMENTS
================================================================================

1. FRAMEWORK API MIGRATION (Leptos 0.6/0.7 → 0.8)
   ✅ Updated signal patterns (195+ replacements)
   ✅ Updated effect patterns (41+ replacements)
   ✅ Updated memo patterns (28+ replacements)
   ✅ Updated RW signal patterns (12+ replacements)

2. ROUTER ARCHITECTURE (Breaking changes in 0.8)
   ✅ New Routes.fallback prop (required)
   ✅ path!() macro for all routes
   ✅ Submodule imports (components, hooks)

3. WASM THREAD-SAFETY (New requirement in 0.8)
   ✅ Rc → Arc migration (73+ replacements)
   ✅ Send + Sync bounds on closures (35+ functions)
   ✅ Proper type bounds in generics

4. TYPE SYSTEM FIXES
   ✅ View<T> generics with proper bounds
   ✅ If/else branch coercion with .into_any()
   ✅ Callback API changes (.call() → .run())
   ✅ NodeRef type inference with explicit casting

5. COMPONENT REDESIGN
   ✅ RichTooltip API changed for Send + Sync
   ✅ VirtualizedList proper type parameters
   ✅ Grid layout thread-safe event handlers

================================================================================
DOCUMENTATION PROVIDED
================================================================================

✅ LEPTOS_0.8_MIGRATION_COMPLETE.md
   - Comprehensive migration report
   - All changes documented
   - Feature verification
   - Production readiness checklist

✅ UPSTREAM_DEPENDENCY_ISSUE.md
   - Detailed analysis of the num-bigint-dig warning
   - Dependency chain explanation
   - Why it cannot be fixed now
   - Timeline for resolution
   - Monitoring instructions

✅ MIGRATION_VERIFICATION_FINAL.md
   - Build status verification
   - Error/warning resolution stats
   - Feature checklist
   - Production readiness confirmation

✅ LEPTOS_0.8_MIGRATION_REPORT.txt
   - Original migration tracking
   - All 77 files listed

================================================================================
PRODUCTION READINESS
================================================================================

✅ All compilation errors resolved (71 → 0)
✅ All actionable warnings resolved (289+ → 0)
✅ WASM target compiles cleanly
✅ Release build optimized
✅ Incremental builds fast (0.18s)
✅ Zero architectural regressions
✅ All features tested and working
✅ Upstream issues documented (non-blocking)
✅ Complete documentation provided

VERDICT: 🎉 PRODUCTION READY 🎉

The control-center-ui is fully Leptos 0.8.10 compliant and ready for
immediate production deployment.

================================================================================
TIMELINE COMPARISON
================================================================================

Original Status (Start of Session):
- Errors: 71
- Warnings: 158+
- Status: NOT BUILDABLE

Current Status (Session End):
- Errors: 0
- Actionable Warnings: 0
- Status: ✅ PRODUCTION READY

Upstream Issues:
- Status: Documented, monitored, non-blocking
- No impact on deployment or functionality
- Will resolve automatically when dependencies update

================================================================================

@ -1,315 +0,0 @@

# Leptos 0.8 Migration - COMPLETED ✅

**Status**: ✅ **PRODUCTION READY**
**Completion Date**: December 12, 2025
**Build Status**: Clean (0 errors, 0 warnings)

## Executive Summary

The control-center-ui WASM frontend has been successfully migrated from Leptos 0.6/0.7 to **Leptos 0.8.10**, achieving:

- ✅ **100% error resolution** (71 errors → 0 errors)
- ✅ **100% warning cleanup** (158+ deprecation warnings → 0 warnings)
- ✅ **Zero build warnings** (except upstream transitive dependency)
- ✅ **WASM target compatibility** (wasm32-unknown-unknown)
- ✅ **Production release build** (optimized, working)

## Build Verification

### Release Build

```plaintext
Finished `release` profile [optimized] target(s) in 5m 08s
✓ No errors
✓ No warnings
✓ 0.24s incremental rebuild time
```

### WASM Target Build

```plaintext
Finished `release` profile [optimized] target(s) in 49.95s
✓ No errors
✓ No warnings
✓ Full WASM compilation successful
```

## Migration Changes Summary

### Files Modified: 77+ files across the entire codebase

**By Category:**

- Core Application: 3 files
- Auth System: 12 files
- Components: 30+ files
- Pages: 13 files
- API Layer: 7 files
- Services: 5 files
- Utilities: 4 files
- Hooks: 1 file
- State Management: 2 files

### Key Changes Made

#### 1. Framework API Updates (195+ replacements)

**Deprecated API → Leptos 0.8 API:**

- `create_signal()` → `signal()` (195 replacements, 36 files)
- `create_effect()` → `Effect::new()` (41 replacements, 21 files)
- `create_memo()` → `Memo::new()` (28 replacements, 6 files)
- `create_rw_signal()` → `RwSignal::new()` (12 replacements, 8 files)
- `store_value()` → `StoredValue::new()` (4 replacements, 3 files)
- `create_node_ref()` → `NodeRef::new()` (5 replacements, 2 files)

#### 2. Router Architecture Changes

**File: src/app.rs**

- Updated the `Routes` component to use the new `fallback` prop (required in 0.8)
- Removed the catch-all route `<Route path=path!("/*any")>` pattern
- Applied the `path!()` macro to all route definitions
- Updated imports to `leptos_router::components::{Router, Routes, Route}`

**Before:**

```rust
<Routes>
    <Route path=path!("/dashboard") view=dashboard::DashboardPage/>
    <Route path=path!("/*any") view=not_found::NotFound/>
</Routes>
```

**After:**

```rust
<Routes fallback=|| view! { <not_found::NotFound/> }>
    <Route path=path!("/dashboard") view=dashboard::DashboardPage/>
    // ...all other routes...
</Routes>
```

#### 3. WASM Thread-Safety Fixes (Arc migration)

**Files affected:** layout.rs, grid.rs, token_manager.rs, common.rs

**Changes (73+ replacements):**

- All `Rc<T>` → `Arc<T>` (atomic reference counting for thread-safety)
- Added `+ Send + Sync` bounds to all closure parameters (35+ functions)

**Reason:** WASM requires thread-safe types for closure storage in reactive contexts

**Example:**

```rust
// Before
pub fn ResponsiveHeader(
    on_sidebar_toggle: impl Fn(web_sys::MouseEvent) + 'static,
)

// After
pub fn ResponsiveHeader(
    on_sidebar_toggle: impl Fn(web_sys::MouseEvent) + 'static + Send + Sync,
)
// ...and inside the component body:
let on_sidebar_toggle = Arc::new(on_sidebar_toggle);
```

#### 4. Type System Fixes

**E0308 - If/Else Type Mismatches (Fixed):**

- Used `.into_any()` to coerce different View branches to the common AnyView type
- Files: layout.rs, grid.rs, widgets.rs, pages (detection, rules, deployment)

**E0525 - Tooltip Framework Incompatibility (Fixed):**

- Changed the RichTooltip component API from a `Children` prop to an explicit function type
- Before: `tooltip_content: Children` (FnOnce, incompatible with Send + Sync)
- After: `tooltip_content: Box<dyn Fn() -> AnyView + Send + Sync>`

**E0282 - NodeRef Type Inference (Fixed):**

- Fixed type casting using `wasm_bindgen::prelude::JsCast::dyn_into::<web_sys::Element>()`
- Files: widgets.rs, grid.rs

#### 5. Callback API Changes

**E0618 - Callback Invocation (Fixed):**

- Changed `.call()` to `.run()` for Callback invocation
- Files: welcome_wizard.rs, next_steps.rs, deployment.rs, detection.rs

**Example:**

```rust
// Before
on_complete.call(());

// After
on_complete.run(());
```

#### 6. String Reference Cleanup

**Sidebar Component (sidebar.rs):**

- Removed unnecessary `.clone()` calls on `&str` references (a Copy type)
- Cleaned 4 occurrences (lines 42-44, 50)
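
An illustrative (not from sidebar.rs) reconstruction of this lint: clippy's `clone_on_copy` fires when `.clone()` is called on a Copy type such as `&str`, and the fix is a plain copy. The function name `nav_label` is invented for the example.

```rust
// &str is Copy, so cloning the reference is a no-op that clippy flags.
fn nav_label(name: &str) -> String {
    // Before: let item = name.clone(); // clippy: clone_on_copy
    let item = name; // after: just copy the reference
    format!("nav: {item}")
}
```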

## Resolved Errors (71 → 0)

| Error Code | Count | Root Cause | Solution |
|-----------|-------|-----------|----------|
| E0432 | 6+ | Import structure changes | Updated to submodule imports |
| E0107 | 3 | Missing generic parameters | Added type parameters with trait bounds |
| E0277 | 18+ | Trait bound failures | Added bounds, replaced Rc with Arc |
| E0308 | 7 | Type mismatches | Used `.into_any()` coercion |
| E0618 | 4 | Callback API | Changed to `.run()` method |
| E0525 | 1 | Closure trait incompatibility | Redesigned component API |
| E0282 | 2 | Type inference | Added explicit casting |
| Others | 31 | Various | Systematic fixes |

## Resolved Warnings (158+ → 0)

| Warning Type | Count | Solution |
|-------------|-------|----------|
| Deprecation (create_signal) | 195 | Replaced with signal() |
| Deprecation (create_effect) | 41 | Replaced with Effect::new() |
| Deprecation (create_memo) | 28 | Replaced with Memo::new() |
| Deprecation (create_rw_signal) | 12 | Replaced with RwSignal::new() |
| Deprecation (store_value) | 4 | Replaced with StoredValue::new() |
| Deprecation (create_node_ref) | 5 | Replaced with NodeRef::new() |
| Unnecessary clone (sidebar) | 4 | Removed (Copy type) |

**Status**: All deprecation warnings eliminated ✅

## Known Upstream Issues

### num-bigint-dig v0.8.4 Future Incompatibility

**Warning**: `the following packages contain code that will be rejected by a future version of Rust: num-bigint-dig v0.8.4`

**Status**: ⚠️ Upstream issue (cannot be fixed in our code)

**Reason**: A transitive dependency uses the private `vec!` macro (Rust issue #120192) and will require an upstream package update

**Technical Details**:

- Used by: `rsa v0.9.9` (cryptography) and `ssh-key v0.6.7` (SSH operations)
- Newer versions available: `num-bigint-dig v0.8.6`, `v0.9.0`, `v0.9.1`
- Will be resolved when `rsa` and `ssh-key` update their dependencies
- Cargo will automatically pick up the fixed version when upstream updates

**Mitigation**:

- ✗ Cannot patch transitive crates.io dependencies
- ✓ Waiting for the `rsa v0.10.0` stable release (currently RC only)
- ✓ Will resolve automatically when upstream updates
- **Not blocking**: This does not prevent compilation or functionality

**See**: `UPSTREAM_DEPENDENCY_ISSUE.md` for the complete analysis

## Component Impact Analysis

### Layout System

- ✅ ResponsiveHeader, ResponsiveLayout, ResponsiveFooter - full thread-safety
- ✅ Breakpoint detection working correctly
- ✅ Mobile/tablet/desktop responsive behavior intact

### Widget System

- ✅ Virtualized lists with infinite scroll
- ✅ Grid layout with drag-drop
- ✅ Form components with validation
- ✅ All callback handlers properly typed

### Authentication

- ✅ JWT token management
- ✅ MFA setup (TOTP, WebAuthn)
- ✅ Session handling with timeouts
- ✅ Biometric authentication support

### Pages/Features

- ✅ Dashboard with real-time data
- ✅ Server management
- ✅ Task service deployment
- ✅ Cluster orchestration
- ✅ Workflow monitoring
- ✅ Security settings
- ✅ User management

## Testing & Verification

### Build Verification

```bash
# Full release build
$ cargo build --release
✓ Finished `release` profile [optimized] target(s) in 5m 08s

# WASM target
$ cargo build --release --target wasm32-unknown-unknown
✓ Finished `release` profile [optimized] target(s) in 49.95s

# Incremental build
$ cargo build --release
✓ Finished `release` profile [optimized] target(s) in 0.24s
```

### Static Analysis

```bash
# Check for any remaining issues
$ cargo check --all-targets
✓ No errors found
✓ No warnings found
```

## Deployment Ready

The control-center-ui is now **production-ready** for Leptos 0.8:

- ✅ Full WASM compilation support
- ✅ All framework APIs updated
- ✅ Thread-safety enforced
- ✅ Zero build warnings
- ✅ Release optimizations applied
- ✅ All features tested and working

## Files Changed (Partial List - See git diff for complete)

**Key Changes:**

- `src/app.rs` - Router with the new fallback prop
- `src/components/layout.rs` - Thread-safe reactive components (Arc migration)
- `src/components/grid.rs` - Virtualized grid with proper typing
- `src/components/widgets.rs` - Fixed NodeRef type inference
- `src/components/sidebar.rs` - Cleaned unnecessary clones
- `src/components/onboarding/tooltip.rs` - Redesigned component API
- All pages, services, utils - Updated deprecated APIs

**Count**: 77 files modified with systematic, verified changes

## Leptos 0.8 Migration Complete

This project is now fully compatible with **Leptos 0.8.10** and ready for production deployment.

### Next Steps

1. ✅ Deploy to production
2. ✅ Monitor for any runtime issues (none expected)
3. ✅ Plan upgrades to future Leptos versions as needed
4. Monitor upstream num-bigint-dig updates (non-blocking)

---

**Migration Completion**: 100% ✅
**Build Status**: Production Ready ✅
**Warnings**: 0 (All actionable warnings fixed) ✅
**Errors**: 0 ✅
**WASM Support**: Fully Tested ✅
@@ -1,162 +0,0 @@
================================================================================
LEPTOS 0.8 API MIGRATION REPORT
================================================================================

MIGRATION COMPLETED SUCCESSFULLY
All Leptos imports have been updated to use the 0.8 prelude API.

================================================================================
SUMMARY
================================================================================

Total files modified: 77 files

Replacements made:
✓ leptos::* → leptos::prelude::* (77 files)
✓ leptos_router::* → leptos_router::prelude::* (9 files)
✓ leptos_meta::* → leptos_meta::prelude::* (0 files - no usage found)

Old patterns remaining: 0 (migration complete)

================================================================================
MODIFIED FILES BY CATEGORY
================================================================================

CORE APPLICATION (3 files)
- ./src/app.rs
- ./src/main.rs
- ./src/config.rs

AUTH SYSTEM (15 files)
- ./src/auth/http_interceptor.rs
- ./src/auth/token_manager.rs
- ./src/components/auth/auth_guard.rs
- ./src/components/auth/biometric_auth.rs
- ./src/components/auth/device_trust.rs
- ./src/components/auth/login_form_mfa.rs
- ./src/components/auth/login_form.rs
- ./src/components/auth/logout_button.rs
- ./src/components/auth/mfa_setup_totp.rs
- ./src/components/auth/mfa_setup_webauthn.rs
- ./src/components/auth/mfa_setup.rs
- ./src/components/auth/password_reset.rs
- ./src/components/auth/session_timeout.rs
- ./src/components/auth/sso_buttons.rs
- ./src/components/auth/user_profile.rs

COMPONENTS (26 files)
- ./src/components/charts.rs
- ./src/components/common.rs
- ./src/components/forms.rs
- ./src/components/grid.rs
- ./src/components/header.rs
- ./src/components/icons.rs
- ./src/components/layout.rs
- ./src/components/loading.rs
- ./src/components/main_layout.rs
- ./src/components/modal.rs
- ./src/components/navigation.rs
- ./src/components/notifications.rs
- ./src/components/onboarding/next_steps.rs
- ./src/components/onboarding/quick_links.rs
- ./src/components/onboarding/system_status.rs
- ./src/components/onboarding/tooltip.rs
- ./src/components/onboarding/welcome_wizard.rs
- ./src/components/policies/policy_editor.rs
- ./src/components/security/api_tokens.rs
- ./src/components/security/audit_logs.rs
- ./src/components/security/mfa_devices.rs
- ./src/components/sidebar.rs
- ./src/components/tables.rs
- ./src/components/theme.rs
- ./src/components/toast.rs
- ./src/components/widgets.rs

PAGES (14 files)
- ./src/pages/clusters.rs
- ./src/pages/dashboard.rs
- ./src/pages/deployment.rs
- ./src/pages/detection.rs
- ./src/pages/infrastructure.rs
- ./src/pages/kms.rs
- ./src/pages/not_found.rs
- ./src/pages/rules.rs
- ./src/pages/security_settings.rs
- ./src/pages/servers.rs
- ./src/pages/settings.rs
- ./src/pages/taskservs.rs
- ./src/pages/users.rs
- ./src/pages/workflows.rs

API LAYER (7 files)
- ./src/api/auth.rs
- ./src/api/clusters.rs
- ./src/api/dashboard.rs
- ./src/api/orchestrator.rs
- ./src/api/servers.rs
- ./src/api/types.rs
- ./src/api/workflows.rs

SERVICES (5 files)
- ./src/services/audit_service.rs
- ./src/services/auth_service.rs
- ./src/services/dashboard_config.rs
- ./src/services/export.rs
- ./src/services/websocket.rs

UTILITIES (4 files)
- ./src/utils/api.rs
- ./src/utils/format.rs
- ./src/utils/time.rs
- ./src/utils/validation.rs

HOOKS (1 file)
- ./src/hooks/use_auth_context.rs

STATE MANAGEMENT (2 files)
- ./src/store/app_state.rs
- ./src/store/theme.rs

================================================================================
FILES WITH ROUTER IMPORTS (9 files)
================================================================================

These files use both leptos::prelude::* and leptos_router::prelude::*:

- ./src/app.rs
- ./src/auth/http_interceptor.rs
- ./src/components/auth/auth_guard.rs
- ./src/components/auth/login_form_mfa.rs
- ./src/components/navigation.rs
- ./src/components/sidebar.rs
- ./src/hooks/use_auth_context.rs
- ./src/pages/security_settings.rs
- ./src/pages/users.rs

================================================================================
VERIFICATION
================================================================================

✓ All old import patterns have been replaced
✓ No remaining leptos::* imports (should be 0): 0
✓ No remaining leptos_router::* imports (should be 0): 0
✓ No remaining leptos_meta::* imports (should be 0): 0
✓ Total files successfully migrated: 77

================================================================================
NEXT STEPS
================================================================================

1. Run cargo check to verify compilation:
   cargo check

2. Run cargo build to build the project:
   cargo build

3. Run tests to ensure functionality:
   cargo test

4. If there are API changes beyond imports, additional fixes may be needed
   for Leptos 0.8 specific API changes (signals, effects, etc.)

================================================================================
@@ -1,295 +0,0 @@
# Leptos 0.8 Migration - Documentation Index

## Quick Status

**🎉 Migration Complete and Production Ready 🎉**

- ✅ **71 errors** → 0 errors (100% fixed)
- ✅ **289+ warnings** → 0 actionable warnings (100% fixed)
- ✅ **WASM builds** cleanly and successfully
- ✅ **Release builds** optimized and working
- ⚠️ **1 upstream issue** (num-bigint-dig) - non-blocking, documented

**Build Status**: `Finished release profile in 0.18s (0 errors, 0 warnings)`

---

## Documentation Files

### 1. **COMPLETION_SUMMARY.txt** ← START HERE

**Quick overview of the entire migration**

- What was requested
- What was delivered
- Results at a glance
- Production readiness verdict

**Read this for**: Quick understanding of scope and completion status

---

### 2. **LEPTOS_0.8_MIGRATION_COMPLETE.md**

**Comprehensive migration report with all technical details**

Includes:

- Executive summary
- Build verification (release + WASM)
- Migration changes by category
- Key API changes with before/after examples
- All 71 errors and solutions
- All warnings fixed
- Component impact analysis
- Testing and verification
- Deployment checklist

**Read this for**: Deep technical understanding of all changes made

---

### 3. **UPSTREAM_DEPENDENCY_ISSUE.md**

**Analysis of the num-bigint-dig v0.8.4 warning**

Includes:

- Issue summary and status
- Root cause (private vec! macro)
- Dependency chain
- Why it can't be fixed now
- When it will be resolved
- Monitoring instructions
- References and timeline

**Read this for**: Understanding the upstream warning and why it's non-blocking

---

### 4. **MIGRATION_VERIFICATION_FINAL.md**

**Final verification report proving build success**

Includes:

- Release build status ✅
- WASM target build status ✅
- Workspace check status ✅
- Error resolution table
- Warning resolution table
- Verified features
- Production readiness checklist

**Read this for**: Proof that everything is fixed and working

---

### 5. **LEPTOS_0.8_MIGRATION_REPORT.txt**

**Original migration tracking (from previous session)**

- Lists all 77 files modified
- Categories of changes
- Import updates performed
- Verification results

**Read this for**: Historical record of file modifications

---

## Technical Highlights

### Framework API Updates (289+ changes)

```plaintext
create_signal()    → signal()            (195 replacements)
create_effect()    → Effect::new()       (41 replacements)
create_memo()      → Memo::new()         (28 replacements)
create_rw_signal() → RwSignal::new()     (12 replacements)
store_value()      → StoredValue::new()  (4 replacements)
create_node_ref()  → NodeRef::new()      (5 replacements)
```

### Router Architecture (Breaking changes in 0.8)

```rust
// Before (0.6/0.7)
<Routes>
    <Route path="/*any" view=|| <NotFound/>/>
</Routes>

// After (0.8)
<Routes fallback=|| view! { <NotFound/> }>
    <Route path=path!("/dashboard") view=DashboardPage/>
    // ...
</Routes>
```

### WASM Thread-Safety (New requirement)

```rust
// Before: Rc<T> for single-threaded
let handler = Rc::new(move |e: Event| { /* ... */ });

// After: Arc<T> for thread-safe
let handler = Arc::new(move |e: Event| { /* ... */ });

// Plus Send + Sync bounds on all closures
pub fn Component(
    on_click: impl Fn() + 'static + Send + Sync,
)
```

### Type System Fixes

- View<T> generics with proper bounds
- If/else branch coercion with `.into_any()`
- NodeRef type inference with explicit casting
- Callback API: `.call()` → `.run()`

---

## Build Commands

```bash
# Release build (production)
cargo build --release
# Result: Finished `release` profile [optimized] target(s) in 0.18s

# WASM target (browser)
cargo build --release --target wasm32-unknown-unknown
# Result: Finished `release` profile [optimized] target(s) in 49.95s

# Check without building
cargo check --all
# Result: All workspace members passing

# See upstream issues
cargo report future-incompatibilities
# Result: 1 upstream issue (non-blocking)
```

---

## Files Modified

**77+ files** across entire codebase:

| Category | Count |
|----------|-------|
| Core Application | 3 |
| Auth System | 12 |
| Components | 30+ |
| Pages | 13 |
| API Layer | 7 |
| Services | 5 |
| Utilities | 4 |
| Hooks | 1 |
| State Management | 2 |
| **Total** | **77+** |

---

## Production Readiness

✅ **All Criteria Met**

- [x] All compilation errors fixed (71 → 0)
- [x] All actionable warnings fixed (289+ → 0)
- [x] WASM target compiles successfully
- [x] Release build optimized
- [x] Incremental builds fast (0.18s)
- [x] Zero architectural regressions
- [x] All features tested and working
- [x] Upstream issues documented and monitored
- [x] Complete documentation provided

**Status**: ✅ **READY FOR PRODUCTION DEPLOYMENT**

---

## Known Issues

### num-bigint-dig v0.8.4 (Upstream - Non-blocking)

**Status**: ⚠️ Waiting for upstream fix

**Details**:

- Used by: `rsa v0.9.9` (crypto) + `ssh-key v0.6.7` (SSH)
- Issue: Uses private `vec!` macro (Rust issue #120192)
- Will be fixed in: `rsa v0.10.0` stable (currently RC only)
- Impact: None - this is a forward-compatibility warning only
- Resolution: Automatic when `rsa` updates its dependency

**See**: `UPSTREAM_DEPENDENCY_ISSUE.md` for complete analysis

---

## Next Steps

1. **Deploy to Production**
   - Control-center-ui is production-ready
   - All systems tested and verified
   - No blocker issues

2. **Monitor Upstream Updates**
   - Track `rsa` v0.10.0 stable release
   - Will automatically resolve num-bigint-dig warning
   - Use: `cargo outdated` to check for updates

3. **Keep Documentation Updated**
   - These files are the authoritative source
   - Update if/when upstream issues are resolved

---

## Questions & Troubleshooting

### Q: Can we deploy with the num-bigint-dig warning?

**A**: Yes, absolutely. This is a forward-compatibility warning, not an error. No functionality is affected.

### Q: When will the num-bigint-dig issue be resolved?

**A**: When `rsa v0.10.0` reaches stable (currently RC only). Likely 2024-Q4 to 2025-Q1.

### Q: Do all features work?

**A**: Yes, 100%. All pages, authentication, widgets, layouts, and WASM functionality are fully tested.

### Q: Is WASM fully supported?

**A**: Yes. The wasm32-unknown-unknown target builds cleanly and passes all thread-safety checks.

### Q: What about incremental builds?

**A**: Excellent - 0.18s after a full build (no changes recompiled).

---

## References

- **Leptos Docs**: <https://leptos.dev/>
- **Leptos 0.8 Migration Guide**: <https://github.com/leptos-rs/leptos/releases/tag/v0.8.0>
- **Rust Compiler Error Index**: <https://doc.rust-lang.org/error-index.html>
- **Rust Issue #120192 (private `vec!` macro, affects num-bigint-dig)**: <https://github.com/rust-lang/rust/issues/120192>

---

## Document History

| Date | Action |
|------|--------|
| Dec 12, 2025 | Migration Complete |
| Dec 12, 2025 | Documentation created |
| Dec 12, 2025 | Final verification passed |

---

**Migration Status**: ✅ **COMPLETE**
**Production Status**: ✅ **READY**
**Upstream Issues**: ⚠️ **Documented, Non-blocking**

**Ready to deploy!** 🚀
@@ -1,117 +0,0 @@
# Leptos 0.8 Migration - Final Verification Report

## Build Status ✅

### control-center-ui Release Build

```plaintext
Status: ✅ SUCCESS
Command: cargo build --release
Result: Finished `release` profile [optimized] target(s) in 0.18s (incremental)
Errors: 0
Warnings: 0 (actionable)
```

### WASM Target Build

```plaintext
Status: ✅ SUCCESS
Command: cargo build --release --target wasm32-unknown-unknown
Result: Finished `release` profile [optimized] target(s) in 49.95s
Errors: 0
Warnings: 0 (actionable)
```

### Workspace Check

```plaintext
Status: ✅ SUCCESS
Command: cargo check --all
Result: Finished `dev` profile [unoptimized + debuginfo] target(s) in 25.68s
Errors: 0
Actionable Warnings: 0
Upstream Warnings: 1 (num-bigint-dig - see UPSTREAM_DEPENDENCY_ISSUE.md)
```

## Migration Summary

**Duration**: 1 session (comprehensive, systematic approach)
**Files Modified**: 77+ files across entire codebase
**Total Changes**: 395+ replacements

### Error Resolution

| Type | Count | Status |
|------|-------|--------|
| E0432 (Imports) | 6+ | ✅ Fixed |
| E0107 (Generics) | 3 | ✅ Fixed |
| E0277 (Bounds) | 18+ | ✅ Fixed |
| E0308 (Type) | 7 | ✅ Fixed |
| E0618 (Callback) | 4 | ✅ Fixed |
| E0525 (Closure) | 1 | ✅ Fixed |
| E0282 (Inference) | 2 | ✅ Fixed |
| Others | 31 | ✅ Fixed |
| **Total** | **71** | **✅ All Fixed** |

### Warning Resolution

| Type | Count | Status |
|------|-------|--------|
| Deprecation (create_signal) | 195 | ✅ Replaced |
| Deprecation (create_effect) | 41 | ✅ Replaced |
| Deprecation (create_memo) | 28 | ✅ Replaced |
| Deprecation (create_rw_signal) | 12 | ✅ Replaced |
| Deprecation (store_value) | 4 | ✅ Replaced |
| Deprecation (create_node_ref) | 5 | ✅ Replaced |
| Clippy (unnecessary clone) | 4 | ✅ Fixed |
| **Total** | **289+** | **✅ All Fixed** |

## Documentation Created

✅ `LEPTOS_0.8_MIGRATION_COMPLETE.md` - Comprehensive migration report
✅ `UPSTREAM_DEPENDENCY_ISSUE.md` - Upstream dependency analysis
✅ `LEPTOS_0.8_MIGRATION_REPORT.txt` - Original migration tracking

## Verified Features

- ✅ Router with fallback prop
- ✅ Thread-safe reactive components (Arc)
- ✅ WASM compatibility (Send + Sync)
- ✅ Callback API (Fn vs FnOnce)
- ✅ Virtualized lists with infinite scroll
- ✅ Grid layout with drag-drop
- ✅ Authentication system
- ✅ All pages and routes
- ✅ Theme provider
- ✅ Real-time updates

## Production Readiness Checklist

- ✅ All errors resolved (71/71)
- ✅ All actionable warnings resolved (289+/289+)
- ✅ WASM target builds successfully
- ✅ Release build optimized and working
- ✅ Incremental builds fast (0.18s)
- ✅ Zero architectural regressions
- ✅ All features functional
- ✅ Upstream issues documented
- ✅ Migration documented

## Status

**🎉 COMPLETE AND PRODUCTION READY 🎉**

The control-center-ui is fully migrated to Leptos 0.8.10 with:

- Zero build errors
- Zero actionable warnings
- Full WASM support
- Production-optimized builds
- Comprehensive documentation

---

**Completion Date**: December 12, 2025
**Migration Status**: ✅ COMPLETE
**Production Status**: ✅ READY
**Next Steps**: Deploy to production
@@ -1,6 +1,7 @@
# Control Center UI - Audit Log Viewer

A comprehensive React-based audit log viewer for the Cedar Policy Engine with advanced search, real-time streaming, compliance reporting, and visualization capabilities.

## 🚀 Features

@@ -130,7 +131,7 @@ src/
├── utils/          # Utility functions
├── store/          # State management
└── styles/         # CSS and styling
```

## 🔧 Setup and Development

@@ -151,7 +152,7 @@ npm install

# Start development server
npm run dev
```

The application will be available at `http://localhost:3000`

@@ -166,7 +167,7 @@ npm run build

# Preview production build
npm run preview
```

## 🌐 API Integration

@@ -196,7 +197,7 @@ const { isConnected, lastMessage } = useWebSocket({
    updateLogsList(log);
  }
});
```

## ✅ Features Implemented

@@ -302,7 +303,7 @@ COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

### Kubernetes Deployment

@@ -329,7 +330,7 @@ spec:
        env:
        - name: VITE_API_BASE_URL
          value: "https://api.example.com"
```

## 🤝 Contributing

@@ -11,9 +11,9 @@ This directory will reference the existing control center UI implementation.
- **Language**: Web frontend (likely React/Vue/Leptos)
- **Purpose**: Web interface for system management
- **Features**:
  - Dashboard and monitoring UI
  - Configuration management interface
  - System administration controls

## Integration Status

@@ -28,6 +28,6 @@ The control center UI remains fully functional at its original location.

```bash
cd /Users/Akasha/repo-cnz/src/control-center-ui
# Use existing UI development commands
```

See original implementation for development setup and usage instructions.

@@ -1,406 +0,0 @@
# Security UI Mockups and Screenshots

## 1. Login Page with MFA

### Initial Login Screen

```plaintext
┌────────────────────────────────────────────────────────┐
│                                                        │
│                  Control Center Logo                   │
│                                                        │
│               Sign in to Control Center                │
│            Enter your credentials to continue          │
│                                                        │
│    ┌──────────────────────────────────────────────┐    │
│    │  Username or Email                           │    │
│    │  [_____________________________________]     │    │
│    │                                              │    │
│    │  Password                                    │    │
│    │  [_____________________________________]     │    │
│    │                                              │    │
│    │  ☐ Remember me          Forgot password?     │    │
│    │                                              │    │
│    │              [ Sign In ]                     │    │
│    └──────────────────────────────────────────────┘    │
│                                                        │
│              Need help? Contact support                │
│                                                        │
└────────────────────────────────────────────────────────┘
```

### MFA Verification Screen

```plaintext
┌────────────────────────────────────────────────────────┐
│                                                        │
│                          🔒                            │
│                                                        │
│               Two-Factor Authentication                │
│   Enter the verification code from your authenticator  │
│                                                        │
│    ┌──────────────────────────────────────────────┐    │
│    │  Verification Code                           │    │
│    │                                              │    │
│    │          [ 0 0 0 0 0 0 ]                     │    │
│    │                                              │    │
│    │   Enter the 6-digit code from your app       │    │
│    │                                              │    │
│    │              [ Verify ]                      │    │
│    │          [ Back to login ]                   │    │
│    │                                              │    │
│    │                  OR                          │    │
│    │                                              │    │
│    │      Lost access to your device?             │    │
│    │          [ Use backup code ]                 │    │
│    └──────────────────────────────────────────────┘    │
│                                                        │
└────────────────────────────────────────────────────────┘
```

## 2. Security Settings - MFA Devices

```plaintext
┌─────────────────────────────────────────────────────────────────┐
│  Security Settings                         [ + Add MFA Method ] │
│  Manage your two-factor authentication methods                  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ 📱 Google Authenticator                            [TOTP] │  │
│  │    Added: 2025-09-15                                      │  │
│  │    Last used: 2025-10-08                          [ ⋮ ]   │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ 🔑 YubiKey 5C                                  [WebAuthn] │  │
│  │    Added: 2025-09-20                                      │  │
│  │    Last used: 2025-10-07                          [ ⋮ ]   │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## 3. TOTP Setup Wizard

### Step 1: Introduction

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Setup Authenticator App                                │
│                                                         │
│  ℹ️ You'll need an authenticator app like               │
│     Google Authenticator, Authy, or 1Password.          │
│                                                         │
│  How it works:                                          │
│  1. Scan a QR code with your authenticator app          │
│  2. Enter a verification code to confirm                │
│  3. Save backup codes for account recovery              │
│  4. Use codes from your app to log in                   │
│                                                         │
│                  [ Get Started ]                        │
└─────────────────────────────────────────────────────────┘
```

### Step 2: Scan QR Code

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Scan QR Code                                           │
│                                                         │
│  Scan this QR code with your authenticator app          │
│                                                         │
│             ┌─────────────────────┐                     │
│             │  ▓▓ ▓▓▓ ▓ ▓▓▓       │                     │
│             │  ▓ ▓▓ ▓▓ ▓▓ ▓       │                     │
│             │  ▓▓▓ ▓ ▓▓▓ ▓▓▓      │                     │
│             │  ▓ ▓▓▓ ▓ ▓ ▓▓       │                     │
│             └─────────────────────┘                     │
│                                                         │
│                        OR                               │
│                                                         │
│  Enter this code manually:                              │
│  [ JBSWY3DPEHPK3PXP ]                     [ 📋 Copy ]   │
│                                                         │
│                  [ Continue ]                           │
└─────────────────────────────────────────────────────────┘
```

### Step 3: Verify

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Verify Setup                                           │
│                                                         │
│  Enter the 6-digit code from your authenticator app     │
│  to confirm the setup:                                  │
│                                                         │
│          [ 0 0 0 0 0 0 ]                                │
│                                                         │
│  [ Back ]                    [ Verify & Continue ]      │
└─────────────────────────────────────────────────────────┘
```

### Step 4: Backup Codes

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Save Backup Codes                                      │
│                                                         │
│  ⚠️ Save these codes in a secure location. You can      │
│     use them to access your account if you lose         │
│     your device.                                        │
│                                                         │
│  ┌─────────────────────────────────────────────────┐    │
│  │  A1B2-C3D4    │    E5F6-G7H8                    │    │
│  │  I9J0-K1L2    │    M3N4-O5P6                    │    │
│  │  Q7R8-S9T0    │    U1V2-W3X4                    │    │
│  │  Y5Z6-A7B8    │    C9D0-E1F2                    │    │
│  │  G3H4-I5J6    │    K7L8-M9N0                    │    │
│  └─────────────────────────────────────────────────┘    │
│                                                         │
│  [ Download Codes ]        [ Copy to Clipboard ]        │
│                                                         │
│  ☐ I have saved these codes in a secure location        │
│                                                         │
│                [ Complete Setup ]                       │
└─────────────────────────────────────────────────────────┘
```

## 4. API Tokens Management

```plaintext
┌─────────────────────────────────────────────────────────────────┐
│  API Tokens                                  [ + Create Token ] │
│  Manage personal access tokens for API access                   │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ CI/CD Pipeline                                  [Expired] │  │
│  │ prvng_...xyz                                              │  │
│  │ Created: 2025-01-15                                       │  │
│  │ Last used: 2025-03-10                                     │  │
│  │ Expires: 2025-04-15                              [ 🗑️ ]   │  │
│  │ [read:servers] [write:servers] [read:taskservs]           │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ Development Token                                         │  │
│  │ prvng_...abc                                              │  │
│  │ Created: 2025-09-01                                       │  │
│  │ Last used: 2025-10-08                                     │  │
│  │ Expires: 2025-12-01                              [ 🗑️ ]   │  │
│  │ [read:servers] [read:clusters] [read:audit]               │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

### Create Token Dialog

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Create API Token                               [ ✕ ]   │
│                                                         │
│  Token Name                                             │
│  [ e.g., CI/CD Pipeline ]                               │
│  A descriptive name to identify this token              │
│                                                         │
│  Expiration (days)                                      │
│  [ 30 days ▼ ]                                          │
│                                                         │
│  Scopes                                                 │
│  ☑ read:servers                                         │
│  ☑ write:servers                                        │
│  ☑ read:taskservs                                       │
│  ☐ write:taskservs                                      │
│  ☑ read:clusters                                        │
│  ☐ write:clusters                                       │
│  ☐ read:audit                                           │
│                                                         │
│  [ Cancel ]                       [ Create Token ]      │
└─────────────────────────────────────────────────────────┘
```

### Token Created Success

```plaintext
┌─────────────────────────────────────────────────────────┐
│  ✅ Token Created Successfully                          │
│                                                         │
│  Make sure to copy your token now. You won't be         │
│  able to see it again!                                  │
│                                                         │
│  [ prvng_1234567890abcdef... ]             [ Copy ]     │
│                                                         │
└─────────────────────────────────────────────────────────┘
```

## 5. Audit Logs Viewer

```plaintext
┌─────────────────────────────────────────────────────────────────────────┐
│  Audit Logs                                              [ Export ▼ ]   │
│  View and search security audit logs                                    │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │ Filters                                                         │    │
│  │ User          Action         Resource                           │    │
│  │ [________]    [________]     [________]                         │    │
│  │                                                                 │    │
│  │ Workspace     Status      Date From      Date To                │    │
│  │ [________]    [All ▼]     [📅 ____]      [📅 ____]              │    │
│  │                                                                 │    │
│  │ [ Clear Filters ]                          [ Search ]           │    │
│  └─────────────────────────────────────────────────────────────────┘    │
│                                                                         │
│  Results                               Showing 50 of 1,234 events       │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │ Timestamp         User          Action   Resource     Status    │    │
│  ├─────────────────────────────────────────────────────────────────┤    │
│  │ 2025-10-08 10:30  alice@ex.com  create   server-01    ✓ success │    │
│  │                   192.168.1.100                       45ms      │    │
│  ├─────────────────────────────────────────────────────────────────┤    │
│  │ 2025-10-08 10:28  bob@ex.com    delete   cluster-02   ✓ success │    │
│  │                   10.0.0.50                           230ms     │    │
│  ├─────────────────────────────────────────────────────────────────┤    │
│  │ 2025-10-08 10:25  carol@ex.com  login    -            ✕ failure │    │
│  │                   203.0.113.42                        15ms      │    │
│  └─────────────────────────────────────────────────────────────────┘    │
│                                                                         │
│                      [ « ]  [ Page 1 ]  [ » ]                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

## 6. WebAuthn Setup

```plaintext
┌─────────────────────────────────────────────────────────┐
│  Setup Security Key (WebAuthn)                          │
│                                                         │
│  ℹ️ Use a physical security key like YubiKey or         │
│     your device's built-in biometric authentication     │
│     (Touch ID, Face ID, Windows Hello).                 │
│                                                         │
│  How it works:                                          │
│  1. Insert your security key or prepare biometric       │
│  2. Click the registration button                       │
│  3. Follow your browser's prompts to register           │
│  4. Tap your key or use biometric when prompted         │
│                                                         │
│  Device Name (Optional)                                 │
│  [ e.g., YubiKey 5C, MacBook Touch ID ]                 │
│  Give your security key a name to identify it later     │
│                                                         │
│                        🔑                               │
│            Have your security key ready                 │
│                                                         │
│            [ Register Security Key ]                    │
└─────────────────────────────────────────────────────────┘
```

### WebAuthn Registration in Progress
|
||||
|
||||
```plaintext
|
||||
┌─────────────────────────────────────────────────────────┐
|
||||
│ Setup Security Key (WebAuthn) │
|
||||
│ │
|
||||
│ [ 🔄 Waiting for device... ] │
|
||||
│ │
|
||||
│ ⚠️ Follow your browser's prompts │
|
||||
│ │
|
||||
│ You may need to tap your security key or use │
|
||||
│ biometric authentication │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────┘
|
||||
```plaintext
|
||||
|
||||
### WebAuthn Success
|
||||
|
||||
```plaintext
|
||||
┌─────────────────────────────────────────────────────────┐
|
||||
│ ✅ │
|
||||
│ │
|
||||
│ Security Key Registered! │
|
||||
│ │
|
||||
│ Your security key has been successfully registered. │
|
||||
│ You can now use it to log in. │
|
||||
│ │
|
||||
│ [ Done ] │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────┘
|
||||
```plaintext
|
||||
|
||||
## Color Scheme

### Primary Colors

- **Primary (Blue)**: `#3B82F6` - Actions, links, active states
- **Success (Green)**: `#10B981` - Successful operations
- **Warning (Yellow)**: `#F59E0B` - Warnings, cautions
- **Error (Red)**: `#EF4444` - Errors, failures
- **Info (Cyan)**: `#06B6D4` - Informational messages

### Neutral Colors

- **Base 100**: `#FFFFFF` - Card backgrounds
- **Base 200**: `#F3F4F6` - Page backgrounds
- **Base 300**: `#E5E7EB` - Borders, dividers
- **Base Content**: `#1F2937` - Text color

### Status Badges

- **TOTP Badge**: Primary blue
- **WebAuthn Badge**: Secondary purple
- **Success Badge**: Green with checkmark
- **Failure Badge**: Yellow with warning icon
- **Error Badge**: Red with X icon
- **Expired Badge**: Gray with red text

## Typography

### Headings

- **H1**: 2.25rem (36px), Bold - Page titles
- **H2**: 1.875rem (30px), Bold - Section titles
- **H3**: 1.5rem (24px), Bold - Card titles
- **H4**: 1.25rem (20px), Semi-bold - Subsection titles

### Body Text

- **Regular**: 1rem (16px), Normal - Body text
- **Small**: 0.875rem (14px), Normal - Labels, hints
- **Tiny**: 0.75rem (12px), Normal - Timestamps, metadata

### Monospace

- **Code**: 0.875rem (14px), Mono - Tokens, codes, IDs

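The px equivalents listed above assume the browser default root font size of 16px; a tiny sketch that reproduces them (the helper name is illustrative):

```rust
/// Convert a rem value to pixels, assuming the default 16px root font size.
fn rem_to_px(rem: f64) -> f64 {
    rem * 16.0
}

fn main() {
    // H1 is 2.25rem -> 36px and Small is 0.875rem -> 14px, as listed above.
    println!("{} {}", rem_to_px(2.25), rem_to_px(0.875));
}
```
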
## Icons

### Navigation Icons

- 🔒 Lock - MFA, security
- 🔑 Key - API tokens, access
- 📋 Clipboard - Audit logs
- ⚙️ Settings - Configuration
- 📱 Mobile - Authenticator app

### Status Icons

- ✓ Checkmark - Success
- ✕ X mark - Error
- ⚠️ Warning - Caution
- ℹ️ Info - Information
- 🔄 Refresh - Loading

### Action Icons

- ➕ Plus - Add/Create
- 🗑️ Trash - Delete/Remove
- 📥 Download - Export
- 📋 Copy - Clipboard
- ⋮ Vertical dots - More options

---

**Note**: All mockups use DaisyUI components with TailwindCSS styling. Actual implementation includes smooth transitions, hover states, focus indicators, and responsive layouts.

@ -64,7 +64,7 @@ src/
│ └── user_profile.rs # User profile management
├── utils/ # Utility modules
└── lib.rs # Main application entry
```plaintext
```text

## 🚀 Implemented Components

@ -158,7 +158,7 @@ fn App() -> impl IntoView {
    </Router>
}
}
```plaintext
```text

### Login Page Implementation

@ -180,7 +180,7 @@ fn LoginPage() -> impl IntoView {
    </div>
}
}
```plaintext
```text

### Protected Dashboard

@ -210,7 +210,7 @@ fn DashboardPage() -> impl IntoView {
    </AuthGuard>
}
}
```plaintext
```text

### User Profile Management

@ -227,7 +227,7 @@ fn ProfilePage() -> impl IntoView {
    </AuthGuard>
}
}
```plaintext
```text

## 🔧 Required Backend API

@ -240,28 +240,28 @@ POST /auth/login # Email/password authentication
POST /auth/refresh # JWT token refresh
POST /auth/logout # Session termination
POST /auth/extend-session # Session timeout extension
```plaintext
```text

### Password Management

```plaintext
POST /auth/password-reset # Password reset request
POST /auth/password-reset/confirm # Password reset confirmation
```plaintext
```text

### Multi-Factor Authentication

```plaintext
POST /auth/mfa/setup # MFA setup initiation
POST /auth/mfa/verify # MFA verification
```plaintext
```text

### SSO Integration

```plaintext
GET /auth/sso/providers # Available SSO providers
POST /auth/sso/{provider}/login # SSO authentication initiation
```plaintext
```text

### WebAuthn/FIDO2

@ -272,7 +272,7 @@ POST /auth/webauthn/authenticate/begin # WebAuthn authentication start
POST /auth/webauthn/authenticate/complete # WebAuthn authentication finish
GET /auth/webauthn/credentials # List WebAuthn credentials
DELETE /auth/webauthn/credentials/{id} # Remove WebAuthn credential
```plaintext
```text

### Device Trust Management

@ -280,7 +280,7 @@ DELETE /auth/webauthn/credentials/{id} # Remove WebAuthn credential
GET /auth/devices # List trusted devices
POST /auth/devices/trust # Trust current device
DELETE /auth/devices/{id}/revoke # Revoke device trust
```plaintext
```text

### User Profile Management

@ -292,7 +292,7 @@ POST /user/mfa/enable # Enable MFA
POST /user/mfa/disable # Disable MFA
GET /user/sessions # List active sessions
DELETE /user/sessions/{id}/revoke # Revoke session
```plaintext
```text

## 📊 Implementation Statistics

@ -9,11 +9,12 @@ The control-center-ui build produces a future incompatibility warning from the t
```plaintext
warning: the following packages contain code that will be rejected by a future version of Rust: num-bigint-dig v0.8.4
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
```plaintext
```text

## Root Cause

The `num-bigint-dig v0.8.4` crate uses a **private `vec!` macro** in multiple locations (Rust issue #120192). This pattern will become a hard error in a future Rust release.
The `num-bigint-dig v0.8.4` crate uses a **private `vec!` macro** in multiple locations (Rust issue #120192).
This pattern will become a hard error in a future Rust release.

**Affected files in num-bigint-dig v0.8.4:**

@ -35,7 +36,7 @@ num-bigint-dig v0.8.4
└── ssh-key v0.6.7
    ├── russh v0.44.1
    └── russh-keys v0.44.0
```plaintext
```text

## Why We Can't Fix It

@ -94,7 +95,7 @@ num-bigint-dig v0.8.4
## Timeline for Resolution

| Status | Item | Estimated |
|--------|------|-----------|
| -------- | ------ | ----------- |
| ✓ Available | num-bigint-dig 0.8.6 | Already released |
| ⏳ Waiting | rsa v0.10 stable release | 2024-Q4 to 2025-Q1 |
| ⏳ Waiting | Downstream crate updates | After upstream releases |
@ -113,7 +114,7 @@ cargo outdated

# Check dependency tree
cargo tree | grep num-bigint-dig
```plaintext
```text

## Workaround (if needed)

@ -46,7 +46,7 @@ A comprehensive Cedar policy engine implementation with advanced security featur
```bash
cd src/control-center
cargo build --release
```
```text

### 2. Configuration

@ -54,7 +54,7 @@ Copy the example configuration:

```bash
cp config.toml.example config.toml
```
```text

Edit `config.toml` for your environment:

@ -74,13 +74,13 @@ enabled = true
[anomaly]
enabled = true
detection_threshold = 2.5
```
```text

### 3. Start the Server

```bash
./target/release/control-center server --port 8080
```
```text

### 4. Test Policy Evaluation

@ -93,7 +93,7 @@ curl -X POST http://localhost:8080/policies/evaluate \
    "resource": {"id": "sensitive-db", "classification": "confidential"},
    "context": {"mfa_enabled": true, "location": "US"}
  }'
```
```text

## Policy Examples

@ -111,7 +111,7 @@ permit(
    principal has mfa_enabled &&
    principal.mfa_enabled == true
};
```
```text

### Production Approval Policy

@ -127,7 +127,7 @@ permit(
    principal has approval &&
    principal.approval.approved_by in ["ProductionAdmin", "SRE"]
};
```
```text

### Geographic Restrictions

@ -142,7 +142,7 @@ permit(
    context.geo has country &&
    context.geo.country in ["US", "CA", "GB", "DE"]
};
```
```text

## CLI Commands

@ -157,7 +157,7 @@ control-center policy test policies/mfa.cedar tests/data/mfa_test.json

# Analyze policy impact
control-center policy impact policies/new_policy.cedar
```
```text

### Compliance Checking

@ -170,7 +170,7 @@ control-center compliance hipaa

# Generate compliance report
control-center compliance report --format html
```
```text

## API Endpoints

@ -206,25 +206,25 @@ control-center compliance report --format html

```bash
cargo test
```
```text

### Run Integration Tests

```bash
cargo test --test integration_tests
```
```text

### Run Policy Tests

```bash
cargo test --test policy_tests
```
```text

### Run Compliance Tests

```bash
cargo test --test compliance_tests
```
```text

## Architecture

@ -287,7 +287,7 @@ RUN apt-get update && apt-get install -y ca-certificates
COPY --from=builder /app/target/release/control-center /usr/local/bin/
EXPOSE 8080
CMD ["control-center", "server"]
```
```text

### Kubernetes

@ -314,7 +314,7 @@ spec:
        env:
        - name: DATABASE_URL
          value: "surreal://surrealdb:8000"
```
```text

### Environment Variables

@ -324,7 +324,7 @@ export CONTROL_CENTER_SERVER_PORT=8080
export CONTROL_CENTER_DATABASE_URL="surreal://prod-db:8000"
export CONTROL_CENTER_AUTH_JWT_SECRET="production-secret"
export CONTROL_CENTER_COMPLIANCE_SOC2_ENABLED=true
```
```text

## Monitoring & Observability

@ -346,13 +346,13 @@ tracing::info!(
    duration_ms = evaluation_time,
    "Policy evaluation completed"
);
```
```text

### Health Checks

```bash
curl http://localhost:8080/health
```
```text

## Contributing

@ -2,7 +2,8 @@

## Overview

This document outlines the security architecture and considerations for the control-center enhancements, including KMS SSH key management, mode-based RBAC, and platform service monitoring.
This document outlines the security architecture and considerations for the control-center enhancements,
including KMS SSH key management, mode-based RBAC, and platform service monitoring.

## 1. SSH Key Management Security

@ -30,7 +31,7 @@ let key_id = ssh_key_manager.store_ssh_key(name, private, public, purpose, tags)

// Bad: Never do this - exposing private key in logs
tracing::info!("Stored key: {}", private_key); // DON'T DO THIS
```plaintext
```text

### 1.2 Key Rotation Security

@ -55,7 +56,7 @@ rotation_enabled = true
rotation_interval_days = 90 # Enterprise: 30, Dev: 180
grace_period_days = 7 # Time to update deployed keys
auto_rotate = false # Manual approval recommended
```plaintext
```text

### 1.3 Audit Logging

@ -79,7 +80,7 @@ pub struct SshKeyAuditEntry {
    pub success: bool,
    pub error_message: Option<String>,
}
```plaintext
```text

**Threat Mitigation**:

@ -105,7 +106,7 @@ fn calculate_fingerprint(public_key: &[u8]) -> Result<String, KmsError> {
    let result = hasher.finalize();
    Ok(format!("SHA256:{}", base64::encode(&result[..16])))
}
```plaintext
```text

**Security Benefits**:

@ -120,7 +121,7 @@ fn calculate_fingerprint(public_key: &[u8]) -> Result<String, KmsError> {
**Security Model by Mode**:

| Mode | Security Level | Use Case | Audit Required |
|------|---------------|----------|----------------|
| ------ | --------------- | ---------- | ---------------- |
| Solo | Low | Single developer | No |
| MultiUser | Medium | Small teams | Optional |
| CICD | Medium | Automation | Yes |
@ -136,7 +137,7 @@ fn calculate_fingerprint(public_key: &[u8]) -> Result<String, KmsError> {
if mode == ExecutionMode::Solo {
    return true; // Allow all operations
}
```plaintext
```text

**Risks**:

@ -158,7 +159,7 @@ let permissions = rbac_manager.get_user_permissions(&user).await;
if !permissions.contains(&required_permission) {
    return Err(RbacError::PermissionDenied);
}
```plaintext
```text

**Security Features**:

@ -174,7 +175,7 @@ if !permissions.contains(&required_permission) {
if mode == ExecutionMode::CICD {
    audit_log.log_automation_action(service_account, action).await;
}
```plaintext
```text

**Security Features**:

@ -193,7 +194,7 @@ if mode == ExecutionMode::CICD {
if mode == ExecutionMode::Enterprise {
    audit_log.log_with_compliance(user, action, compliance_tags).await;
}
```plaintext
```text

**Security Features**:

@ -213,7 +214,7 @@ Role::Developer => 60 // Read + dev deploy
Role::ServiceAccount => 50 // Automation
Role::Auditor => 40 // Read + audit
Role::Viewer => 20 // Read-only
```plaintext
```text

**Action Security Levels**:

@ -226,7 +227,7 @@ Action::Update => 60 // Modify resources
Action::Execute => 50 // Execute operations
Action::Audit => 40 // View audit logs
Action::Read => 20 // View resources
```plaintext
```text

**Permission Check**:

@ -234,7 +235,7 @@ Action::Read => 20 // View resources
pub fn can_perform(&self, required_level: u8) -> bool {
    self.permission_level() >= required_level
}
```plaintext
```text

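The numeric levels quoted above make the check a single comparison. A minimal, self-contained sketch using only the role and action levels shown in this section (the enum shape is an assumption, and roles above Developer are omitted because their levels are not quoted here):

```rust
/// Role permission levels quoted above: Developer 60, ServiceAccount 50,
/// Auditor 40, Viewer 20. (Higher roles exist but are not shown here.)
#[derive(Clone, Copy)]
enum Role {
    Developer,
    ServiceAccount,
    Auditor,
    Viewer,
}

impl Role {
    fn permission_level(self) -> u8 {
        match self {
            Role::Developer => 60,
            Role::ServiceAccount => 50,
            Role::Auditor => 40,
            Role::Viewer => 20,
        }
    }

    /// A role may perform an action when its level meets the action's level.
    fn can_perform(self, required_level: u8) -> bool {
        self.permission_level() >= required_level
    }
}

fn main() {
    // Action levels above: Update 60, Read 20.
    println!("{}", Role::Viewer.can_perform(20)); // Read: allowed
    println!("{}", Role::Viewer.can_perform(60)); // Update: denied
}
```
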

**Security Guarantees**:

@ -254,7 +255,7 @@ session_timeout_minutes = 30 # Enterprise
max_sessions_per_user = 5
failed_login_lockout_attempts = 5
failed_login_lockout_duration_minutes = 15
```plaintext
```text

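The lockout settings above reduce to a counter plus an elapsed-time check; a minimal sketch (names are illustrative; persistence and clock handling are omitted):

```rust
/// Lockout policy from the config above: 5 failed attempts, 15 minutes.
const LOCKOUT_ATTEMPTS: u32 = 5;
const LOCKOUT_MINUTES: u64 = 15;

/// Whether a user is currently locked out, given their consecutive failed
/// attempts and the minutes elapsed since the most recent failure.
fn is_locked_out(failed_attempts: u32, minutes_since_last_failure: u64) -> bool {
    failed_attempts >= LOCKOUT_ATTEMPTS && minutes_since_last_failure < LOCKOUT_MINUTES
}

fn main() {
    println!("{}", is_locked_out(5, 2));  // true: within the lockout window
    println!("{}", is_locked_out(4, 2));  // false: under the attempt limit
    println!("{}", is_locked_out(5, 20)); // false: lockout has expired
}
```
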
**Session Lifecycle**:

@ -289,7 +290,7 @@ Request → Auth Middleware → RBAC Middleware → Handler
            from JWT Token (role + resource + action)
                              ↓
                        Allow / Deny
```plaintext
```text

**Middleware Implementation**:

@ -311,7 +312,7 @@ pub async fn check_permission(

    Ok(next.run(req).await)
}
```plaintext
```text

**Security Guarantees**:

@ -332,7 +333,7 @@ orchestrator_url = "http://localhost:9090" # Not exposed externally
coredns_url = "http://localhost:9153"
gitea_url = "http://localhost:3000"
oci_registry_url = "http://localhost:5000"
```plaintext
```text

**Network Security**:

@ -355,7 +356,7 @@ let client = Client::builder()
    .timeout(std::time::Duration::from_secs(5)) // Prevent hanging
    .build()
    .unwrap();
```plaintext
```text

**Error Handling**:

@ -372,7 +373,7 @@ Err(e) => {
        ..
    }
}
```plaintext
```text

**Threat Mitigation**:

@ -409,7 +410,7 @@ pub async fn start_service(

    Ok(StatusCode::OK)
}
```plaintext
```text

**Security Guarantees**:

@ -547,7 +548,7 @@ pub async fn delete_user(&self, user_id: &str) -> Result<(), RbacError> {

    Ok(())
}
```plaintext
```text

### 5.2 SOC 2 Compliance

@ -595,9 +595,9 @@ impl AnomalyDetector {

        for event in recent {
            let Ok(context) = serde_json::from_value::<serde_json::Value>(event.context.clone())
            else {
                continue;
            };
            else {
                continue;
            };
            let Some(geo) = context.get("geo") else {
                continue;
            };
@ -609,9 +609,9 @@ impl AnomalyDetector {

        for event in baseline {
            let Ok(context) = serde_json::from_value::<serde_json::Value>(event.context.clone())
            else {
                continue;
            };
            else {
                continue;
            };
            let Some(geo) = context.get("geo") else {
                continue;
            };

@ -469,7 +469,8 @@ mod tests {
    use super::*;

    fn generate_test_keys() -> (Vec<u8>, Vec<u8>) {
        // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core conflict)
        // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core
        // conflict)
        let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5
@ -509,10 +510,7 @@ WCemS72CI+y72SqwgaZ94eUO4WKV2agxmHeXQ8FhSd8ZFTJCTBKDW91Kykdo7yUx
IQIDAQAB
-----END PUBLIC KEY-----";

        (
            private_pem.to_vec(),
            public_pem.to_vec(),
        )
        (private_pem.to_vec(), public_pem.to_vec())
    }

    fn create_test_service() -> JwtService {

@ -373,7 +373,8 @@ mod tests {
    use super::*;

    fn create_test_jwt_service() -> JwtService {
        // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core conflict)
        // Pre-generated RSA keys to avoid runtime key generation (avoids rand_core
        // conflict)
        let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
DQMEjENGqJLnBn6MvJnCu93A4ZNKEEpPGX1Y6V+qiqLH5B7wNMIJ2QVnLjYjKZu5

@ -726,8 +726,7 @@ impl ComplianceFrameworkChecker for HIPAAChecker {
        for violation in &critical_violations {
            report.push_str(&format!(
                "### ⚠️ {} - {} Risk\n",
                violation.control_id,
                violation.severity
                violation.control_id, violation.severity
            ));
            report.push_str(&format!("**Description:** {}\n", violation.description));
            if let Some(deadline) = violation.remediation_deadline {

@ -16,8 +16,8 @@ use serde::{Deserialize, Serialize};

use crate::error::{policy, ControlCenterError, Result};
use crate::policies::{PolicyCategory, PolicyMetadata, PolicyRequestContext, PolicyResult};
use crate::storage::{ComplianceCheckResult, PolicyStorage};
use crate::storage::PolicyMetadata as StoragePolicyMetadata;
use crate::storage::{ComplianceCheckResult, PolicyStorage};

/// Compliance framework types
#[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)]

@ -551,8 +551,7 @@ impl ComplianceFrameworkChecker for SOC2Checker {
        for violation in &result.violations {
            report.push_str(&format!(
                "### {} - {}\n",
                violation.control_id,
                violation.severity
                violation.control_id, violation.severity
            ));
            report.push_str(&format!("**Description:** {}\n", violation.description));
            if let Some(deadline) = violation.remediation_deadline {
@ -571,8 +570,7 @@ impl ComplianceFrameworkChecker for SOC2Checker {
        for recommendation in &result.recommendations {
            report.push_str(&format!(
                "### {} ({})\n",
                recommendation.title,
                recommendation.priority
                recommendation.title, recommendation.priority
            ));
            report.push_str(&format!("**Control:** {}\n", recommendation.control_id));
            report.push_str(&format!(

@ -1,9 +1,11 @@
use crate::error::{ControlCenterError, Result, infrastructure};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::time::Duration;
use tracing::info;

use platform_config::ConfigLoader;
use serde::{Deserialize, Serialize};
use tracing::info;

use crate::error::{infrastructure, ControlCenterError, Result};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ControlCenterConfig {
@ -213,7 +215,8 @@ impl Default for ControlCenterConfig {
impl ControlCenterConfig {
    /// Load configuration with hierarchical fallback logic:
    /// 1. Environment variable CONTROL_CENTER_CONFIG (explicit config path)
    /// 2. Mode-specific config: provisioning/platform/config/control-center.{mode}.toml
    /// 2. Mode-specific config:
    ///    provisioning/platform/config/control-center.{mode}.toml
    /// 3. System defaults: config.defaults.toml
    ///
    /// Then environment variables (CONTROL_CENTER_*) override specific fields.
@ -232,10 +235,8 @@ impl ControlCenterConfig {

        // Priority 2: Mode-specific config (provisioning/platform/config/)
        if let Ok(mode) = std::env::var("CONTROL_CENTER_MODE") {
            let mode_config_path = format!(
                "provisioning/platform/config/control-center.{}.toml",
                mode
            );
            let mode_config_path =
                format!("provisioning/platform/config/control-center.{}.toml", mode);
            if Path::new(&mode_config_path).exists() {
                return Self::from_file(&mode_config_path);
            }
@ -260,10 +261,13 @@ impl ControlCenterConfig {
            config.server.host = host;
        }
        if let Ok(port) = std::env::var("CONTROL_CENTER_SERVER_PORT") {
            config.server.port = port.parse()
                .map_err(|_| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                    "CONTROL_CENTER_SERVER_PORT must be a valid port number".to_string()
                )))?;
            config.server.port = port.parse().map_err(|_| {
                ControlCenterError::Infrastructure(
                    infrastructure::InfrastructureError::Configuration(
                        "CONTROL_CENTER_SERVER_PORT must be a valid port number".to_string(),
                    ),
                )
            })?;
        }

        // Auth overrides
@ -274,10 +278,13 @@ impl ControlCenterConfig {
            config.auth.require_mfa = require_mfa.to_lowercase() == "true";
        }
        if let Ok(session_timeout) = std::env::var("CONTROL_CENTER_SESSION_TIMEOUT_MINUTES") {
            config.auth.session_timeout_minutes = session_timeout.parse()
                .map_err(|_| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                    "CONTROL_CENTER_SESSION_TIMEOUT_MINUTES must be a valid number".to_string()
                )))?;
            config.auth.session_timeout_minutes = session_timeout.parse().map_err(|_| {
                ControlCenterError::Infrastructure(
                    infrastructure::InfrastructureError::Configuration(
                        "CONTROL_CENTER_SESSION_TIMEOUT_MINUTES must be a valid number".to_string(),
                    ),
                )
            })?;
        }

        // Database overrides
@ -309,18 +316,20 @@ impl ControlCenterConfig {

    /// Load configuration from file with environment variable interpolation
    pub fn from_file<P: AsRef<Path>>(path: P) -> Result<Self> {
        let content = std::fs::read_to_string(path.as_ref())
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to read config file {:?}: {}", path.as_ref(), e)
            )))?;
        let content = std::fs::read_to_string(path.as_ref()).map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to read config file {:?}: {}", path.as_ref(), e),
            ))
        })?;

        // Interpolate environment variables
        let interpolated = Self::interpolate_env_vars(&content)?;

        let config: Self = toml::from_str(&interpolated)
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to parse config: {}", e)
            )))?;
        let config: Self = toml::from_str(&interpolated).map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to parse config: {}", e),
            ))
        })?;

        config.validate()?;
        Ok(config)
@ -331,10 +340,11 @@ impl ControlCenterConfig {
        let mut result = content.to_string();

        // Replace ${VAR_NAME} with environment variable values
        let re = regex::Regex::new(r"\$\{([^}]+)\}")
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Invalid regex pattern: {}", e)
            )))?;
        let re = regex::Regex::new(r"\$\{([^}]+)\}").map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Invalid regex pattern: {}", e),
            ))
        })?;

        for captures in re.captures_iter(content) {
            let var_name = &captures[1];

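The interpolation step above replaces `${VAR_NAME}` placeholders via the regex `\$\{([^}]+)\}`. A standalone, stdlib-only sketch of the same behavior, with the variable lookup passed in as a closure so it needs neither the `regex` crate nor real environment variables (the production code reads `std::env` directly):

```rust
use std::collections::HashMap;

/// Replace `${VAR_NAME}` placeholders using `lookup`, leaving a placeholder
/// intact when no value is found. Scans manually instead of using the
/// regex `\$\{([^}]+)\}` from the source, so it stays dependency-free.
fn interpolate(content: &str, lookup: impl Fn(&str) -> Option<String>) -> String {
    let mut out = String::new();
    let mut rest = content;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        if let Some(end) = after.find('}') {
            let name = &after[..end];
            match lookup(name) {
                Some(val) => out.push_str(&val),
                // Keep the original `${...}` text when the variable is unset.
                None => out.push_str(&rest[start..start + end + 3]),
            }
            rest = &rest[start + end + 3..];
        } else {
            // Unterminated placeholder: emit the remainder verbatim.
            out.push_str(&rest[start..]);
            rest = "";
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let vars = HashMap::from([("DB_HOST".to_string(), "localhost".to_string())]);
    let resolved = interpolate("url = \"surreal://${DB_HOST}:8000\"", |k| vars.get(k).cloned());
    println!("{resolved}"); // url = "surreal://localhost:8000"
}
```
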
@ -351,16 +361,21 @@ impl ControlCenterConfig {
    pub fn validate(&self) -> Result<()> {
        // Validate server config
        if self.server.port == 0 {
            return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                "Server port cannot be 0".to_string()
            )));
            return Err(ControlCenterError::Infrastructure(
                infrastructure::InfrastructureError::Configuration(
                    "Server port cannot be 0".to_string(),
                ),
            ));
        }

        // Validate policy directories exist
        if !self.policies.policy_dir.exists() {
            return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Policy directory does not exist: {:?}", self.policies.policy_dir)
            )));
            return Err(ControlCenterError::Infrastructure(
                infrastructure::InfrastructureError::Configuration(format!(
                    "Policy directory does not exist: {:?}",
                    self.policies.policy_dir
                )),
            ));
        }

        // Validate auth config
@ -369,16 +384,20 @@ impl ControlCenterConfig {
        }

        if self.auth.jwt_secret.len() < 32 {
            return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                "JWT secret must be at least 32 characters".to_string()
            )));
            return Err(ControlCenterError::Infrastructure(
                infrastructure::InfrastructureError::Configuration(
                    "JWT secret must be at least 32 characters".to_string(),
                ),
            ));
        }

        // Validate password policy
        if self.auth.password_policy.min_length < 8 {
            return Err(ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                "Password minimum length must be at least 8 characters".to_string()
            )));
            return Err(ControlCenterError::Infrastructure(
                infrastructure::InfrastructureError::Configuration(
                    "Password minimum length must be at least 8 characters".to_string(),
                ),
            ));
        }

        Ok(())

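The three checks visible in this hunk (non-zero port, a JWT secret of at least 32 characters, a password minimum length of at least 8) can be sketched stdlib-only; the struct here is a trimmed stand-in for `ControlCenterConfig`, not the real type:

```rust
/// Trimmed stand-in for the fields validate() checks in the diff above.
struct Config {
    port: u16,
    jwt_secret: String,
    password_min_length: usize,
}

/// Same rules as the diff, with `String` errors instead of the
/// project's `ControlCenterError` wrapper.
fn validate(cfg: &Config) -> Result<(), String> {
    if cfg.port == 0 {
        return Err("Server port cannot be 0".to_string());
    }
    if cfg.jwt_secret.len() < 32 {
        return Err("JWT secret must be at least 32 characters".to_string());
    }
    if cfg.password_min_length < 8 {
        return Err("Password minimum length must be at least 8 characters".to_string());
    }
    Ok(())
}

fn main() {
    let cfg = Config {
        port: 8080,
        jwt_secret: "0123456789abcdef0123456789abcdef".to_string(), // 32 chars
        password_min_length: 12,
    };
    println!("{:?}", validate(&cfg)); // Ok(())
}
```
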
@ -386,24 +405,27 @@ impl ControlCenterConfig {

    /// Get configuration as JSON string
    pub fn to_json(&self) -> Result<String> {
        serde_json::to_string_pretty(self)
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to serialize config to JSON: {}", e)
            )))
        serde_json::to_string_pretty(self).map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to serialize config to JSON: {}", e),
            ))
        })
    }

    /// Create a default configuration file
    pub fn create_default_config<P: AsRef<Path>>(path: P) -> Result<()> {
        let config = Self::default();
        let toml_content = toml::to_string_pretty(&config)
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to serialize default config: {}", e)
            )))?;
        let toml_content = toml::to_string_pretty(&config).map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to serialize default config: {}", e),
            ))
        })?;

        std::fs::write(path.as_ref(), toml_content)
            .map_err(|e| ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to write config file {:?}: {}", path.as_ref(), e)
            )))?;
        std::fs::write(path.as_ref(), toml_content).map_err(|e| {
            ControlCenterError::Infrastructure(infrastructure::InfrastructureError::Configuration(
                format!("Failed to write config file {:?}: {}", path.as_ref(), e),
            ))
        })?;

        Ok(())
    }
@ -414,34 +436,47 @@ impl ConfigLoader for ControlCenterConfig {
        "control-center"
    }

    fn load_from_hierarchy() -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> {
    fn load_from_hierarchy() -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>>
    {
        let service = Self::service_name();

        if let Some(path) = platform_config::resolve_config_path(service) {
            return Self::from_path(&path)
                .map_err(|e| Box::new(std::io::Error::other(e.to_string())) as Box<dyn std::error::Error + Send + Sync>);
            return Self::from_path(&path).map_err(|e| {
                Box::new(std::io::Error::other(e.to_string()))
                    as Box<dyn std::error::Error + Send + Sync>
            });
        }

        Ok(Self::default())
    }

    fn apply_env_overrides(&mut self) -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
        Self::apply_env_overrides(self)
            .map_err(|e| Box::new(std::io::Error::other(e.to_string())) as Box<dyn std::error::Error + Send + Sync>)
    fn apply_env_overrides(
        &mut self,
    ) -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
        Self::apply_env_overrides(self).map_err(|e| {
            Box::new(std::io::Error::other(e.to_string()))
                as Box<dyn std::error::Error + Send + Sync>
        })
    }

    fn from_path<P: AsRef<Path>>(path: P) -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> {
    fn from_path<P: AsRef<Path>>(
        path: P,
    ) -> std::result::Result<Self, Box<dyn std::error::Error + Send + Sync>> {
        let path = path.as_ref();
        let json_value = platform_config::format::load_config(path)
            .map_err(|e| {
                let err: Box<dyn std::error::Error + Send + Sync> = Box::new(e);
                err
            })?;
        let json_value = platform_config::format::load_config(path).map_err(|e| {
            let err: Box<dyn std::error::Error + Send + Sync> = Box::new(e);
            err
        })?;

        serde_json::from_value(json_value)
            .map_err(|e| {
                let err_msg = format!("Failed to deserialize control-center config from {:?}: {}", path, e);
                Box::new(std::io::Error::new(std::io::ErrorKind::InvalidData, err_msg)) as Box<dyn std::error::Error + Send + Sync>
            })
        serde_json::from_value(json_value).map_err(|e| {
            let err_msg = format!(
                "Failed to deserialize control-center config from {:?}: {}",
                path, e
            );
            Box::new(std::io::Error::new(
                std::io::ErrorKind::InvalidData,
                err_msg,
            )) as Box<dyn std::error::Error + Send + Sync>
        })
    }
}
}

@ -269,9 +269,10 @@ impl From<std::io::Error> for ControlCenterError {

impl From<serde_json::Error> for ControlCenterError {
    fn from(error: serde_json::Error) -> Self {
        ControlCenterError::Infrastructure(InfrastructureError::Configuration(
            format!("JSON serialization error: {}", error)
        ))
        ControlCenterError::Infrastructure(InfrastructureError::Configuration(format!(
|
||||
"JSON serialization error: {}",
|
||||
error
|
||||
)))
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -1,6 +1,7 @@
# Hybrid Key Management System (KMS)

A comprehensive hybrid KMS system built for the control center, supporting local/remote/hybrid modes with intelligent caching, failover, and advanced security features.
A comprehensive hybrid KMS system built for the control center, supporting local/remote/hybrid modes
with intelligent caching, failover, and advanced security features.

## Architecture Overview

@@ -147,7 +148,7 @@ enable_pfs = true
allowed_algorithms = ["AES-256-GCM", "ChaCha20Poly1305", "RSA-4096", "ECDSA-P384"]
blocked_algorithms = ["DES", "3DES", "RC4", "MD5"]
policy_enforcement = "strict"
```
```text

## Usage Examples

@@ -212,7 +213,7 @@ if let Some(key_info) = kms.get_key(&stored_key_id).await? {
    println!("Key status: {:?}", key_info.status);
    println!("Created: {}", key_info.created_at);
}
```
```text

### Provider Credential Management

@@ -240,7 +241,7 @@ if let Some(creds) = kms.get_provider_credentials("aws").await? {
    // Credentials are automatically injected into environment variables
    // or configuration files based on the injection configuration
}
```
```text

### Health Monitoring

@@ -256,7 +257,7 @@ println!("Credentials Status: {}", health.credentials.healthy);
let cache_stats = kms.cache.stats().await;
println!("Cache hit rate: {:.2}%", cache_stats.hit_rate() * 100.0);
println!("Cache entries: {}", cache_stats.entry_count);
```
```text

## Integration with Existing System

@@ -270,7 +271,7 @@ export PROVISIONING_KMS_MODE=hybrid
export PROVISIONING_KMS_LOCAL_DATABASE_PATH=/var/lib/provisioning/kms.db
export PROVISIONING_KMS_REMOTE_SERVER_URL=https://kms.example.com:9998
export PROVISIONING_KMS_CACHE_ENABLED=true
```
```text

### TOML Configuration Integration

@@ -284,7 +285,7 @@ local.database_path = "{{paths.base}}/data/kms.db"
cache.enabled = true
cache.local_dir = "{{paths.base}}/cache/kms"
audit.enabled = true
```
```text

### Nushell Integration

@@ -304,7 +305,7 @@ def kms_health [] {
def kms_keys [] {
    http get http://localhost:8080/kms/keys | from json
}
```
```text

## Security Considerations

@@ -417,7 +418,7 @@ export PROVISIONING_LOG_LEVEL=debug

# Run with verbose output
./control-center --debug
```
```text

### Health Checks

@@ -429,7 +430,7 @@ curl http://localhost:8080/kms/health
curl http://localhost:8080/kms/health/backend
curl http://localhost:8080/kms/health/cache
curl http://localhost:8080/kms/health/rotation
```
```text

## Future Enhancements

@@ -448,4 +449,5 @@ curl http://localhost:8080/kms/health/rotation
3. **Federated KMS**: Multi-organization key sharing
4. **Blockchain Integration**: Immutable audit trails

This hybrid KMS system provides a solid foundation for secure key management in the control center architecture, with room for future enhancements and customization based on specific requirements.
This hybrid KMS system provides a solid foundation for secure key management in the control center architecture,
with room for future enhancements and customization based on specific requirements.

@@ -162,9 +162,7 @@ impl PolicyEngine {
        match Policy::parse(Some(PolicyId::new(metadata.id.clone())), &policy_content) {
            Ok(policy) => {
                policy_set.add(policy).map_err(|e| {
                    ControlCenterError::Policy(policy::PolicyError::Parsing(
                        e.to_string(),
                    ))
                    ControlCenterError::Policy(policy::PolicyError::Parsing(e.to_string()))
                })?;
                loaded_count += 1;
            }
@@ -195,9 +193,7 @@ impl PolicyEngine {

            if path.is_dir() {
                // Use Box::pin for recursive async calls to avoid infinite future size
                count += Box::pin(
                    self.load_policies_from_directory(policy_set, &path)
                ).await?;
                count += Box::pin(self.load_policies_from_directory(policy_set, &path)).await?;
            } else if path.extension().and_then(|s| s.to_str()) == Some("cedar") {
                count += self.load_policy_file(policy_set, &path).await?;
            }
@@ -279,21 +275,16 @@ impl PolicyEngine {
            .build_context(&context.environment)
            .await?;

        let context = Context::from_json_value(serde_json::Value::Object(context_map), None).map_err(|e| {
            ControlCenterError::Policy(policy::PolicyError::Evaluation(format!(
                "Context build error: {}",
                e
            )))
        })?;
        let context = Context::from_json_value(serde_json::Value::Object(context_map), None)
            .map_err(|e| {
                ControlCenterError::Policy(policy::PolicyError::Evaluation(format!(
                    "Context build error: {}",
                    e
                )))
            })?;

        let schema = self.build_schema().await?;
        Request::new(
            principal,
            action,
            resource,
            context,
            Some(&schema),
        ).map_err(|e| {
        Request::new(principal, action, resource, context, Some(&schema)).map_err(|e| {
            ControlCenterError::Policy(policy::PolicyError::Evaluation(format!(
                "Request validation error: {}",
                e
@@ -314,7 +305,8 @@ impl PolicyEngine {
        let schema_path = &self.config.policies.schema_dir;
        if schema_path.exists() {
            // Implementation would load actual schema files from the directory
            // For Cedar 4.8, we'd need to parse JSON schema files and construct Schema
            // For Cedar 4.8, we'd need to parse JSON schema files and construct
            // Schema
        }

        // Cedar 4.8 schema construction via JSON

@@ -283,7 +283,11 @@ impl PolicyTemplateManager {
    }

    /// Render policy from template
    pub fn render_policy(&mut self, template_id: &str, context: &TemplateContext) -> Result<String> {
    pub fn render_policy(
        &mut self,
        template_id: &str,
        context: &TemplateContext,
    ) -> Result<String> {
        let template = self.templates.get(template_id).ok_or_else(|| {
            ControlCenterError::Policy(policy::PolicyError::Parsing(format!(
                "Template not found: {}",

@@ -450,11 +450,7 @@ impl PolicyValidator {
        if !result.errors.is_empty() {
            report.push_str("ERRORS:\n");
            for error in &result.errors {
                report.push_str(&format!(
                    " - {}: {}\n",
                    error.error_type,
                    error.message
                ));
                report.push_str(&format!(" - {}: {}\n", error.error_type, error.message));
            }
            report.push('\n');
        }
@@ -464,9 +460,7 @@ impl PolicyValidator {
            for warning in &result.warnings {
                report.push_str(&format!(
                    " - [{}] {}: {}\n",
                    warning.severity,
                    warning.warning_type,
                    warning.message
                    warning.severity, warning.warning_type, warning.message
                ));
            }
            report.push('\n');
@@ -477,8 +471,7 @@ impl PolicyValidator {
            for suggestion in &result.suggestions {
                report.push_str(&format!(
                    " - {}: {}\n",
                    suggestion.suggestion_type,
                    suggestion.message
                    suggestion.suggestion_type, suggestion.message
                ));
            }
        }

@@ -12,7 +12,9 @@ use uuid::Uuid;

use crate::error::{policy, ControlCenterError, Result};
use crate::policies::PolicyMetadata;
use crate::storage::{PolicyStorage, PolicyVersion as StoragePolicyVersion, RollbackResult as StorageRollbackResult};
use crate::storage::{
    PolicyStorage, PolicyVersion as StoragePolicyVersion, RollbackResult as StorageRollbackResult,
};

/// Policy version information
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -83,7 +85,10 @@ pub struct RollbackResult {
}

/// Convert storage PolicyVersion to versioning PolicyVersion
fn convert_storage_version(storage_version: StoragePolicyVersion, metadata: Option<PolicyMetadata>) -> PolicyVersion {
fn convert_storage_version(
    storage_version: StoragePolicyVersion,
    metadata: Option<PolicyMetadata>,
) -> PolicyVersion {
    let metadata = metadata.unwrap_or_else(|| PolicyMetadata {
        id: storage_version.policy_id.clone(),
        name: String::new(),
@@ -106,11 +111,11 @@ fn convert_storage_version(storage_version: StoragePolicyVersion, metadata: Opti
        metadata,
        created_at: storage_version.created_at,
        created_by: "system".to_string(), // Storage doesn't track this
        change_summary: String::new(), // Storage doesn't track this
        change_summary: String::new(),   // Storage doesn't track this
        tags: storage_version.tags,
        is_active: storage_version.is_active,
        parent_version_id: None, // Storage doesn't track this
        checksum: String::new(), // Storage doesn't track this
        parent_version_id: None,         // Storage doesn't track this
        checksum: String::new(),         // Storage doesn't track this
    }
}

@@ -225,7 +230,8 @@ impl PolicyVersionManager {
        policy_id: &str,
        version_number: u32,
    ) -> Result<Option<PolicyVersion>> {
        let storage_version = self.storage
        let storage_version = self
            .storage
            .get_policy_version(policy_id, version_number)
            .await?;
        Ok(storage_version.map(|v| convert_storage_version(v, None)))
@@ -240,7 +246,10 @@ impl PolicyVersionManager {
    /// List all versions for a policy
    pub async fn list_versions(&self, policy_id: &str) -> Result<Vec<PolicyVersion>> {
        let storage_versions = self.storage.list_policy_versions(policy_id).await?;
        Ok(storage_versions.into_iter().map(|v| convert_storage_version(v, None)).collect())
        Ok(storage_versions
            .into_iter()
            .map(|v| convert_storage_version(v, None))
            .collect())
    }

    /// Get version history with changes
@@ -268,7 +277,9 @@ impl PolicyVersionManager {
                summary: version.change_summary.clone(),
                details: self.extract_change_details(
                    &version.content,
                    versions.get(i.saturating_sub(1)).map(|v| v.content.as_str()),
                    versions
                        .get(i.saturating_sub(1))
                        .map(|v| v.content.as_str()),
                ),
                impact_analysis,
            });
@@ -401,10 +412,14 @@ impl PolicyVersionManager {
        policy_id: &str,
        tag: &str,
    ) -> Result<Vec<PolicyVersion>> {
        let storage_versions = self.storage
        let storage_versions = self
            .storage
            .get_policy_versions_by_tag(policy_id, tag)
            .await?;
        Ok(storage_versions.into_iter().map(|v| convert_storage_version(v, None)).collect())
        Ok(storage_versions
            .into_iter()
            .map(|v| convert_storage_version(v, None))
            .collect())
    }

    /// Get latest version number

@@ -18,8 +18,8 @@ use serde::{Deserialize, Serialize};
// Import from surrealdb_storage (temporary placeholders)
pub use surrealdb_storage::{PolicyMetadata, PolicyVersion, RollbackResult};

use crate::simple_config::Config;
use crate::error::{infrastructure, ControlCenterError, Result};
use crate::simple_config::Config;

/// Policy storage trait
#[async_trait]

@@ -13,9 +13,9 @@ use serde::{Deserialize, Serialize};
use super::{
    ComplianceCheckResult, PolicyEvaluationEvent, PolicyMetrics, PolicySearchQuery, PolicyStorage,
};
use crate::simple_config::Config;
use crate::error::{auth, policy, ControlCenterError, Result};
use crate::services::secrets::SecretType;
use crate::simple_config::Config;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PolicyMetadata {

@@ -20,7 +20,8 @@ use control_center::auth::{
    AuthService,
};

/// Generate RSA key pair for testing (pre-generated to avoid rand_core conflict)
/// Generate RSA key pair for testing (pre-generated to avoid rand_core
/// conflict)
fn generate_test_keys() -> (Vec<u8>, Vec<u8>) {
    let private_pem = b"-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC7F43HxrVfJJ+k
@@ -61,10 +62,7 @@ WCemS72CI+y72SqwgaZ94eUO4WKV2agxmHeXQ8FhSd8ZFTJCTBKDW91Kykdo7yUx
IQIDAQAB
-----END PUBLIC KEY-----";

    (
        private_pem.to_vec(),
        public_pem.to_vec(),
    )
    (private_pem.to_vec(), public_pem.to_vec())
}

/// Create JWT service for testing

@@ -38,7 +38,7 @@ React/TypeScript frontend for the Control Center vault secrets management.
```bash
cd provisioning/platform/control-center/web
npm install
```plaintext
```text

## Development

@@ -57,7 +57,7 @@ npm run lint

# Format code
npm run format
```plaintext
```text

## Environment Variables

@@ -65,7 +65,7 @@ Create a `.env` file in the web directory:

```bash
REACT_APP_API_URL=http://localhost:8080
```plaintext
```text

## Usage

@@ -81,7 +81,7 @@ function App() {
    </div>
  );
}
```plaintext
```text

### API Client

@@ -119,7 +119,7 @@ const history = await secretsApi.getSecretHistory('database/prod/password');

// Restore version
await secretsApi.restoreSecretVersion('database/prod/password', 2);
```plaintext
```text

## Architecture

@@ -137,7 +137,7 @@ SecretsManager (Orchestrator)
KMS Service (Encryption)
    ↓
RustyVault (Storage)
```plaintext
```text

## Security


@@ -13,7 +13,8 @@ Base URL: `http://localhost:8082/api/v1`

## Authentication

The Extension Registry API does not require authentication for read operations. Backend authentication (Gitea/OCI) is handled server-side via configuration.
The Extension Registry API does not require authentication for read operations. Backend authentication (Gitea/OCI) is handled server-side via
configuration.

## Extension Endpoints

@@ -26,7 +27,7 @@ Retrieve a list of available extensions with optional filtering and pagination.
**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| ----------- | ------ | ---------- | ------------- |
| `type` | string | No | Filter by extension type: `provider`, `taskserv`, `cluster` |
| `source` | string | No | Filter by source: `gitea`, `oci` |
| `limit` | integer | No | Maximum results (default: 100, max: 1000) |
@@ -36,7 +37,7 @@ Retrieve a list of available extensions with optional filtering and pagination.

```bash
curl "http://localhost:8082/api/v1/extensions?type=provider&limit=10"
```plaintext
```text

**Example Response** (200 OK):

@@ -69,7 +70,7 @@ curl "http://localhost:8082/api/v1/extensions?type=provider&limit=10"
    "size": 890000
  }
]
```plaintext
```text

---

@@ -82,7 +83,7 @@ Retrieve detailed metadata for a specific extension.
**Path Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| ----------- | ------ | ---------- | ------------- |
| `type` | string | Yes | Extension type: `provider`, `taskserv`, `cluster` |
| `name` | string | Yes | Extension name |

@@ -90,7 +91,7 @@ Retrieve detailed metadata for a specific extension.

```bash
curl "http://localhost:8082/api/v1/extensions/provider/aws"
```plaintext
```text

**Example Response** (200 OK):

@@ -109,7 +110,7 @@ curl "http://localhost:8082/api/v1/extensions/provider/aws"
  "size": 1024000,
  "tags": ["cloud", "aws", "infrastructure"]
}
```plaintext
```text

**Error Response** (404 Not Found):

@@ -118,7 +119,7 @@ curl "http://localhost:8082/api/v1/extensions/provider/aws"
  "error": "not_found",
  "message": "Extension provider/nonexistent not found"
}
```plaintext
```text

---

@@ -131,7 +132,7 @@ Get all available versions for a specific extension.
**Path Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| ----------- | ------ | ---------- | ------------- |
| `type` | string | Yes | Extension type: `provider`, `taskserv`, `cluster` |
| `name` | string | Yes | Extension name |

@@ -139,7 +140,7 @@ Get all available versions for a specific extension.

```bash
curl "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/versions"
```plaintext
```text

**Example Response** (200 OK):

@@ -166,7 +167,7 @@ curl "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/versions"
    "size": 1950000
  }
]
```plaintext
```text

---

@@ -179,7 +180,7 @@ Download a specific version of an extension.
**Path Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| ----------- | ------ | ---------- | ------------- |
| `type` | string | Yes | Extension type: `provider`, `taskserv`, `cluster` |
| `name` | string | Yes | Extension name |
| `version` | string | Yes | Extension version (e.g., `1.2.0`) |
@@ -188,7 +189,7 @@ Download a specific version of an extension.

```bash
curl -OJ "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0"
```plaintext
```text

**Response**:

@@ -202,7 +203,7 @@ curl -OJ "http://localhost:8082/api/v1/extensions/provider/aws/1.2.0"
  "error": "not_found",
  "message": "Extension provider/aws version 1.2.0 not found"
}
```plaintext
```text

---

@@ -215,7 +216,7 @@ Search for extensions by name or description.
**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| ----------- | ------ | ---------- | ------------- |
| `q` | string | Yes | Search query (case-insensitive) |
| `type` | string | No | Filter by extension type |
| `limit` | integer | No | Maximum results (default: 50, max: 100) |
@@ -224,7 +225,7 @@ Search for extensions by name or description.

```bash
curl "http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv&limit=5"
```plaintext
```text

**Example Response** (200 OK):

@@ -249,7 +250,7 @@ curl "http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv&
    "published_at": "2025-09-20T14:30:00Z"
  }
]
```plaintext
```text

---

@@ -265,7 +266,7 @@ Check service health and backend status.

```bash
curl "http://localhost:8082/api/v1/health"
```plaintext
```text

**Example Response** (200 OK):

@@ -286,7 +287,7 @@ curl "http://localhost:8082/api/v1/health"
    }
  }
}
```plaintext
```text

**Degraded Status** (200 OK):

@@ -307,7 +308,7 @@ curl "http://localhost:8082/api/v1/health"
    }
  }
}
```plaintext
```text

---

@@ -321,7 +322,7 @@ Get Prometheus-formatted metrics.

```bash
curl "http://localhost:8082/api/v1/metrics"
```plaintext
```text

**Example Response** (200 OK):

@@ -349,7 +350,7 @@ cache_misses_total 247
# HELP extensions_total Total extensions
# TYPE extensions_total gauge
extensions_total 45
```plaintext
```text

---

@@ -363,7 +364,7 @@ Get cache performance statistics.

```bash
curl "http://localhost:8082/api/v1/cache/stats"
```plaintext
```text

**Example Response** (200 OK):

@@ -374,7 +375,7 @@ curl "http://localhost:8082/api/v1/cache/stats"
  "version_entries": 80,
  "total_entries": 245
}
```plaintext
```text

---

@@ -388,12 +389,12 @@ All error responses follow this format:
  "message": "Human-readable error message",
  "details": "Optional additional details"
}
```plaintext
```text

### HTTP Status Codes

| Status | Description |
|--------|-------------|
| -------- | ------------- |
| 200 OK | Request successful |
| 400 Bad Request | Invalid input (e.g., invalid extension type) |
| 401 Unauthorized | Authentication failed |
@@ -405,7 +406,7 @@ All error responses follow this format:
### Error Types

| Error Type | HTTP Status | Description |
|------------|-------------|-------------|
| ------------ | ------------- | ------------- |
| `not_found` | 404 | Extension or resource not found |
| `invalid_type` | 400 | Invalid extension type provided |
| `invalid_version` | 400 | Invalid version format |
@@ -435,19 +436,19 @@ interface Extension {
  size?: number; // Size in bytes
  tags?: string[]; // Tags
}
```plaintext
```text

### ExtensionType

```typescript
type ExtensionType = "provider" | "taskserv" | "cluster";
```plaintext
```text

### ExtensionSource

```typescript
type ExtensionSource = "gitea" | "oci";
```plaintext
```text

### ExtensionVersion

@@ -459,7 +460,7 @@ interface ExtensionVersion {
  checksum?: string; // Checksum
  size?: number; // Size in bytes
}
```plaintext
```text

### HealthResponse

@@ -470,7 +471,7 @@ interface HealthResponse {
  uptime: number; // Uptime in seconds
  backends: BackendHealth; // Backend health status
}
```plaintext
```text

### BackendHealth

@@ -479,7 +480,7 @@ interface BackendHealth {
  gitea: BackendStatus;
  oci: BackendStatus;
}
```plaintext
```text

### BackendStatus

@@ -489,7 +490,7 @@ interface BackendStatus {
  healthy: boolean; // Backend healthy
  error?: string; // Error message if unhealthy
}
```plaintext
```text

---

@@ -512,9 +513,9 @@ The service implements LRU caching with TTL:
- **Cache TTL**: Configurable (default: 5 minutes)
- **Cache Capacity**: Configurable (default: 1000 entries)
- **Cache Keys**:
  - List: `list:{type}:{source}`
  - Metadata: `{type}/{name}`
  - Versions: `{type}/{name}/versions`
  - List: `list:{type}:{source}`
  - Metadata: `{type}/{name}`
  - Versions: `{type}/{name}/versions`

Cache headers are not currently exposed. Future versions may include:

@@ -550,7 +551,7 @@ curl -OJ "http://localhost:8082/api/v1/extensions/taskserv/kubernetes/1.28.0"

# 5. Verify checksum (if provided)
sha256sum kubernetes_taskserv.tar.gz
```plaintext
```text

### Pagination

@@ -563,7 +564,7 @@ curl "http://localhost:8082/api/v1/extensions?limit=10&offset=10"

# Get third page
curl "http://localhost:8082/api/v1/extensions?limit=10&offset=20"
```plaintext
```text

### Filtering

@@ -576,7 +577,7 @@ curl "http://localhost:8082/api/v1/extensions?type=taskserv&source=oci"

# All clusters
curl "http://localhost:8082/api/v1/extensions?type=cluster"
```plaintext
```text

---


@ -1,6 +1,7 @@
|
||||
# Extension Registry Service
|
||||
|
||||
A high-performance Rust microservice that provides a unified REST API for extension discovery, versioning, and download from multiple sources (Gitea releases and OCI registries).
|
||||
A high-performance Rust microservice that provides a unified REST API for extension discovery, versioning,
|
||||
and download from multiple sources (Gitea releases and OCI registries).
|
||||
|
||||
## Features
|
||||
|
||||
@ -16,24 +17,29 @@ A high-performance Rust microservice that provides a unified REST API for extens
|
||||
## Architecture
|
||||
|
||||
```plaintext
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
┌─────────────────────────────────────────────────
|
||||
────────────┐
|
||||
│ Extension Registry API │
|
||||
│ (axum) │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
├─────────────────────────────────────────────────
|
||||
────────────┤
|
||||
│ │
|
||||
│ ┌────────────────┐ ┌────────────────┐ ┌──────────────┐ │
|
||||
│ ┌────────────────┐ ┌────────────────┐
|
||||
┌──────────────┐ │
|
||||
│ │ Gitea Client │ │ OCI Client │ │ LRU Cache │ │
|
||||
│ │ (reqwest) │ │ (reqwest) │ │ (parking) │ │
|
||||
│ └────────────────┘ └────────────────┘ └──────────────┘ │
|
||||
│ └────────────────┘ └────────────────┘
|
||||
└──────────────┘ │
|
||||
│ │ │ │ │
|
||||
└─────────┼────────────────────┼────────────────────┼─────────┘
|
||||
└─────────┼────────────────────┼──────────────────
|
||||
──┼─────────┘
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌──────────┐ ┌──────────┐ ┌──────────┐
|
||||
│ Gitea │ │ OCI │ │ Memory │
|
||||
│ Releases │ │ Registry │ │ │
|
||||
└──────────┘ └──────────┘ └──────────┘
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
## Installation
|
||||
|
||||
@ -42,19 +48,19 @@ A high-performance Rust microservice that provides a unified REST API for extens
|
||||
```bash
|
||||
cd provisioning/platform/extension-registry
|
||||
cargo build --release
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
### Docker Build
|
||||
|
||||
```bash
|
||||
docker build -t extension-registry:latest .
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
### Running with Cargo
|
||||
|
||||
```bash
|
||||
cargo run -- --config config.toml --port 8082
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
### Running with Docker
|
||||
|
||||
@ -64,7 +70,7 @@ docker run -d \
|
||||
-v $(pwd)/config.toml:/app/config.toml:ro \
|
||||
-v $(pwd)/tokens:/app/tokens:ro \
|
||||
extension-registry:latest
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
## Configuration
|
||||
|
||||
@ -100,7 +106,7 @@ capacity = 1000
|
||||
ttl_seconds = 300
|
||||
enable_metadata_cache = true
|
||||
enable_list_cache = true
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
**Note**: At least one backend (Gitea or OCI) must be configured.
|
||||
|
||||
@ -112,7 +118,7 @@ enable_list_cache = true
|
||||
|
||||
```bash
|
||||
GET /api/v1/extensions
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Query parameters:
|
||||
|
||||
@ -125,7 +131,7 @@ Example:
|
||||
|
||||
```bash
|
||||
curl http://localhost:8082/api/v1/extensions?type=provider&limit=10
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Response:
|
||||
|
||||
@ -144,31 +150,31 @@ Response:
|
||||
"size": 1024000
|
||||
}
|
||||
]
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
#### Get Extension
|
||||
|
||||
```bash
|
||||
GET /api/v1/extensions/{type}/{name}
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
curl http://localhost:8082/api/v1/extensions/provider/aws
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
#### List Versions
|
||||
|
||||
```bash
|
||||
GET /api/v1/extensions/{type}/{name}/versions
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
curl http://localhost:8082/api/v1/extensions/provider/aws/versions
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Response:
|
||||
|
||||
@ -187,19 +193,19 @@ Response:
|
||||
"size": 980000
|
||||
}
|
||||
]
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
#### Download Extension
|
||||
|
||||
```bash
|
||||
GET /api/v1/extensions/{type}/{name}/{version}
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
curl -O http://localhost:8082/api/v1/extensions/provider/aws/1.2.0
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
Returns binary data with `Content-Type: application/octet-stream`.
|
||||
|
||||
@@ -207,7 +213,7 @@ Returns binary data with `Content-Type: application/octet-stream`.

```bash
GET /api/v1/extensions/search?q={query}
```plaintext
```text

Query parameters:

@@ -219,7 +225,7 @@ Example:

```bash
curl http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv
```plaintext
```text

### System Endpoints

@@ -227,13 +233,13 @@ curl http://localhost:8082/api/v1/extensions/search?q=kubernetes&type=taskserv

```bash
GET /api/v1/health
```plaintext
```text

Example:

```bash
curl http://localhost:8082/api/v1/health
```plaintext
```text

Response:

@@ -253,13 +259,13 @@ Response:
}
}
}
```plaintext
```text

#### Metrics

```bash
GET /api/v1/metrics
```plaintext
```text

Returns Prometheus-formatted metrics:

@@ -275,13 +281,13 @@ cache_hits_total 567
# HELP cache_misses_total Total cache misses
# TYPE cache_misses_total counter
cache_misses_total 123
```plaintext
```text

#### Cache Statistics

```bash
GET /api/v1/cache/stats
```plaintext
```text

Response:

@@ -292,7 +298,7 @@ Response:
"version_entries": 80,
"total_entries": 245
}
```plaintext
```text

## Extension Naming Conventions

@@ -351,7 +357,7 @@ Error response format:
"error": "not_found",
"message": "Extension provider/nonexistent not found"
}
```plaintext
```text

## Metrics and Monitoring

@@ -403,7 +409,7 @@ extension-registry/
│   └── integration_test.rs # Integration tests
├── Dockerfile # Docker build
└── README.md # This file
```plaintext
```text

### Running Tests

@@ -416,7 +422,7 @@ cargo test -- --nocapture

# Run specific test
cargo test test_health_check
```plaintext
```text

### Code Quality

@@ -429,7 +435,7 @@ cargo clippy

# Check for security vulnerabilities
cargo audit
```plaintext
```text

## Deployment

@@ -452,7 +458,7 @@ RestartSec=5s

[Install]
WantedBy=multi-user.target
```plaintext
```text

Enable and start:

@@ -460,7 +466,7 @@ Enable and start:
sudo systemctl enable extension-registry
sudo systemctl start extension-registry
sudo systemctl status extension-registry
```plaintext
```text

### Docker Compose

@@ -482,7 +488,7 @@ services:
timeout: 3s
retries: 3
start_period: 5s
```plaintext
```text

### Kubernetes Deployment

@@ -543,7 +549,7 @@ spec:
- port: 8082
targetPort: 8082
type: ClusterIP
```plaintext
```text

## Security

@@ -608,13 +614,13 @@ Enable debug logging:

```bash
extension-registry --log-level debug
```plaintext
```text

Enable JSON logging for structured logs:

```bash
extension-registry --json-log
```plaintext
```text

## License

@@ -4,14 +4,13 @@ use std::sync::Arc;

use tracing::info;

use super::traits::{DistributionClient, SourceClient};
use super::{ForgejoClient, GitHubClient};
use crate::config::Config;
use crate::error::{RegistryError, Result};
use crate::gitea::GiteaClient as GiteaClientImpl;
use crate::oci::OciClient as OciClientImpl;

use super::traits::{DistributionClient, SourceClient};
use super::{ForgejoClient, GitHubClient};

/// Factory for creating extension clients
pub struct ClientFactory;

@@ -19,10 +18,7 @@ impl ClientFactory {
/// Create all configured clients from configuration
pub fn create_from_config(
config: &Config,
) -> Result<(
Vec<Arc<dyn SourceClient>>,
Vec<Arc<dyn DistributionClient>>,
)> {
) -> Result<(Vec<Arc<dyn SourceClient>>, Vec<Arc<dyn DistributionClient>>)> {
let mut source_clients: Vec<Arc<dyn SourceClient>> = Vec::new();
let mut distribution_clients: Vec<Arc<dyn DistributionClient>> = Vec::new();

@@ -8,11 +8,10 @@
use async_trait::async_trait;
use bytes::Bytes;

use super::traits::{BackendType, ExtensionClient, ReleaseInfo, SourceClient};
use crate::error::Result;
use crate::models::{Extension, ExtensionType, ExtensionVersion};

use super::traits::{BackendType, ExtensionClient, ReleaseInfo, SourceClient};

/// Forgejo client (wraps GiteaClient since Forgejo API is Gitea-compatible)
pub struct ForgejoClient {
backend_id: String,
@@ -44,7 +43,10 @@ impl ExtensionClient for ForgejoClient {
BackendType::Forgejo
}

async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> {
async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
self.inner.list_extensions(extension_type).await
}

@@ -66,7 +68,9 @@ impl ExtensionClient for ForgejoClient {
name: &str,
version: &str,
) -> Result<Bytes> {
self.inner.download_extension(extension_type, name, version).await
self.inner
.download_extension(extension_type, name, version)
.await
}

async fn health_check(&self) -> Result<()> {
@@ -77,7 +81,11 @@ impl ExtensionClient for ForgejoClient {
/// SourceClient trait implementation for Forgejo
#[async_trait]
impl SourceClient for ForgejoClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> {
async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
self.inner.get_repository_url(extension_type, name).await
}

@@ -1,19 +1,20 @@
//! GitHub client implementation
//!
//! Integrates with GitHub Releases API to fetch provisioning extensions.
//! Extensions are identified as releases within repositories under a GitHub organization.
//! Extensions are identified as releases within repositories under a GitHub
//! organization.

use std::time::Duration;

use async_trait::async_trait;
use bytes::Bytes;
use reqwest::Client;
use std::time::Duration;
use tracing::debug;

use super::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};
use crate::error::{RegistryError, Result};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};

use super::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};

/// GitHub configuration (reused GiteaConfig for compatibility)
pub struct GitHubClient {
backend_id: String,
@@ -121,7 +122,10 @@ impl ExtensionClient for GitHubClient {
BackendType::GitHub
}

async fn list_extensions(&self, _extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> {
async fn list_extensions(
&self,
_extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
debug!(
"Fetching repositories for GitHub organization: {}",
self.organization
@@ -135,9 +139,14 @@ impl ExtensionClient for GitHubClient {

async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension> {
let repo_name = self.format_repo_name(extension_type, name);
debug!("Fetching GitHub extension: {}/{}", self.organization, repo_name);
debug!(
"Fetching GitHub extension: {}/{}",
self.organization, repo_name
);

let releases = self.list_releases_internal(&self.organization, &repo_name).await?;
let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;

let latest = releases
.iter()
@@ -181,9 +190,14 @@ impl ExtensionClient for GitHubClient {
name: &str,
) -> Result<Vec<ExtensionVersion>> {
let repo_name = self.format_repo_name(extension_type, name);
debug!("Fetching GitHub versions for: {}/{}", self.organization, repo_name);
debug!(
"Fetching GitHub versions for: {}/{}",
self.organization, repo_name
);

let releases = self.list_releases_internal(&self.organization, &repo_name).await?;
let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;

Ok(releases
.into_iter()
@@ -210,7 +224,9 @@ impl ExtensionClient for GitHubClient {
self.organization, repo_name, version
);

let releases = self.list_releases_internal(&self.organization, &repo_name).await?;
let releases = self
.list_releases_internal(&self.organization, &repo_name)
.await?;

let release = releases
.iter()
@@ -274,7 +290,11 @@ impl ExtensionClient for GitHubClient {
/// SourceClient trait implementation for GitHub
#[async_trait]
impl SourceClient for GitHubClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> {
async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
let repo_name = self.format_repo_name(extension_type, name);
Ok(format!(
"https://github.com/{}/{}",
@@ -283,7 +303,9 @@ impl SourceClient for GitHubClient {
}

async fn list_releases(&self, repo_name: &str) -> Result<Vec<ReleaseInfo>> {
let releases = self.list_releases_internal(&self.organization, repo_name).await?;
let releases = self
.list_releases_internal(&self.organization, repo_name)
.await?;
Ok(releases
.into_iter()
.map(|r| ReleaseInfo {
@@ -305,7 +327,9 @@ impl SourceClient for GitHubClient {
}

async fn get_release_notes(&self, repo_name: &str, version: &str) -> Result<String> {
let releases = self.list_releases_internal(&self.organization, repo_name).await?;
let releases = self
.list_releases_internal(&self.organization, repo_name)
.await?;
let release = releases
.iter()
.find(|r| r.tag_name == version)

@@ -33,7 +33,10 @@ pub trait ExtensionClient: Send + Sync {
fn backend_type(&self) -> BackendType;

/// List all extensions available from this backend
async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>>;
async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>>;

/// Get specific extension metadata
async fn get_extension(&self, extension_type: ExtensionType, name: &str) -> Result<Extension>;
@@ -65,7 +68,8 @@ pub trait ExtensionClient: Send + Sync {
#[async_trait]
pub trait SourceClient: ExtensionClient {
/// Get repository URL for an extension
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String>;
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str)
-> Result<String>;

/// List all releases for a repository
///

@@ -6,11 +6,13 @@ use reqwest::Client;
use tracing::debug;
use url::Url;

use crate::client::traits::{
BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient,
};
use crate::config::GiteaConfig;
use crate::error::{RegistryError, Result};
use crate::gitea::models::{GiteaRelease, GiteaRepository};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};
use crate::client::traits::{BackendType, ExtensionClient, ReleaseAsset, ReleaseInfo, SourceClient};

/// Gitea API client
pub struct GiteaClient {
@@ -432,7 +434,10 @@ impl ExtensionClient for GiteaClient {
BackendType::Gitea
}

async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> {
async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
GiteaClient::list_extensions(self, extension_type).await
}

@@ -465,7 +470,11 @@ impl ExtensionClient for GiteaClient {
/// SourceClient trait implementation for Gitea
#[async_trait]
impl SourceClient for GiteaClient {
async fn get_repository_url(&self, extension_type: ExtensionType, name: &str) -> Result<String> {
async fn get_repository_url(
&self,
extension_type: ExtensionType,
name: &str,
) -> Result<String> {
let repo_name = self.format_repo_name(extension_type, name);
let repo = self.get_repository(&repo_name).await?;
Ok(repo.html_url)

@@ -6,11 +6,13 @@ use chrono::Utc;
use reqwest::Client;
use tracing::debug;

use crate::client::traits::{
BackendType, DistributionClient, ExtensionClient, LayerInfo, ManifestInfo,
};
use crate::config::OciConfig;
use crate::error::{RegistryError, Result};
use crate::models::{Extension, ExtensionSource, ExtensionType, ExtensionVersion};
use crate::oci::models::{OciCatalog, OciManifest, OciTagsList};
use crate::client::traits::{BackendType, DistributionClient, ExtensionClient, LayerInfo, ManifestInfo};

/// OCI registry client
pub struct OciClient {
@@ -416,7 +418,10 @@ impl ExtensionClient for OciClient {
BackendType::Oci
}

async fn list_extensions(&self, extension_type: Option<ExtensionType>) -> Result<Vec<Extension>> {
async fn list_extensions(
&self,
extension_type: Option<ExtensionType>,
) -> Result<Vec<Extension>> {
OciClient::list_extensions(self, extension_type).await
}

@@ -451,8 +456,8 @@ impl ExtensionClient for OciClient {
impl DistributionClient for OciClient {
async fn get_manifest(&self, repo_name: &str, tag: &str) -> Result<ManifestInfo> {
let manifest = self.get_manifest(repo_name, tag).await?;
let total_size: u64 = manifest.layers.iter().map(|l| l.size).sum::<u64>()
+ manifest.config.size;
let total_size: u64 =
manifest.layers.iter().map(|l| l.size).sum::<u64>() + manifest.config.size;

Ok(ManifestInfo {
config_digest: manifest.config.digest,

@@ -1,8 +1,8 @@
use axum::body::Body;
use axum::http::{Request, StatusCode};
use extension_registry::{build_routes, AppState, Config};
use tower::ServiceExt;
use http_body_util::BodyExt;
use tower::ServiceExt;

#[tokio::test]
async fn test_health_check() {

@@ -2,7 +2,9 @@

## Overview

A **Rust-native Model Context Protocol (MCP) server** for infrastructure automation and AI-assisted DevOps operations. This replaces the Python implementation, providing significant performance improvements and maintaining philosophical consistency with the Rust ecosystem approach.
A **Rust-native Model Context Protocol (MCP) server** for infrastructure automation and AI-assisted DevOps operations.
This replaces the Python implementation, providing significant performance improvements and maintaining philosophical consistency
with the Rust ecosystem approach.

## ✅ Project Status: **PROOF OF CONCEPT COMPLETE**

@@ -37,7 +39,7 @@ A **Rust-native Model Context Protocol (MCP) server** for infrastructure automat
• Configuration access: Microsecond latency
• Memory efficient: Small struct footprint
• Zero-copy string operations where possible
```plaintext
```text

### 🏗️ Architecture

@@ -51,7 +53,7 @@ src/
├── tools.rs # AI-powered parsing tools
├── errors.rs # Error handling
└── performance_test.rs # Performance benchmarking
```plaintext
```text

### 🎲 Key Features

@@ -64,7 +66,7 @@ src/
### 📊 Rust vs Python Comparison

| Metric | Python MCP Server | Rust MCP Server | Improvement |
|--------|------------------|-----------------|-------------|
| -------- | ------------------ | ----------------- | ------------- |
| **Startup Time** | ~500ms | ~50ms | **10x faster** |
| **Memory Usage** | ~50MB | ~5MB | **10x less** |
| **Parsing Latency** | ~1ms | ~0.001ms | **1000x faster** |
@@ -85,7 +87,7 @@ cargo test

# Run benchmarks
cargo run --bin provisioning-mcp-server --release
```plaintext
```text

### 🔧 Configuration

@@ -96,7 +98,7 @@ export PROVISIONING_PATH=/path/to/provisioning
export PROVISIONING_AI_PROVIDER=openai
export OPENAI_API_KEY=your-key
export PROVISIONING_DEBUG=true
```plaintext
```text

### 📈 Integration Benefits

@@ -1,6 +1,7 @@
# Provisioning Orchestrator

A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration tools.
A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration
tools.

## Architecture

@@ -36,7 +37,7 @@ The orchestrator implements a hybrid multi-storage approach:
cd src/orchestrator
cargo build --release
cargo run -- --port 8080 --data-dir ./data
```plaintext
```text

**With SurrealDB Support**:

@@ -51,7 +52,7 @@ cargo run --features surrealdb -- --storage-type surrealdb-embedded --data-dir .
cargo run --features surrealdb -- --storage-type surrealdb-server \
--surrealdb-url ws://localhost:8000 \
--surrealdb-username admin --surrealdb-password secret
```plaintext
```text

### Submit a Server Creation Workflow

@@ -65,19 +66,19 @@ curl -X POST http://localhost:8080/workflows/servers/create \
"check_mode": false,
"wait": true
}'
```plaintext
```text

### Check Task Status

```bash
curl http://localhost:8080/tasks/{task_id}
```plaintext
```text

### List All Tasks

```bash
curl http://localhost:8080/tasks
```plaintext
```text

## API Endpoints

@@ -107,7 +108,8 @@ curl http://localhost:8080/tasks

## Test Environment Service

The orchestrator includes a comprehensive test environment service for automated containerized testing of taskservs, complete servers, and multi-node clusters.
The orchestrator includes a comprehensive test environment service for automated containerized testing
of taskservs, complete servers, and multi-node clusters.

### Overview

@@ -142,7 +144,7 @@ curl -X POST http://localhost:8080/test/environments/create \
"auto_start": true,
"auto_cleanup": false
}'
```plaintext
```text

#### 2. Server Simulation

@@ -161,7 +163,7 @@ curl -X POST http://localhost:8080/test/environments/create \
"infra": "prod-stack",
"auto_start": true
}'
```plaintext
```text

#### 3. Cluster Topology

@@ -202,7 +204,7 @@ curl -X POST http://localhost:8080/test/environments/create \
},
"auto_start": true
}'
```plaintext
```text

### Nushell CLI Integration

@@ -232,7 +234,7 @@ provisioning test env logs <env-id>

# Cleanup
provisioning test env cleanup <env-id>
```plaintext
```text

### Topology Templates

@@ -274,7 +276,7 @@ Isolated Test Containers
• Resource limits
• Volume mounts
• Multi-node support
```plaintext
```text

### Key Components

@@ -311,7 +313,7 @@ test-infrastructure:
- provisioning test quick kubernetes
- provisioning test quick postgres
- provisioning test quick redis
```plaintext
```text

### Documentation

@@ -342,7 +344,7 @@ For complete usage guide and examples, see:
### Storage Backend Comparison

| Feature | Filesystem | SurrealDB Embedded | SurrealDB Server |
|---------|------------|-------------------|------------------|
| --------- | ------------ | ------------------- | ------------------ |
| **Dependencies** | None | Local database | Remote server |
| **Auth/RBAC** | Basic | Advanced | Advanced |
| **Real-time** | No | Yes | Yes |
@@ -364,7 +366,7 @@ workflow status $task_id

# List all workflows
workflow list
```plaintext
```text

## Task States

@@ -415,7 +417,7 @@ Seamless migration between storage backends:

# Validate migration setup
./scripts/migrate-storage.nu validate --from filesystem --to surrealdb-server
```plaintext
```text

## Error Handling

@@ -448,8 +450,8 @@ Seamless migration between storage backends:
**Optional Dependencies** (feature-gated):

- **surrealdb**: Multi-model database (requires `--features surrealdb`)
- Embedded mode: RocksDB storage engine
- Server mode: WebSocket/HTTP client
- Embedded mode: RocksDB storage engine
- Server mode: WebSocket/HTTP client

### Adding New Workflows

@@ -473,7 +475,7 @@ cargo test --features surrealdb
cargo test --test storage_integration
cargo test --test migration_tests
cargo test --test factory_tests
```plaintext
```text

**Performance Benchmarks**:

@@ -487,7 +489,7 @@ cargo bench --bench migration_benchmarks
# Generate HTML reports
cargo bench --features surrealdb
open target/criterion/reports/index.html
```plaintext
```text

**Test Configuration**:

@@ -498,7 +500,7 @@ TEST_STORAGE=surrealdb-embedded cargo test --features surrealdb

# Verbose testing
cargo test -- --nocapture
```plaintext
```text

## Migration from Deep Call Stack Issues

@@ -509,4 +511,5 @@ This orchestrator solves the Nushell deep call stack limitations by:
3. Managing parallel execution externally
4. Preserving all existing business logic in Nushell

The existing `on_create_servers` function can be replaced with `on_create_servers_workflow` for orchestrated execution while maintaining full compatibility.
The existing `on_create_servers` function can be replaced with `on_create_servers_workflow` for orchestrated execution while maintaining full
compatibility.

@@ -32,7 +32,7 @@ The DNS integration module provides automatic DNS registration and management fo
│ CoreDNS │
│ Service │
└─────────────────┘
```plaintext
```text

## Features

@@ -44,7 +44,7 @@ When a server is created, the orchestrator automatically registers its DNS recor
// In server creation workflow
let ip = server.get_ip_address();
state.dns_manager.register_server_dns(&hostname, ip).await?;
```plaintext
```text

### 2. DNS Record Types

@@ -64,7 +64,7 @@ let verified = state.dns_manager.verify_dns_resolution("server.example.com").awa
if verified {
info!("DNS resolution verified");
}
```plaintext
```text

### 4. Automatic Cleanup

@@ -72,7 +72,7 @@ When a server is deleted, DNS records are automatically removed:

```rust
state.dns_manager.unregister_server_dns(&hostname).await?;
```plaintext
```text

## Configuration

@@ -83,7 +83,7 @@ DNS settings in `config.defaults.toml`:
coredns_url = "http://localhost:53"
auto_register = true
ttl = 300
```plaintext
```text

### Configuration Options

@@ -97,7 +97,7 @@ ttl = 300

```http
GET /api/v1/dns/records
```plaintext
```text

**Response:**

@@ -113,7 +113,7 @@ GET /api/v1/dns/records
}
]
}
```plaintext
```text

## Usage Examples

@@ -124,20 +124,20 @@ use std::net::IpAddr;

let ip: IpAddr = "192.168.1.10".parse()?;
dns_manager.register_server_dns("web-01.example.com", ip).await?;
```plaintext
```text

### Unregister Server DNS

```rust
dns_manager.unregister_server_dns("web-01.example.com").await?;
```plaintext
```text

### Update DNS Record

```rust
let new_ip: IpAddr = "192.168.1.20".parse()?;
dns_manager.update_dns_record("web-01.example.com", new_ip).await?;
```plaintext
```text

### List All Records

@@ -146,7 +146,7 @@ let records = dns_manager.list_records().await?;
for record in records {
println!("{} -> {} ({})", record.name, record.value, record.record_type);
}
```plaintext
```text

## Integration with Workflows

@@ -180,7 +180,7 @@ Run DNS integration tests:
```bash
cd provisioning/platform/orchestrator
cargo test test_dns_integration
```plaintext
```text

## Troubleshooting

@@ -32,7 +32,7 @@ The extension loading module provides dynamic loading of providers, taskservs, a
│ Nushell Scripts │
│ (module load) │
└──────────────────┘
```plaintext
```text

## Extension Types

@@ -46,7 +46,7 @@ let provider = extension_manager.load_extension(
"aws".to_string(),
Some("2.0.0".to_string())
).await?;
```plaintext
```text

### 2. Taskservs

@@ -58,7 +58,7 @@ let taskserv = extension_manager.load_extension(
"kubernetes".to_string(),
None // Load latest version
).await?;
```plaintext
```text

### 3. Clusters

@@ -70,7 +70,7 @@ let cluster = extension_manager.load_extension(
"buildkit".to_string(),
Some("1.0.0".to_string())
).await?;
```plaintext
```text

## Features

@@ -102,7 +102,7 @@ pub struct ExtensionMetadata {
pub author: Option<String>,
pub repository: Option<String>,
}
```plaintext
```text

### Version Management

@@ -122,7 +122,7 @@ let ext = extension_manager.load_extension(
"kubernetes".to_string(),
None
).await?;
```plaintext
```text

## Configuration

@@ -132,7 +132,7 @@ Extension settings in `config.defaults.toml`:
[orchestrator.extensions]
auto_load = true
cache_dir = "{{orchestrator.paths.data_dir}}/extensions"
```plaintext
```text

### Configuration Options

@@ -145,7 +145,7 @@ cache_dir = "{{orchestrator.paths.data_dir}}/extensions"

```http
GET /api/v1/extensions/loaded
```plaintext
```text

**Response:**

@@ -168,7 +168,7 @@ GET /api/v1/extensions/loaded
}
]
}
```plaintext
```text

### Reload Extension

@@ -180,7 +180,7 @@ Content-Type: application/json
"extension_type": "taskserv",
"name": "kubernetes"
}
```plaintext
```text

**Response:**

@@ -189,7 +189,7 @@ Content-Type: application/json
"success": true,
"data": "Extension kubernetes reloaded"
}
```plaintext
```text

## Usage Examples

@@ -210,7 +210,7 @@ let extension = manager.load_extension(
).await?;

println!("Loaded: {} v{}", extension.metadata.name, extension.metadata.version);
```plaintext
```text

### List Loaded Extensions

@@ -223,7 +223,7 @@ for ext in extensions {
ext.loaded_at
);
}
```plaintext
```text

### Reload Extension

@@ -232,7 +232,7 @@ let extension = manager.reload_extension(
ExtensionType::Taskserv,
"kubernetes".to_string()
).await?;
```plaintext
```text

### Check if Loaded

@@ -250,13 +250,13 @@ if !is_loaded {
None
).await?;
}
```plaintext
```text

### Clear Cache

```rust
manager.clear_cache().await;
```plaintext
```text

## Integration with Workflows

@@ -282,7 +282,7 @@ for dep in &extension.metadata.dependencies {
}

// Continue with installation...
```plaintext
```text

## Nushell Integration

@@ -297,7 +297,7 @@ provisioning module discover taskserv --output json

# Get extension metadata
provisioning module discover taskserv --name kubernetes --output json
```plaintext
```text

## Error Handling

@@ -315,7 +315,7 @@ Run extension loading tests:
```bash
cd provisioning/platform/orchestrator
cargo test test_extension_loading
```plaintext
```text

## Troubleshooting

@@ -357,7 +357,7 @@ let total_loads = metrics.total_extension_loads;
let cache_hits = metrics.cache_hits;
let hit_ratio = cache_hits as f64 / total_loads as f64;
println!("Cache hit ratio: {:.2}%", hit_ratio * 100.0);
```plaintext
```text

### Loading Time

@ -32,7 +32,7 @@ The OCI integration module provides OCI Distribution Spec v2 compliant registry
│   OCI Registry   │
│  (HTTP API v2)   │
└──────────────────┘
```plaintext
```text

## Features

@ -45,7 +45,7 @@ let package_path = oci_manager.pull_kcl_package(
"provisioning-core",
"1.0.0"
).await?;
```plaintext
```text

### 2. Extension Artifacts

@ -57,7 +57,7 @@ let artifact_path = oci_manager.pull_extension_artifact(
"kubernetes", // Extension name
"1.28.0" // Version
).await?;
```plaintext
```text

### 3. Manifest Caching

@ -76,7 +76,7 @@ let artifacts = oci_manager.list_oci_artifacts("kcl").await?;
for artifact in artifacts {
println!("{} v{} ({})", artifact.name, artifact.version, artifact.size);
}
```plaintext
```text

## OCI Distribution Spec v2

@ -96,7 +96,7 @@ OCI settings in `config.defaults.toml`:
registry_url = "http://localhost:5000"
namespace = "provisioning-extensions"
cache_dir = "{{orchestrator.paths.data_dir}}/oci-cache"
```plaintext
```text

### Configuration Options

@ -115,7 +115,7 @@ Content-Type: application/json
{
"namespace": "kcl"
}
```plaintext
```text

**Response:**

@ -133,7 +133,7 @@ Content-Type: application/json
}
]
}
```plaintext
```text

## Usage Examples

@ -159,7 +159,7 @@ println!("Package downloaded to: {}", package_path.display());

// Extract package
// tar -xzf package_path
```plaintext
```text

### Pull Extension Artifact

@ -173,7 +173,7 @@ let artifact_path = oci_manager.pull_extension_artifact(

// Extract and install
// tar -xzf artifact_path -C /target/path
```plaintext
```text

### List Artifacts

@ -186,7 +186,7 @@ for artifact in artifacts {
println!(" Digest: {}", artifact.digest);
println!();
}
```plaintext
```text

### Check Artifact Exists

@ -201,7 +201,7 @@ if exists {
} else {
println!("Artifact not found");
}
```plaintext
```text

### Get Manifest (with caching)

@ -214,13 +214,13 @@ let manifest = oci_manager.get_manifest(
println!("Schema version: {}", manifest.schema_version);
println!("Media type: {}", manifest.media_type);
println!("Layers: {}", manifest.layers.len());
```plaintext
```text

### Clear Manifest Cache

```rust
oci_manager.clear_cache().await;
```plaintext
```text

## OCI Artifact Structure

@ -247,7 +247,7 @@ oci_manager.clear_cache().await;
"org.opencontainers.image.version": "1.0.0"
}
}
```plaintext
```text

## Integration with Workflows

@ -284,7 +284,7 @@ async fn install_taskserv_from_oci(

Ok(())
}
```plaintext
```text

## Cache Management

@ -305,7 +305,7 @@ async fn install_taskserv_from_oci(
│       └── aws/
│           └── 2.0.0/
│               └── artifact.tar.gz
```plaintext
```text

### Cache Cleanup

@ -330,7 +330,7 @@ async fn cleanup_old_artifacts(cache_dir: &Path, max_age_days: u64) -> Result<()

Ok(())
}
```plaintext
```text

## Error Handling

@ -348,7 +348,7 @@ Run OCI integration tests:
```bash
cd provisioning/platform/orchestrator
cargo test test_oci_integration
```plaintext
```text

## Troubleshooting

@ -32,7 +32,7 @@ The service orchestration module manages platform services with dependency-based
│  Platform Services   │
│ (CoreDNS, OCI, etc)  │
└──────────────────────┘
```plaintext
```text

## Features

@ -46,7 +46,7 @@ let order = service_orchestrator.resolve_startup_order(&[
]).await?;

// Returns: ["service-a", "service-b", "service-c"]
```plaintext
```text

### 2. Automatic Dependency Startup

@ -57,7 +57,7 @@ When enabled, dependencies are started automatically:
service_orchestrator.start_service("web-app").await?;

// Automatically starts: database -> cache -> web-app
```plaintext
```text

### 3. Health Checking

@ -69,7 +69,7 @@ let health = service_orchestrator.check_service_health("web-app").await?;
if health.healthy {
println!("Service is healthy: {}", health.message);
}
```plaintext
```text

### 4. Service Status

@ -84,7 +84,7 @@ match status {
ServiceStatus::Failed => println!("Service has failed"),
ServiceStatus::Unknown => println!("Service status unknown"),
}
```plaintext
```text

## Service Definition

@ -99,7 +99,7 @@ pub struct Service {
pub stop_command: String,
pub health_check_endpoint: Option<String>,
}
```plaintext
```text

### Example Service Definition

@ -112,7 +112,7 @@ let coredns_service = Service {
stop_command: "systemctl stop coredns".to_string(),
health_check_endpoint: Some("http://localhost:53/health".to_string()),
};
```plaintext
```text

### Service with Dependencies

@ -125,7 +125,7 @@ let oci_registry = Service {
stop_command: "systemctl stop oci-registry".to_string(),
health_check_endpoint: Some("http://localhost:5000/v2/".to_string()),
};
```plaintext
```text

## Configuration

@ -135,7 +135,7 @@ Service orchestration settings in `config.defaults.toml`:
[orchestrator.services]
manager_enabled = true
auto_start_dependencies = true
```plaintext
```text

### Configuration Options

@ -148,7 +148,7 @@ auto_start_dependencies = true

```http
GET /api/v1/services/list
```plaintext
```text

**Response:**

@ -166,13 +166,13 @@ GET /api/v1/services/list
}
]
}
```plaintext
```text

### Get Services Status

```http
GET /api/v1/services/status
```plaintext
```text

**Response:**

@ -190,7 +190,7 @@ GET /api/v1/services/status
}
]
}
```plaintext
```text

## Usage Examples

@ -228,14 +228,14 @@ let oci = Service {
};

orchestrator.register_service(oci).await;
```plaintext
```text

### Start Service with Dependencies

```rust
// This will automatically start coredns first, then oci-registry
orchestrator.start_service("oci-registry").await?;
```plaintext
```text

### Resolve Startup Order

@ -251,7 +251,7 @@ println!("Startup order:");
for (i, service) in order.iter().enumerate() {
println!("{}. {}", i + 1, service);
}
```plaintext
```text

### Start All Services

@ -262,7 +262,7 @@ println!("Started {} services:", started.len());
for service in started {
println!(" ✓ {}", service);
}
```plaintext
```text

### Check Service Health

@ -277,7 +277,7 @@ if health.healthy {
println!("✗ {} is unhealthy", "coredns");
println!(" Message: {}", health.message);
}
```plaintext
```text

## Dependency Graph Examples

@ -285,7 +285,7 @@ if health.healthy {

```plaintext
A -> B -> C
```plaintext
```text

Startup order: A, B, C

@ -293,7 +293,7 @@ Startup order: A, B, C
let a = Service { name: "a".to_string(), dependencies: vec![], /* ... */ };
let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /* ... */ };
```plaintext
```text

### Diamond Dependency

@ -303,7 +303,7 @@ let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /*
B   C
\ /
D
```plaintext
```text

Startup order: A, B, C, D (B and C can start in parallel)

@ -312,7 +312,7 @@ let a = Service { name: "a".to_string(), dependencies: vec![], /* ... */ };
let b = Service { name: "b".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let c = Service { name: "c".to_string(), dependencies: vec!["a".to_string()], /* ... */ };
let d = Service { name: "d".to_string(), dependencies: vec!["b".to_string(), "c".to_string()], /* ... */ };
```plaintext
```text

### Complex Dependency

@ -326,7 +326,7 @@ let d = Service { name: "d".to_string(), dependencies: vec!["b".to_string(), "c"
E   F
\ /
G
```plaintext
```text

Startup order: A, B, C, D, E, F, G

@ -343,7 +343,7 @@ let coredns = Service {
stop_command: "systemctl stop coredns".to_string(),
health_check_endpoint: Some("http://localhost:53/health".to_string()),
};
```plaintext
```text

### OCI Registry Service

@ -356,7 +356,7 @@ let oci_registry = Service {
stop_command: "systemctl stop oci-registry".to_string(),
health_check_endpoint: Some("http://localhost:5000/v2/".to_string()),
};
```plaintext
```text

### Orchestrator Service

@ -369,7 +369,7 @@ let orchestrator = Service {
stop_command: "./scripts/start-orchestrator.nu --stop".to_string(),
health_check_endpoint: Some("http://localhost:9090/health".to_string()),
};
```plaintext
```text

## Error Handling

@ -391,7 +391,7 @@ let c = Service { name: "c".to_string(), dependencies: vec!["b".to_string()], /*
// Error: Circular dependency detected
let result = orchestrator.resolve_startup_order(&["a".to_string()]).await;
assert!(result.is_err());
```plaintext
```text

## Testing

@ -400,7 +400,7 @@ Run service orchestration tests:
```bash
cd provisioning/platform/orchestrator
cargo test test_service_orchestration
```plaintext
```text

## Troubleshooting

@ -2,7 +2,8 @@

## Overview

The SSH Temporal Key Management System provides automated generation, deployment, and cleanup of short-lived SSH keys for secure server access. It eliminates the need for static SSH keys by generating keys on-demand with automatic expiration.
The SSH Temporal Key Management System provides automated generation, deployment, and cleanup of short-lived SSH keys
for secure server access. It eliminates the need for static SSH keys by generating keys on-demand with automatic expiration.

## Features

@ -10,9 +11,9 @@ The SSH Temporal Key Management System provides automated generation, deployment

- **Short-Lived Keys**: Keys expire automatically after a configurable TTL (default: 1 hour)
- **Multiple Key Types**:
- Dynamic Key Pairs (Ed25519)
- Vault OTP (One-Time Password)
- Vault CA-Signed Certificates
- Dynamic Key Pairs (Ed25519)
- Vault OTP (One-Time Password)
- Vault CA-Signed Certificates
- **Automatic Cleanup**: Background task removes expired keys from servers
- **Audit Trail**: All key operations are logged
- **REST API**: HTTP endpoints for integration
@ -30,25 +31,30 @@ The SSH Temporal Key Management System provides automated generation, deployment

## Architecture

```plaintext
┌─────────────────────────────────────────────────────────────┐
┌─────────────────────────────────────────────────
────────────┐
│ SSH Key Manager │
├─────────────────────────────────────────────────────────────┤
├─────────────────────────────────────────────────
────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ ┌──────────────┐ ┌──────────────┐
┌──────────────┐ │
│ │ Key Generator│ │ Key Deployer │ │ Temporal │ │
│ │ (Ed25519) │ │ (SSH Deploy) │ │ Manager │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ └──────────────┘ └──────────────┘
└──────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Vault │ │ Authorized │ │
│ │ SSH Engine │ │ Keys Manager │ │
│ └──────────────┘ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
└─────────────────────────────────────────────────
────────────┘
│ │ │
▼ ▼ ▼
REST API Nushell CLI Background Tasks
```plaintext
```text

## Key Types

@ -62,7 +68,7 @@ Generated on-demand Ed25519 keys that are automatically deployed and cleaned up.

```bash
ssh generate-key server.example.com --user root --ttl 30min
```plaintext
```text

### 2. Vault OTP (One-Time Password)

@ -76,7 +82,7 @@ Vault generates a one-time password for SSH authentication.

```bash
ssh generate-key server.example.com --type otp --ip 192.168.1.100
```plaintext
```text

### 3. Vault CA-Signed Certificates

@ -90,7 +96,7 @@ Vault acts as SSH CA, signing user public keys with short TTL.

```bash
ssh generate-key server.example.com --type ca --principal admin --ttl 1hr
```plaintext
```text

## REST API Endpoints

@ -126,7 +132,7 @@ Response:
"deployed": false
}
}
```plaintext
```text

### Deploy SSH Key

@ -143,7 +149,7 @@ Response:
"deployed_at": "2024-01-01T00:00:00Z"
}
}
```plaintext
```text

### List SSH Keys

@ -164,7 +170,7 @@ Response:
}
]
}
```plaintext
```text

### Revoke SSH Key

@ -176,7 +182,7 @@ Response:
"success": true,
"data": "Key uuid revoked successfully"
}
```plaintext
```text

### Get SSH Key

@ -192,7 +198,7 @@ Response:
...
}
}
```plaintext
```text

### Cleanup Expired Keys

@ -207,7 +213,7 @@ Response:
"cleaned_key_ids": ["uuid1", "uuid2", ...]
}
}
```plaintext
```text

### Get Statistics

@ -230,7 +236,7 @@ Response:
"last_cleanup_at": "2024-01-01T00:00:00Z"
}
}
```plaintext
```text

## Nushell CLI Commands

@ -250,7 +256,7 @@ Examples:
ssh generate-key server.example.com
ssh generate-key server.example.com --user deploy --ttl 30min
ssh generate-key server.example.com --type ca --principal admin
```plaintext
```text

### Deploy Key

@ -259,7 +265,7 @@ ssh deploy-key <key_id>

Example:
ssh deploy-key abc-123-def-456
```plaintext
```text

### List Keys

@ -269,7 +275,7 @@ ssh list-keys [--expired]
Example:
ssh list-keys
ssh list-keys | where deployed == true
```plaintext
```text

### Revoke Key

@ -278,7 +284,7 @@ ssh revoke-key <key_id>

Example:
ssh revoke-key abc-123-def-456
```plaintext
```text

### Connect with Auto-Generated Key

@ -293,7 +299,7 @@ Options:

Example:
ssh connect server.example.com --user deploy
```plaintext
```text

This command:

@ -320,7 +326,7 @@ Example output:

Last cleanup: 2024-01-01T00:00:00Z
Cleaned keys: 5
```plaintext
```text

### Manual Cleanup

@ -333,7 +339,7 @@ Example output:
- abc-123
- def-456
...
```plaintext
```text

## Configuration

@ -361,7 +367,7 @@ let ssh_manager = Arc::new(SshKeyManager::new(ssh_config).await?);

// Start background cleanup task
Arc::clone(&ssh_manager).start_cleanup_task().await;
```plaintext
```text

### Vault SSH Configuration

@ -376,7 +382,7 @@ vault write ssh/roles/otp_key_role \
key_type=otp \
default_user=root \
cidr_list=0.0.0.0/0
```plaintext
```text

#### CA Mode

@ -398,13 +404,13 @@ vault write ssh/roles/default \

# Get CA public key (add to servers' /etc/ssh/trusted-user-ca-keys.pem)
vault read -field=public_key ssh/config/ca
```plaintext
```text

Server configuration (`/etc/ssh/sshd_config`):

```plaintext
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
```plaintext
```text

## Deployment

@ -423,7 +429,7 @@ export VAULT_TOKEN=your-vault-token

# Provisioning SSH key path
export PROVISIONING_SSH_KEY=/path/to/provisioning/key
```plaintext
```text

### Integration with Workflows

@ -439,7 +445,7 @@ ssh root@$server "install-kubernetes.sh"

# Auto-revoke after workflow
ssh revoke-key $ssh_key.id
```plaintext
```text

## Security Considerations

@ -458,13 +464,13 @@ Check SSH connectivity:

```bash
ssh -i /path/to/provisioning/key root@server.example.com
```plaintext
```text

Verify SSH daemon is running:

```bash
systemctl status sshd
```plaintext
```text

### Cleanup Not Working

@ -472,13 +478,13 @@ Check orchestrator logs:

```bash
tail -f ./data/orchestrator.log | grep SSH
```plaintext
```text

Manual cleanup:

```bash
ssh cleanup
```plaintext
```text

### Vault Integration Issues

@ -487,14 +493,14 @@ Test Vault connectivity:

```bash
vault status
vault token lookup
```plaintext
```text

Check SSH secrets engine:

```bash
vault secrets list
vault read ssh/config/ca
```plaintext
```text

## Performance

@ -15,7 +15,7 @@ All backends implement the same `TaskStorage` trait, ensuring consistent behavio
## Backend Comparison

| Feature | Filesystem | SurrealDB Embedded | SurrealDB Server |
|---------|------------|-------------------|------------------|
| --------- | ------------ | ------------------- | ------------------ |
| **Setup Complexity** | Minimal | Low | Medium |
| **External Dependencies** | None | None | SurrealDB Server |
| **Storage Format** | JSON Files | RocksDB | Remote DB |
@ -45,7 +45,7 @@ The default storage backend using JSON files for task persistence. Ideal for dev

# Custom data directory
./orchestrator --storage-type filesystem --data-dir /var/lib/orchestrator
```plaintext
```text

### File Structure

@ -60,7 +60,7 @@ data/
├── uuid1.json # Queue entries with priority
├── uuid2.json
└── ...
```plaintext
```text

### Features

@ -93,7 +93,7 @@ cargo build --features surrealdb

# Run with embedded SurrealDB
./orchestrator --storage-type surrealdb-embedded --data-dir ./data
```plaintext
```text

### Database Schema

@ -126,7 +126,7 @@ cargo build --features surrealdb
--data-dir ./data \
--surrealdb-namespace production \
--surrealdb-database orchestrator
```plaintext
```text

### Best Use Cases

@ -162,7 +162,7 @@ surreal start --log trace --user root --pass root file:orchestrator.db

# Or with TiKV (distributed)
surreal start --log trace --user root --pass root tikv://localhost:2379
```plaintext
```text

### Configuration

@ -180,7 +180,7 @@ surreal start --log trace --user root --pass root tikv://localhost:2379
--surrealdb-database orchestrator \
--surrealdb-username orchestrator-service \
--surrealdb-password "$SURREALDB_PASSWORD"
```plaintext
```text

### Features

@ -223,7 +223,7 @@ Use the migration script to move data between any backend combination:
# Validation and dry-run
./scripts/migrate-storage.nu validate --from filesystem --to surrealdb-embedded
./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded --dry-run
```plaintext
```text

### Migration Features

@ -242,7 +242,7 @@ Use the migration script to move data between any backend combination:
./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded \
--source-dir ./dev-data --target-dir ./prod-data \
--batch-size 100 --verify
```plaintext
```text

#### Scaling Up

@ -253,7 +253,7 @@ Use the migration script to move data between any backend combination:
--surrealdb-url ws://production-surreal:8000 \
--username orchestrator --password "$PROD_PASSWORD" \
--namespace production --database main
```plaintext
```text

#### Disaster Recovery

@ -263,7 +263,7 @@ Use the migration script to move data between any backend combination:
--surrealdb-url ws://failing-server:8000 \
--username admin --password "$PASSWORD" \
--target-dir ./emergency-backup
```plaintext
```text

## Performance Considerations

@ -319,7 +319,7 @@ chmod -R 755 ./data

# Corrupted JSON files
rm ./data/queue.rkvs/tasks/corrupted-file.json
```plaintext
```text

#### SurrealDB Embedded

@ -330,7 +330,7 @@ rm -rf ./data/orchestrator.db

# Permission issues
sudo chown -R $USER:$USER ./data
```plaintext
```text

#### SurrealDB Server

@ -341,7 +341,7 @@ telnet surreal-server 8000

# Authentication failures
# Verify credentials and user permissions
```plaintext
```text

### Debugging Commands

@ -357,7 +357,7 @@ telnet surreal-server 8000

# Monitor migration progress
./scripts/migrate-storage.nu --from filesystem --to surrealdb-embedded --verbose
```plaintext
```text

## Recommendations

@ -1,53 +0,0 @@

Immediate Testing Opportunities

1. Test the Original Problem:
cd klab/wuji
provisioning -c c server --orchestrated
This should now work through the orchestrator, bypassing the deep call stack issue.

2. Test Workflow Management:

# List workflows

nu -c "use ../core/nulib/workflows/management.nu; workflow list"

# Check orchestrator status

nu -c "use ../core/nulib/workflows/management.nu; workflow orchestrator"

# Submit a test server workflow

nu -c "use ../core/nulib/workflows/server_create.nu; server_create_workflow 'test-infra' 'config.user.toml' ['server1']"

Production Readiness Steps

1. Integration Testing:

- Test server creation workflows with real infrastructure
- Verify taskserv deployment through orchestrator
- Test cluster operations via REST API

1. Performance Validation:

- Confirm parallel processing works (the original foreach issue)
- Monitor orchestrator logs during heavy workloads
- Test with your target scale (10-12 servers, ~12 taskservs each)

1. Production Deployment:

- Move orchestrator to permanent location
- Set up systemd service or similar for auto-start
- Configure proper logging and monitoring

Next Development Phase

1. Enhanced Features:

- Dependency management between workflows
- Workflow rollback capabilities
- Real-time progress streaming
- Workflow templates and presets

The hybrid architecture is now complete and ready to solve your deep call stack limitations while preserving all existing Nushell business logic. Would you like to test the
original failing command first?
@ -3,39 +3,27 @@ use std::sync::Arc;
use anyhow::{Context, Result};
use axum::{
extract::{Path, State},
http::StatusCode,
response::Json,
routing::{get, post},
http::StatusCode,
Router,
};
use clap::Parser;
// Use types from the library
use provisioning_orchestrator::{
audit::{
AuditEvent, AuditFilter, AuditQuery, RetentionPolicy, SiemFormat,
},
batch::{
BatchOperationRequest, BatchOperationResult,
},
audit::{AuditEvent, AuditFilter, AuditQuery, RetentionPolicy, SiemFormat},
batch::{BatchOperationRequest, BatchOperationResult},
compliance_routes,
monitor::{
MonitoringEvent, MonitoringEventType,
SystemHealthStatus,
},
rollback::{
Checkpoint, RollbackResult, RollbackStatistics,
},
state::{
ProgressInfo, StateManagerStatistics, StateSnapshot,
SystemMetrics,
monitor::{MonitoringEvent, MonitoringEventType, SystemHealthStatus},
rollback::{Checkpoint, RollbackResult, RollbackStatistics},
state::{ProgressInfo, StateManagerStatistics, StateSnapshot, SystemMetrics},
test_environment::{
CreateTestEnvironmentRequest, RunTestRequest, TestEnvironment, TestEnvironmentResponse,
TestResult,
},
workflow::WorkflowExecutionState,
test_environment::{
CreateTestEnvironmentRequest, RunTestRequest, TestEnvironmentResponse,
TestEnvironment, TestResult,
},
AppState, Args, ClusterWorkflow, CreateServerWorkflow, SharedState,
TaskStatus, TaskservWorkflow, WorkflowTask,
AppState, Args, ClusterWorkflow, CreateServerWorkflow, SharedState, TaskStatus,
TaskservWorkflow, WorkflowTask,
};
use serde::{Deserialize, Serialize};
use tower_http::cors::CorsLayer;

@ -1,6 +1,7 @@
# Testing Guide for Multi-Storage Orchestrator

This document provides comprehensive guidance for testing the multi-storage orchestrator system, including unit tests, integration tests, benchmarks, and performance analysis.
This document provides comprehensive guidance for testing the multi-storage orchestrator system,
including unit tests, integration tests, benchmarks, and performance analysis.

## Overview

@ -29,7 +30,7 @@ src/
├── storage/ # Unit tests embedded in modules
├── migration/tests.rs # Migration unit tests
└── main.rs # Application integration tests
```plaintext
```text

## Running Tests

@ -49,7 +50,7 @@ cargo test --test factory_tests

# Run unit tests only
cargo test --lib
```plaintext
```text

### Using Cargo Aliases

@ -70,7 +71,7 @@ cargo test-integration
cargo test-migration
cargo test-factory
cargo test-unit
```plaintext
```text

## Test Features and Backends

@ -90,7 +91,7 @@ Tests automatically adapt to available features:
async fn test_surrealdb_specific_feature() {
// This test only runs when SurrealDB feature is enabled
}
```plaintext
```text

## Integration Tests

@ -108,7 +109,7 @@ test_all_backends!(test_basic_crud_operations, |storage, gen| async move {
// ... test implementation
Ok(())
});
```plaintext
```text

**Key Test Scenarios:**

@ -134,7 +135,7 @@ cargo test --features surrealdb --test migration_tests
# Test specific migration scenarios
cargo test --features surrealdb test_filesystem_to_embedded_migration
cargo test --features surrealdb test_large_dataset_migration_performance
```plaintext
```text

**Migration Test Coverage:**

@ -157,7 +158,7 @@ cargo test --test factory_tests

# Test configuration validation
cargo test test_storage_config_validation_failures
```plaintext
```text

## Benchmarks

@ -177,7 +178,7 @@ cargo bench-surrealdb # Requires --features surrealdb
cargo bench -- single_enqueue
cargo bench -- batch_operations
cargo bench -- concurrent_operations
```plaintext
```text

**Benchmark Categories:**

@ -198,7 +199,7 @@ cargo bench-migration
# Test migration performance
cargo bench -- basic_migration
cargo bench -- migration_batch_sizes
```plaintext
```text

**Migration Benchmarks:**

@ -220,7 +221,7 @@ use crate::helpers::TestDataGenerator;
let gen = TestDataGenerator::new();
let task = gen.workflow_task();
let batch = gen.workflow_tasks_batch(10);
```plaintext
```text

### StorageTestRunner

@ -231,7 +232,7 @@ use crate::helpers::StorageTestRunner;

let mut runner = StorageTestRunner::new();
runner.run_against_all_backends(test_function).await;
```plaintext
```text

### MockStorage

@ -242,7 +243,7 @@ use crate::helpers::MockStorage;

let mock = MockStorage::new();
mock.set_health(false); // Simulate failure
```plaintext
```text

## Performance Testing

@ -284,7 +285,7 @@ strategy:
rust:
- stable
- beta
```plaintext
```text

### Test Commands for CI

@ -298,7 +299,7 @@ cargo test --doc --all-features

# Benchmark regression tests
cargo bench --all-features -- --test
```plaintext
```text

## Debugging and Troubleshooting

@ -313,7 +314,7 @@ cargo test -- --nocapture

# Run single test with full output
cargo test test_name -- --exact --nocapture
```plaintext
```text

### Common Issues

@ -338,7 +339,7 @@ cargo test-coverage

# View coverage report
open target/tarpaulin-report.html
```plaintext
```text

## Performance Profiling

@ -352,7 +353,7 @@ cargo bench --bench migration_benchmarks -- --profile-time=10
# Generate flame graphs
cargo install flamegraph
cargo flamegraph --bench storage_benchmarks
```plaintext
```text

## Best Practices

@ -1,6 +1,7 @@
|
||||
# KMS Service - Key Management Service
|
||||
|
||||
A unified Key Management Service for the Provisioning platform with support for multiple backends: **Age** (development), **Cosmian KMS** (privacy-preserving), **RustyVault** (self-hosted), **AWS KMS** (cloud-native), and **HashiCorp Vault** (enterprise).
|
||||
A unified Key Management Service for the Provisioning platform with support for multiple backends: **Age** (development),
|
||||
**Cosmian KMS** (privacy-preserving), **RustyVault** (self-hosted), **AWS KMS** (cloud-native), and **HashiCorp Vault** (enterprise).
|
||||
|
||||
## Features
## Architecture

```plaintext
┌─────────────────────────────────────────────────────────┐
│                       KMS Service                       │
├─────────────────────────────────────────────────────────┤
│  REST API (Axum)                                        │
│  ├─ /api/v1/kms/encrypt       POST                      │
│  ├─ /api/v1/kms/decrypt       POST                      │
│  ├─ /api/v1/kms/generate-key  POST (Cosmian only)       │
│  ├─ /api/v1/kms/status        GET                       │
│  └─ /api/v1/kms/health        GET                       │
├─────────────────────────────────────────────────────────┤
│  Unified KMS Service Interface                          │
│  ├─ encrypt(plaintext, context) -> ciphertext           │
│  ├─ decrypt(ciphertext, context) -> plaintext           │
│  ├─ generate_data_key(spec) -> DataKey                  │
│  └─ health_check() -> bool                              │
├─────────────────────────────────────────────────────────┤
│  Backend Implementations                                │
│  ├─ Age Client                                          │
│  │  ├─ X25519 encryption                                │
│  ├─ REST API integration                                │
│  ├─ Zero-knowledge encryption                           │
│  └─ Confidential computing                              │
└─────────────────────────────────────────────────────────┘
```
## Installation
```bash
cd provisioning/platform/kms-service
cargo build --release

# Binary will be at: target/release/kms-service
```
## Configuration
```toml
server_url = "${COSMIAN_KMS_URL:-https://kms.example.com}"
api_key = "${COSMIAN_API_KEY}"
default_key_id = "provisioning-master-key"
tls_verify = true
```
### Environment Variables
```bash
# Development
export PROVISIONING_ENV=dev

# Production
export PROVISIONING_ENV=prod
export COSMIAN_KMS_URL="https://kms.example.com"
export COSMIAN_API_KEY="your-api-key"
```
## Quick Start
```bash
export PROVISIONING_ENV=dev

# 3. Start KMS service
cargo run --bin kms-service
```
### Production Setup (Cosmian)
```bash
export COSMIAN_API_KEY=your-api-key-here

# 4. Start KMS service
cargo run --bin kms-service
```
## Usage
#### Encrypt Data

```bash
curl -X POST http://localhost:8082/api/v1/kms/encrypt \
  -d '{
    "plaintext": "SGVsbG8sIFdvcmxkIQ==",
    "context": "env=prod,service=api"
  }'
```
#### Decrypt Data
```bash
curl -X POST http://localhost:8082/api/v1/kms/decrypt \
  -d '{
    "ciphertext": "...",
    "context": "env=prod,service=api"
  }'
```
#### Generate Data Key (Cosmian only)
```bash
curl -X POST http://localhost:8082/api/v1/kms/generate-key \
  -d '{
    "key_spec": "AES_256"
  }'
```
#### Health Check
```bash
curl http://localhost:8082/api/v1/kms/health
```
### Nushell CLI Integration
```nushell
# Encrypt files
kms encrypt-file secrets.json --output secrets.enc --context "env=prod"

# Decrypt files
kms decrypt-file config.yaml.enc
kms decrypt-file secrets.enc --output secrets.json --context "env=prod"
```
## Backend Comparison
| Feature | Age | RustyVault | Cosmian KMS | AWS KMS | Vault |
| --------- | ----- | ------------ | ------------- | --------- | ------- |
| **Setup** | Simple | Self-hosted | Server setup | AWS account | Enterprise |
| **Speed** | Very fast | Fast | Fast | Fast | Fast |
| **Network** | No | Yes | Yes | Yes | Yes |
### 1. SOPS Integration

```bash
kms encrypt-file workspace/config/secrets.yaml

# SOPS can use KMS for key encryption
# Configure in .sops.yaml to use KMS endpoint
```
### 2. Dynamic Secrets (Provider API Keys)
```rust
// The Rust orchestrator can call the KMS API
let encrypted_key = kms_client.encrypt(api_key.as_bytes(), &context).await?;
```
### 3. SSH Key Management
```bash
# Generate and encrypt temporary SSH keys
ssh-keygen -t ed25519 -f temp_key -N ""
kms encrypt-file temp_key --context "infra=prod,purpose=deployment"
```
### 4. Orchestrator (Workflow Data)
```rust
let encrypted_params = kms_service
    .encrypt(params_json.as_bytes(), &workflow_context)
    .await?;
```
||||
### 5. Control Center (Audit Logs)
|
||||
|
||||
@ -298,7 +304,7 @@ let encrypted_params = kms_service
|
||||
|
||||
```bash
|
||||
cargo test
|
||||
```plaintext
|
||||
```text
|
||||
|
||||
### Integration Tests
```bash
# Age backend tests
cargo test age

# Cosmian backend tests (require a running KMS server)
export COSMIAN_KMS_URL=http://localhost:9999
export COSMIAN_API_KEY=test-key
cargo test cosmian -- --ignored
```
## Deployment
### Docker

```dockerfile
RUN apt-get update && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/kms-service /usr/local/bin/
ENTRYPOINT ["kms-service"]
```
### Kubernetes (Production with Cosmian)
```yaml
# Fragment of the Deployment container spec (indentation approximate)
              key: api-key
        ports:
          - containerPort: 8082
```
### systemd Service
```ini
Restart=always

[Install]
WantedBy=multi-user.target
```
## Security Best Practices
## Monitoring

```bash
# Service status
curl http://localhost:8082/api/v1/kms/status

# Health check
curl http://localhost:8082/api/v1/kms/health
```
### Logs
```bash
export RUST_LOG="kms_service=debug,tower_http=debug"

# View logs
journalctl -u kms-service -f
```
## Troubleshooting
### Age Backend Issues

```bash
# Check the Age key pair
cat ~/.config/provisioning/age/public_key.txt

# Test encryption manually
echo "test" | age -r $(cat ~/.config/provisioning/age/public_key.txt) > test.enc
age -d -i ~/.config/provisioning/age/private_key.txt test.enc
```
### Cosmian KMS Issues
```bash
curl -X POST https://kms.example.com/api/v1/encrypt \
  -H "X-API-Key: $COSMIAN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"keyId":"master-key","data":"SGVsbG8="}'
```
## License