fix: eliminate stub implementations across 6 integration points
Some checks failed
Documentation Lint & Validation / Markdown Linting (push) Has been cancelled
Documentation Lint & Validation / Validate mdBook Configuration (push) Has been cancelled
Documentation Lint & Validation / Content & Structure Validation (push) Has been cancelled
mdBook Build & Deploy / Build mdBook (push) Has been cancelled
Rust CI / Security Audit (push) Has been cancelled
Rust CI / Check + Test + Lint (nightly) (push) Has been cancelled
Rust CI / Check + Test + Lint (stable) (push) Has been cancelled
Documentation Lint & Validation / Lint & Validation Summary (push) Has been cancelled
mdBook Build & Deploy / Documentation Quality Check (push) Has been cancelled
mdBook Build & Deploy / Deploy to GitHub Pages (push) Has been cancelled
mdBook Build & Deploy / Notification (push) Has been cancelled
- WorkflowOrchestrator and WorkflowService wired in main.rs (non-fatal)
- try_fallback_with_budget actually calls fallback providers
- vapora-tracking persistence: real TrackingEntry + NatsPublisher
- vapora-doc-lifecycle: workspace + classify/consolidate/rag/NATS stubs
- Merkle hash chain audit trail (tamper-evident, verify_integrity)
- /api/v1/workflows/* routes operational; get_workflow_audit Result fix
- ADR-0039, CHANGELOG, workflow-orchestrator docs updated
This commit is contained in:
parent: e5e2244e04
commit: 847523e4d4
CHANGELOG.md (49 changed lines)
@@ -7,6 +7,55 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Fixed - Stub Elimination: Real Implementations for 6 Hollow Integration Points
#### `vapora-backend` — WorkflowOrchestrator and WorkflowService wiring

- **`WorkflowOrchestrator` was never injected** (`main.rs`): `POST /schedules/:id/fire` always returned 503 because `app_state.workflow_orchestrator` was always `None`. Fixed: non-fatal NATS connect + `WorkflowOrchestrator::new(config, swarm, kg, nats, db)` in `main.rs`; 503 only when NATS is genuinely unavailable.
- **`WorkflowService` was missing from `AppState`**: `api/workflows.rs` existed with all handlers referencing `state.workflow_service`, but the field did not exist — the module was commented out with `// TODO: Phase 4`. Fixed:
  - `workflow_service: Option<Arc<WorkflowService>>` added to `AppState`
  - `with_workflow_service(Arc<WorkflowService>)` builder added
  - Non-fatal init chain in `main.rs`: `AgentRegistry → AgentCoordinator → StepExecutor → WorkflowEngine + WorkflowBroadcaster + AuditTrail → WorkflowService`
  - `pub mod workflows` uncommented in `api/mod.rs`
  - `.nest("/api/v1/workflows", api::workflows::workflow_routes())` added to the router; 503 on coordinator init failure
- **`get_workflow_audit` Result bug**: `workflow_service.get_audit_trail(&id).await` returned `anyhow::Result<Vec<AuditEntry>>` but the result was used directly as `Vec<AuditEntry>` — a compile-time oversight. Fixed with `.map_err(|e| ApiError(...))?`.
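The non-fatal wiring pattern above — an `Option`-typed service slot on the shared state, a chainable builder, and handlers that answer 503 when the slot is empty — can be sketched independently of axum. The types here are illustrative stand-ins, not the real `vapora-backend` definitions:

```rust
use std::sync::Arc;

// Illustrative stand-ins for WorkflowService / AppState; the real types
// live in vapora-backend and carry many more fields.
struct WorkflowService;

#[derive(Default)]
struct AppState {
    workflow_service: Option<Arc<WorkflowService>>,
}

impl AppState {
    // Chainable builder: wiring is optional, so startup never
    // hard-fails when the service cannot be constructed.
    fn with_workflow_service(mut self, svc: Arc<WorkflowService>) -> Self {
        self.workflow_service = Some(svc);
        self
    }
}

// Handlers check the slot and map `None` to a 503-style error instead
// of panicking, mirroring the `ok_or_else` guards in the handlers.
fn list_workflows(state: &AppState) -> Result<&'static str, u16> {
    state.workflow_service.as_ref().ok_or(503u16)?;
    Ok("[]")
}

fn main() {
    let bare = AppState::default();
    assert_eq!(list_workflows(&bare), Err(503)); // service absent → 503

    let wired = AppState::default().with_workflow_service(Arc::new(WorkflowService));
    assert_eq!(list_workflows(&wired), Ok("[]")); // service wired → handler runs
}
```

The point of the `Option` + builder shape is that a degraded dependency (here, a missing coordinator) degrades one route group rather than aborting the whole process.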
#### `vapora-llm-router` — `try_fallback_with_budget` was a no-op

- `try_fallback_with_budget` iterated the fallback provider list but never called any provider — the loop body only collected names. Fixed: it now accepts `prompt: String` + `context: Option<String>`, calls `provider.complete(prompt.clone(), context.clone()).await`, logs cost on success, logs the error on per-provider failure, and returns `AllProvidersFailed` only when all providers are exhausted.
- `complete_with_budget` now clones `prompt`/`context` before the primary call so ownership is available for the fallback path.
- Pre-existing `cost as u32` no-op casts removed (both occurrences).
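The fixed control flow — call each fallback in order, short-circuit on the first success, and fail only once the list is exhausted — can be sketched with plain function pointers standing in for the provider trait (names here are illustrative, not the crate's real API):

```rust
// Minimal sketch of the fallback loop; `Provider` stands in for the
// real async provider trait.
type Provider = fn(&str) -> Result<String, String>;

fn try_fallback(providers: &[(&str, Provider)], prompt: &str) -> Result<String, String> {
    for (name, complete) in providers {
        match complete(prompt) {
            // First provider that answers wins; the real code logs
            // the call's cost here before returning.
            Ok(reply) => return Ok(reply),
            // Per-provider failure is logged, then the loop continues.
            Err(e) => eprintln!("provider {name} failed: {e}"),
        }
    }
    // Mirrors `AllProvidersFailed`: only reached when every provider errored.
    Err("all providers failed".to_string())
}

fn flaky(_prompt: &str) -> Result<String, String> {
    Err("timeout".to_string())
}

fn backup(prompt: &str) -> Result<String, String> {
    Ok(format!("echo: {prompt}"))
}

fn main() {
    let providers: [(&str, Provider); 2] = [("flaky", flaky), ("backup", backup)];
    let reply = try_fallback(&providers, "hello").unwrap();
    assert_eq!(reply, "echo: hello"); // flaky errored, backup answered
}
```

The original bug was exactly the missing `complete(...)` call inside this loop: the names were collected, but no request ever left the router.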
#### `vapora-tracking` — hollow VAPORA integration layer

- `TrackingPlugin::on_task_completed`: was `Ok(())`. Now constructs a real `TrackingEntry` (`source: WorkflowYaml`, `impact: Backend`) and calls `self.db.insert_entry(&entry).await?`.
- `TrackingPlugin::on_document_created`: was `Ok(())`. Now constructs a real `TrackingEntry` (`source: CoderChanges`, `impact: Docs`, `files_affected: 1`, `details_link: Some(path)`) and persists it.
- `events` module (`#[cfg(feature = "async-nats")]`): `NatsPublisher` struct implemented — wraps `Arc<async_nats::Client>`; `publish_entry_created(&TrackingEntry)` serializes to JSON and publishes to the `{prefix}.{source:?}` subject.
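The `{prefix}.{source:?}` subject naming leans on the `Debug` rendering of the source enum: each variant name becomes the subject's last token, so consumers can subscribe per source with a NATS wildcard. A minimal sketch (the enum subset and prefix value are illustrative assumptions):

```rust
// Illustrative subset of the tracking source enum; the `{:?}` (Debug)
// rendering of the variant becomes the NATS subject suffix.
#[derive(Debug)]
enum TrackingSource {
    WorkflowYaml,
    CoderChanges,
}

// Mirrors the subject construction described for `publish_entry_created`:
// `{prefix}.{source:?}`.
fn subject(prefix: &str, source: &TrackingSource) -> String {
    format!("{prefix}.{source:?}")
}

fn main() {
    // Prefix value is a made-up example, not the crate's configured default.
    assert_eq!(
        subject("vapora.tracking", &TrackingSource::WorkflowYaml),
        "vapora.tracking.WorkflowYaml"
    );
    assert_eq!(
        subject("vapora.tracking", &TrackingSource::CoderChanges),
        "vapora.tracking.CoderChanges"
    );
}
```

A subscriber interested in only one source can then listen on e.g. `vapora.tracking.CoderChanges`, while `vapora.tracking.>` catches everything.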
#### `vapora-doc-lifecycle` — workspace integration + all three plugin stubs

- Crate added to workspace members (it was completely isolated — `cargo check -p vapora-doc-lifecycle` returned "no match").
- `Cargo.toml`: broken `doc-lifecycle-core` path fixed (`../doc-lifecycle-core` → correct relative path to `Tools/doc-lifecycle-manager/crates/doc-lifecycle-core`).
- `error.rs`: added `From<std::io::Error>`.
- `classify_session_docs(task_id)`: scans the `.coder/` directory via an async stack-walk (`collect_md_docs`), calls `Classifier::classify(path, Some(content))` on each `.md` file, and logs type + confidence.
- `consolidate_docs()`: scans `config.docs_root`, calls `Consolidator::find_duplicates(&docs)`, and warns on each `SimilarityMatch` with the path pair and score.
- `update_rag_index()`: scans `config.docs_root`, chunks each doc via `RagIndexer::chunk_document`, calls `generate_embeddings`, zips embeddings back into `chunk.metadata.embedding`, and calls `build_index`.
- `documenter.rs`: added a `nats: Option<Arc<async_nats::Client>>` field + `with_nats()` builder.
- `update_root_files(task_id)`: appends a timestamped line to `{docs_root}/CHANGES.md` using `OpenOptions::append`.
- `publish_docs_updated_event(task_id)`: JSON payload `{event, task_id, timestamp}` published to `config.nats_subject` when NATS is configured; debug-logged and skipped when not.
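The stack-walk shape used by `collect_md_docs` — an explicit work stack instead of recursion, which matters in async code where recursive `async fn`s need boxing — can be sketched synchronously with std alone. The real implementation uses `tokio::fs`; this std version is a sketch of the same traversal idea:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Sync sketch of an iterative (stack-based) directory walk collecting
// `.md` files. The real `collect_md_docs` does the same with tokio::fs,
// where an explicit stack avoids recursive async fns.
fn collect_md_docs(root: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut docs = Vec::new();
    let mut stack = vec![root.to_path_buf()];
    while let Some(dir) = stack.pop() {
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            if path.is_dir() {
                stack.push(path); // descend later instead of recursing now
            } else if path.extension().is_some_and(|e| e == "md") {
                docs.push(path);
            }
        }
    }
    Ok(docs)
}

fn main() -> std::io::Result<()> {
    // Build a small demo tree under the OS temp directory.
    let dir = std::env::temp_dir().join("vapora_md_walk_demo");
    fs::create_dir_all(dir.join("nested"))?;
    fs::write(dir.join("a.md"), "# a")?;
    fs::write(dir.join("nested").join("b.md"), "# b")?;
    fs::write(dir.join("c.txt"), "skip")?;

    let docs = collect_md_docs(&dir)?;
    assert_eq!(docs.len(), 2); // only the two .md files are collected
    Ok(())
}
```

Each collected path is then what gets handed to `Classifier::classify(path, Some(content))` in the classify step.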
#### `audit/mod.rs` — Merkle tamper-evident audit trail (previous session)

- **Replaced the append-only log** with a hash-chained Merkle audit trail.
- `block_hash = SHA256(prev_hash|seq|entry_id|timestamp_rfc3339|workflow_id|event_type|actor|details_json)` — modifying any field invalidates the hash and every subsequent entry.
- `prev_hash` on the genesis entry is `GENESIS_HASH` (64 zeros).
- `write_lock: Arc<Mutex<()>>` serializes writes so the `(seq, prev_hash)` pair fetched from the DB is always consistent.
- `verify_integrity(workflow_id) -> IntegrityReport` — recomputes every block hash from stored fields; returns `IntegrityReport { valid: bool, total_entries, first_tampered_seq: Option<i64> }`.
- `AuditEntry` gains `prev_hash: String` and `block_hash: String` fields; SurrealDB schema updated.
- **ADR-0039**: design rationale, limitations (truncation, single-process lock, no HMAC key), and deferred alternatives (NATS append-only stream, HMAC authentication).
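The chaining and verification idea can be sketched with std alone. The real trail hashes a pipe-delimited canonical string with SHA-256 (`sha2` crate); here std's `DefaultHasher` stands in so the example compiles without external crates, and the entry fields are reduced to a minimum:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Reduced sketch of an audit entry; the real one carries entry_id,
// timestamp, workflow_id, event_type, actor, and JSON details.
struct Entry {
    seq: i64,
    details: String,
    prev_hash: u64,
    block_hash: u64,
}

const GENESIS_HASH: u64 = 0; // real code: a string of 64 zero hex chars

// Stand-in for SHA256(prev_hash|seq|...|details_json); field boundaries
// stay explicit, like the pipe-delimited canonical form.
fn block_hash(prev_hash: u64, seq: i64, details: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (prev_hash, seq, details).hash(&mut h);
    h.finish()
}

fn append(chain: &mut Vec<Entry>, details: &str) {
    let (seq, prev_hash) = match chain.last() {
        Some(e) => (e.seq + 1, e.block_hash),
        None => (0, GENESIS_HASH),
    };
    let hash = block_hash(prev_hash, seq, details);
    chain.push(Entry { seq, details: details.to_string(), prev_hash, block_hash: hash });
}

// Mirrors verify_integrity: recompute every hash from stored fields and
// report the first sequence number where the chain breaks.
fn first_tampered_seq(chain: &[Entry]) -> Option<i64> {
    let mut expected_prev = GENESIS_HASH;
    for e in chain {
        if e.prev_hash != expected_prev
            || e.block_hash != block_hash(e.prev_hash, e.seq, &e.details)
        {
            return Some(e.seq);
        }
        expected_prev = e.block_hash;
    }
    None
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, "workflow started");
    append(&mut chain, "step executed");
    assert_eq!(first_tampered_seq(&chain), None);

    // Tamper with entry 0: its recomputed hash no longer matches, and the
    // stale hash also breaks the link into entry 1.
    chain[0].details = "forged".to_string();
    assert_eq!(first_tampered_seq(&chain), Some(0));
}
```

As the ADR notes, this detects tampering but does not prevent it — an attacker who can rewrite the whole suffix (and who knows the hash function, which is keyless without HMAC) can re-chain it, which is why HMAC and an append-only stream are listed as deferred hardening.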
---

### Added - Security Layer: SSRF Protection and Prompt Injection Scanning

#### `vapora-backend/src/security/` — new module
Cargo.lock (generated, 37 changed lines)
@@ -2807,6 +2807,22 @@ dependencies = [
  "urlencoding",
 ]

+[[package]]
+name = "doc-lifecycle-core"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "globset",
+ "once_cell",
+ "regex",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.18",
+ "tokio",
+ "tracing",
+ "walkdir",
+]
+
 [[package]]
 name = "document-features"
 version = "0.2.12"
@@ -12394,6 +12410,7 @@ dependencies = [
  "clap",
  "dotenv",
  "futures",
+ "hex",
  "http",
  "jsonwebtoken 10.3.0",
  "lazy_static",
@@ -12407,6 +12424,7 @@ dependencies = [
  "serde",
  "serde_json",
  "serde_yaml",
+ "sha2",
  "sqlx",
  "surrealdb 3.0.0",
  "tempfile",
@@ -12478,6 +12496,25 @@ dependencies = [
  "vapora-shared",
 ]

+[[package]]
+name = "vapora-doc-lifecycle"
+version = "1.2.0"
+dependencies = [
+ "anyhow",
+ "async-nats",
+ "async-trait",
+ "chrono",
+ "doc-lifecycle-core",
+ "serde",
+ "serde_json",
+ "tempfile",
+ "thiserror 2.0.18",
+ "tokio",
+ "tracing",
+ "tracing-subscriber",
+ "uuid",
+]
+
 [[package]]
 name = "vapora-frontend"
 version = "1.2.0"
@@ -15,6 +15,7 @@ members = [
     "crates/vapora-a2a",
     "crates/vapora-a2a-client",
     "crates/vapora-tracking",
+    "crates/vapora-doc-lifecycle",
     "crates/vapora-worktree",
     "crates/vapora-knowledge-graph",
     "crates/vapora-analytics",
@@ -74,6 +74,8 @@ tower-cookies = { workspace = true }
 # Utilities
 uuid = { workspace = true }
 chrono = { workspace = true }
+sha2 = { workspace = true }
+hex = { workspace = true }
 dotenv = { workspace = true }
 once_cell = { workspace = true }
 regex = { workspace = true }
@@ -20,8 +20,7 @@ pub mod tasks;
 pub mod tracking;
 pub mod websocket;
 pub mod workflow_orchestrator;
-// pub mod workflows; // TODO: Phase 4 - Re-enable when workflow module imports
-// are fixed
+pub mod workflows;

 pub use error::ApiResult;
 // pub use error::ApiError; // Temporarily commented - remove ApiError export
@@ -10,6 +10,7 @@ use vapora_workflow_engine::{ScheduleStore, WorkflowOrchestrator};
 use crate::config::NotificationConfig;
 use crate::services::{
     AgentService, ProjectService, ProposalService, ProviderAnalyticsService, TaskService,
+    WorkflowService,
 };

 /// Application state shared across all API handlers
@@ -21,6 +22,7 @@ pub struct AppState {
     pub proposal_service: Arc<ProposalService>,
     pub provider_analytics_service: Arc<ProviderAnalyticsService>,
     pub workflow_orchestrator: Option<Arc<WorkflowOrchestrator>>,
+    pub workflow_service: Option<Arc<WorkflowService>>,
     pub rlm_engine: Option<Arc<RLMEngine<SurrealDBStorage>>>,
     pub schedule_store: Option<Arc<ScheduleStore>>,
     /// Outbound notification channels; `None` when `[channels]` is absent from
@@ -46,6 +48,7 @@ impl AppState {
             proposal_service: Arc::new(proposal_service),
             provider_analytics_service: Arc::new(provider_analytics_service),
             workflow_orchestrator: None,
+            workflow_service: None,
             rlm_engine: None,
             schedule_store: None,
             channel_registry: None,
@@ -53,6 +56,12 @@ impl AppState {
         }
     }

+    /// Attach the workflow service for `/api/v1/workflows/*` handlers.
+    pub fn with_workflow_service(mut self, svc: Arc<WorkflowService>) -> Self {
+        self.workflow_service = Some(svc);
+        self
+    }
+
     /// Add workflow orchestrator to state
     #[allow(dead_code)]
     pub fn with_workflow_orchestrator(mut self, orchestrator: Arc<WorkflowOrchestrator>) -> Self {
@@ -1,10 +1,6 @@
 // vapora-backend: Workflow API endpoints
 // Phase 3: REST API for workflow management

-use crate::api::error::ApiError;
-use crate::api::state::AppState;
-use crate::audit::AuditEntry;
-use crate::workflow::{parser::WorkflowParser, Workflow};
 use axum::{
     extract::{Path, State},
     http::StatusCode,
@@ -15,6 +11,11 @@ use serde::{Deserialize, Serialize};
 use tracing::error;
 use vapora_shared::VaporaError;

+use crate::api::error::ApiError;
+use crate::api::state::AppState;
+use crate::audit::AuditEntry;
+use crate::workflow::{parser::WorkflowParser, Workflow};
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct CreateWorkflowRequest {
     pub yaml: String,
@@ -55,10 +56,11 @@ pub fn workflow_routes() -> Router<AppState> {
 async fn list_workflows(
     State(state): State<AppState>,
 ) -> Result<Json<WorkflowListResponse>, ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

     let workflows = workflow_service.list_workflows().await;

@@ -70,14 +72,19 @@ async fn create_workflow(
     State(state): State<AppState>,
     Json(req): Json<CreateWorkflowRequest>,
 ) -> Result<(StatusCode, Json<WorkflowResponse>), ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

     // Parse YAML
-    let workflow = WorkflowParser::parse_string(&req.yaml)
-        .map_err(|e| ApiError(VaporaError::InvalidInput(format!("Invalid workflow YAML: {}", e))))?;
+    let workflow = WorkflowParser::parse_string(&req.yaml).map_err(|e| {
+        ApiError(VaporaError::InvalidInput(format!(
+            "Invalid workflow YAML: {}",
+            e
+        )))
+    })?;

     // Create workflow
     let created = workflow_service
@@ -99,10 +106,11 @@ async fn get_workflow(
     State(state): State<AppState>,
     Path(id): Path<String>,
 ) -> Result<Json<WorkflowResponse>, ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

     let workflow = workflow_service.get_workflow(&id).await.map_err(|e| {
         error!("Failed to get workflow {}: {}", id, e);
@@ -117,10 +125,11 @@ async fn execute_workflow(
     State(state): State<AppState>,
     Path(id): Path<String>,
 ) -> Result<Json<WorkflowResponse>, ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

     let workflow = workflow_service.execute_workflow(&id).await.map_err(|e| {
         error!("Failed to execute workflow {}: {}", id, e);
@@ -135,10 +144,11 @@ async fn rollback_workflow(
     State(state): State<AppState>,
     Path(id): Path<String>,
 ) -> Result<Json<MessageResponse>, ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

     workflow_service.rollback_workflow(&id).await.map_err(|e| {
         error!("Failed to rollback workflow {}: {}", id, e);
@@ -156,12 +166,16 @@ async fn get_workflow_audit(
     State(state): State<AppState>,
     Path(id): Path<String>,
 ) -> Result<Json<AuditResponse>, ApiError> {
-    let workflow_service = state
-        .workflow_service
-        .as_ref()
-        .ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
+    let workflow_service = state.workflow_service.as_ref().ok_or_else(|| {
+        ApiError(VaporaError::InternalError(
+            "Workflow service not available".to_string(),
+        ))
+    })?;

-    let entries = workflow_service.get_audit_trail(&id).await;
+    let entries = workflow_service.get_audit_trail(&id).await.map_err(|e| {
+        error!("Failed to get audit trail for {}: {}", id, e);
+        ApiError(VaporaError::InternalError(e.to_string()))
+    })?;

     Ok(Json(AuditResponse { entries }))
 }
@@ -177,8 +191,8 @@ async fn get_workflow_audit(
 // } else {
 //     (
 //         StatusCode::SERVICE_UNAVAILABLE,
-//         Json(serde_json::json!({"error": "Workflow service not available"})),
-//     )
+//         Json(serde_json::json!({"error": "Workflow service not
+// available"})), )
 //     .into_response()
 // }
 // }
@ -1,139 +1,287 @@
|
|||||||
// vapora-backend: Audit trail system
|
|
||||||
// Phase 3: Track all workflow events and actions
|
|
||||||
|
|
||||||
use std::sync::Arc;
|
use std::sync::Arc;
|
||||||
|
|
||||||
|
use anyhow::{Context, Result};
|
||||||
use chrono::{DateTime, Utc};
|
use chrono::{DateTime, Utc};
|
||||||
use serde::{Deserialize, Serialize};
|
use serde::{Deserialize, Serialize};
|
||||||
use tokio::sync::RwLock;
|
use sha2::{Digest, Sha256};
|
||||||
|
use surrealdb::engine::remote::ws::Client;
|
||||||
|
use surrealdb::Surreal;
|
||||||
|
use tokio::sync::Mutex;
|
||||||
|
use tracing::warn;
|
||||||
|
|
||||||
#[allow(dead_code)]
|
/// The prev_hash value for the genesis (first) entry in the chain.
|
||||||
|
const GENESIS_HASH: &str = "0000000000000000000000000000000000000000000000000000000000000000";
|
||||||
|
|
||||||
|
/// A single tamper-evident audit entry.
|
||||||
|
///
|
||||||
|
/// `block_hash = SHA256(prev_hash | seq | entry_id | timestamp | workflow_id |
|
||||||
|
/// event_type | actor | details_json)`
|
||||||
|
///
|
||||||
|
/// Modifying any field invalidates `block_hash`, and since the next entry
|
||||||
|
/// embeds this hash as `prev_hash`, the entire suffix of the chain is
|
||||||
|
/// invalidated — making tampering detectable by `verify_integrity`.
|
||||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||||
pub struct AuditEntry {
|
pub struct AuditEntry {
|
||||||
pub id: String,
|
pub seq: i64,
|
||||||
|
pub entry_id: String,
|
||||||
pub timestamp: DateTime<Utc>,
|
pub timestamp: DateTime<Utc>,
|
||||||
pub workflow_id: String,
|
pub workflow_id: String,
|
||||||
pub event_type: String,
|
pub event_type: String,
|
||||||
pub actor: String,
|
pub actor: String,
|
||||||
pub details: serde_json::Value,
|
pub details: serde_json::Value,
|
||||||
|
pub prev_hash: String,
|
||||||
|
pub block_hash: String,
|
||||||
}
|
}
|
||||||
|
|
||||||
impl AuditEntry {
|
/// Report returned by `AuditTrail::verify_integrity`.
|
||||||
pub fn new(
|
#[derive(Debug)]
|
||||||
workflow_id: String,
|
#[allow(dead_code)]
|
||||||
event_type: String,
|
pub struct IntegrityReport {
|
||||||
actor: String,
|
pub valid: bool,
|
||||||
details: serde_json::Value,
|
pub total_entries: usize,
|
||||||
) -> Self {
|
/// Sequence number of the first entry where the chain is broken, if any.
|
||||||
Self {
|
pub first_tampered_seq: Option<i64>,
|
||||||
id: uuid::Uuid::new_v4().to_string(),
|
|
||||||
timestamp: Utc::now(),
|
|
||||||
workflow_id,
|
|
||||||
event_type,
|
|
||||||
actor,
|
|
||||||
details,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Audit trail maintains history of workflow events
|
/// Compute the Merkle block hash for an entry.
|
||||||
|
///
|
||||||
|
/// The canonical input is pipe-delimited so that field boundaries cannot be
|
||||||
|
/// blurred: `prev_hash|seq|entry_id|timestamp_rfc3339|workflow_id|event_type|
|
||||||
|
/// actor|details_json`.
|
||||||
|
#[allow(clippy::too_many_arguments)]
|
||||||
|
fn compute_block_hash(
|
||||||
|
prev_hash: &str,
|
||||||
|
seq: i64,
|
||||||
|
entry_id: &str,
|
||||||
|
timestamp: &DateTime<Utc>,
|
||||||
|
workflow_id: &str,
|
||||||
|
event_type: &str,
|
||||||
|
actor: &str,
|
||||||
|
details: &serde_json::Value,
|
||||||
|
) -> String {
|
||||||
|
let canonical = format!(
|
||||||
|
"{prev_hash}|{seq}|{entry_id}|{ts}|{workflow_id}|{event_type}|{actor}|{details}",
|
||||||
|
ts = timestamp.to_rfc3339(),
|
||||||
|
);
|
||||||
|
hex::encode(Sha256::digest(canonical.as_bytes()))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Tamper-evident audit trail backed by SurrealDB.
|
||||||
|
///
|
||||||
|
/// Writes are serialized through `write_lock` so that the seq/prev_hash pair
|
||||||
|
/// is always read and written atomically (no gaps, no forks).
|
||||||
#[allow(dead_code)]
|
#[allow(dead_code)]
|
||||||
pub struct AuditTrail {
|
pub struct AuditTrail {
|
||||||
entries: Arc<RwLock<Vec<AuditEntry>>>,
|
db: Surreal<Client>,
|
||||||
|
write_lock: Arc<Mutex<()>>,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[allow(dead_code)]
|
#[allow(dead_code)]
|
||||||
impl AuditTrail {
|
impl AuditTrail {
|
||||||
pub fn new() -> Self {
|
pub fn new(db: Surreal<Client>) -> Self {
|
||||||
Self {
|
Self {
|
||||||
entries: Arc::new(RwLock::new(Vec::new())),
|
db,
|
||||||
|
write_lock: Arc::new(Mutex::new(())),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Log a workflow event
|
/// Append a new event to the chain.
|
||||||
|
///
|
||||||
|
/// Computes `block_hash` from the previous entry's hash (or GENESIS_HASH
|
||||||
|
/// for the first entry) and persists the record to SurrealDB.
|
||||||
pub async fn log_event(
|
pub async fn log_event(
|
||||||
&self,
|
&self,
|
||||||
workflow_id: String,
|
workflow_id: String,
|
||||||
event_type: String,
|
event_type: String,
|
||||||
actor: String,
|
actor: String,
|
||||||
details: serde_json::Value,
|
details: serde_json::Value,
|
||||||
) {
|
) -> Result<AuditEntry> {
|
||||||
let entry = AuditEntry::new(workflow_id, event_type, actor, details);
|
let _guard = self.write_lock.lock().await;
|
||||||
let mut entries = self.entries.write().await;
|
|
||||||
entries.push(entry);
|
let mut response = self
|
||||||
|
.db
|
||||||
|
.query("SELECT seq, block_hash FROM audit_entries ORDER BY seq DESC LIMIT 1")
|
||||||
|
.await
|
||||||
|
.context("querying last audit entry")?;
|
||||||
|
|
||||||
|
let last: Vec<serde_json::Value> = response.take(0).context("taking audit query result")?;
|
||||||
|
|
||||||
|
let (next_seq, prev_hash) = match last.first() {
|
||||||
|
Some(v) => {
|
||||||
|
let seq = v["seq"].as_i64().unwrap_or(0);
|
||||||
|
let hash = v["block_hash"].as_str().unwrap_or(GENESIS_HASH).to_string();
|
||||||
|
(seq + 1, hash)
|
||||||
|
}
|
||||||
|
None => (0, GENESIS_HASH.to_string()),
|
||||||
|
};
|
||||||
|
|
||||||
|
let entry_id = uuid::Uuid::new_v4().to_string();
|
||||||
|
let timestamp = Utc::now();
|
||||||
|
let block_hash = compute_block_hash(
|
||||||
|
&prev_hash,
|
||||||
|
next_seq,
|
||||||
|
&entry_id,
|
||||||
|
×tamp,
|
||||||
|
&workflow_id,
|
||||||
|
&event_type,
|
||||||
|
&actor,
|
||||||
|
&details,
|
||||||
|
);
|
||||||
|
|
||||||
|
let entry = AuditEntry {
|
||||||
|
seq: next_seq,
|
||||||
|
entry_id,
|
||||||
|
timestamp,
|
||||||
|
workflow_id,
|
||||||
|
event_type,
|
||||||
|
actor,
|
||||||
|
details,
|
||||||
|
prev_hash,
|
||||||
|
block_hash,
|
||||||
|
};
|
||||||
|
|
||||||
|
let json = serde_json::to_value(&entry).context("serializing audit entry")?;
|
||||||
|
let _: Option<serde_json::Value> = self
|
||||||
|
.db
|
||||||
|
.create("audit_entries")
|
||||||
|
.content(json)
|
||||||
|
.await
|
||||||
|
.context("persisting audit entry to SurrealDB")?;
|
||||||
|
|
||||||
|
Ok(entry)
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Get audit entries for a workflow
|
/// Fetch all entries for a workflow, ordered by seq ascending.
|
||||||
pub async fn get_workflow_audit(&self, workflow_id: &str) -> Vec<AuditEntry> {
|
pub async fn get_workflow_audit(&self, workflow_id: &str) -> Result<Vec<AuditEntry>> {
|
||||||
let entries = self.entries.read().await;
|
let mut response = self
|
||||||
entries
|
.db
|
||||||
.iter()
|
.query("SELECT * FROM audit_entries WHERE workflow_id = $wf ORDER BY seq ASC")
|
||||||
.filter(|e| e.workflow_id == workflow_id)
|
.bind(("wf", workflow_id.to_string()))
|
||||||
.cloned()
|
.await
|
||||||
.collect()
|
.context("querying workflow audit entries")?;
|
||||||
|
|
||||||
|
let raw: Vec<serde_json::Value> = response.take(0).context("taking audit query")?;
|
||||||
|
Ok(raw
|
||||||
|
.into_iter()
|
||||||
|
.filter_map(|v| serde_json::from_value(v).ok())
|
||||||
|
.collect())
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Get all audit entries
|
/// Fetch all entries across all workflows, ordered by seq ascending.
|
||||||
pub async fn get_all_entries(&self) -> Vec<AuditEntry> {
|
pub async fn get_all_entries(&self) -> Result<Vec<AuditEntry>> {
|
||||||
let entries = self.entries.read().await;
|
let mut response = self
|
||||||
entries.clone()
|
.db
|
||||||
|
.query("SELECT * FROM audit_entries ORDER BY seq ASC")
|
||||||
|
.await
|
||||||
|
.context("querying all audit entries")?;
|
||||||
|
|
||||||
|
let raw: Vec<serde_json::Value> = response.take(0).context("taking audit query")?;
|
||||||
|
Ok(raw
|
||||||
|
.into_iter()
|
||||||
|
.filter_map(|v| serde_json::from_value(v).ok())
|
||||||
|
.collect())
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Get entries by event type
|
/// Fetch entries by event type, ordered by seq ascending.
|
||||||
pub async fn get_by_event_type(&self, event_type: &str) -> Vec<AuditEntry> {
|
pub async fn get_by_event_type(&self, event_type: &str) -> Result<Vec<AuditEntry>> {
|
||||||
let entries = self.entries.read().await;
|
let mut response = self
|
||||||
entries
|
.db
|
||||||
.iter()
|
.query("SELECT * FROM audit_entries WHERE event_type = $et ORDER BY seq ASC")
|
||||||
.filter(|e| e.event_type == event_type)
|
.bind(("et", event_type.to_string()))
|
||||||
.cloned()
|
.await
|
||||||
.collect()
|
.context("querying audit entries by event type")?;
|
||||||
|
|
||||||
|
let raw: Vec<serde_json::Value> = response.take(0).context("taking audit query")?;
|
||||||
|
Ok(raw
|
||||||
|
.into_iter()
|
||||||
|
.filter_map(|v| serde_json::from_value(v).ok())
|
||||||
|
.collect())
|
||||||
}
|
}
|

    /// Fetch entries by actor, ordered by seq ascending.
    pub async fn get_by_actor(&self, actor: &str) -> Result<Vec<AuditEntry>> {
        let mut response = self
            .db
            .query("SELECT * FROM audit_entries WHERE actor = $actor ORDER BY seq ASC")
            .bind(("actor", actor.to_string()))
            .await
            .context("querying audit entries by actor")?;

        let raw: Vec<serde_json::Value> = response.take(0).context("taking audit query")?;
        Ok(raw
            .into_iter()
            .filter_map(|v| serde_json::from_value(v).ok())
            .collect())
    }

    /// Walk the chain from genesis, recomputing each block hash.
    ///
    /// Returns `valid = false` and `first_tampered_seq` if any entry's stored
    /// hash does not match the recomputed hash, or if `prev_hash` does not
    /// match the previous entry's `block_hash`.
    pub async fn verify_integrity(&self) -> Result<IntegrityReport> {
        let entries = self.get_all_entries().await?;
        let total_entries = entries.len();
        let mut expected_prev = GENESIS_HASH.to_string();
        let mut first_tampered_seq: Option<i64> = None;

        for entry in &entries {
            if entry.prev_hash != expected_prev {
                warn!(
                    seq = entry.seq,
                    stored_prev = %entry.prev_hash,
                    expected_prev = %expected_prev,
                    "Audit chain break: prev_hash mismatch"
                );
                first_tampered_seq = Some(entry.seq);
                break;
            }

            let expected_hash = compute_block_hash(
                &entry.prev_hash,
                entry.seq,
                &entry.entry_id,
                &entry.timestamp,
                &entry.workflow_id,
                &entry.event_type,
                &entry.actor,
                &entry.details,
            );

            if expected_hash != entry.block_hash {
                warn!(
                    seq = entry.seq,
                    stored_hash = %entry.block_hash,
                    expected_hash = %expected_hash,
                    "Audit chain break: block_hash mismatch"
                );
                first_tampered_seq = Some(entry.seq);
                break;
            }

            expected_prev = entry.block_hash.clone();
        }

        Ok(IntegrityReport {
            valid: first_tampered_seq.is_none(),
            total_entries,
            first_tampered_seq,
        })
    }
}

/// Event type constants for audit trail entries.
#[allow(dead_code)]
pub mod events {
    pub const WORKFLOW_CREATED: &str = "workflow_created";
    pub const WORKFLOW_STARTED: &str = "workflow_started";
    pub const WORKFLOW_COMPLETED: &str = "workflow_completed";
    pub const WORKFLOW_FAILED: &str = "workflow_failed";
    pub const WORKFLOW_ROLLED_BACK: &str = "workflow_rolled_back";
    pub const PHASE_STARTED: &str = "phase_started";
    pub const PHASE_COMPLETED: &str = "phase_completed";
    pub const STEP_STARTED: &str = "step_started";
    pub const STEP_COMPLETED: &str = "step_completed";
    pub const STEP_FAILED: &str = "step_failed";
}

@@ -141,109 +289,257 @@ pub mod events {
mod tests {
    use super::*;

    fn make_details() -> serde_json::Value {
        serde_json::json!({"key": "value", "count": 42})
    }

    fn fixed_ts() -> DateTime<Utc> {
        "2026-02-26T12:00:00Z"
            .parse::<DateTime<Utc>>()
            .expect("valid timestamp")
    }

    #[test]
    fn compute_block_hash_is_deterministic() {
        let ts = fixed_ts();
        let details = make_details();
        let h1 = compute_block_hash(
            GENESIS_HASH,
            0,
            "entry-id-1",
            &ts,
            "wf-1",
            events::WORKFLOW_CREATED,
            "system",
            &details,
        );
        let h2 = compute_block_hash(
            GENESIS_HASH,
            0,
            "entry-id-1",
            &ts,
            "wf-1",
            events::WORKFLOW_CREATED,
            "system",
            &details,
        );
        assert_eq!(h1, h2);
        assert_eq!(h1.len(), 64, "SHA256 hex is always 64 chars");
    }

    #[test]
    fn compute_block_hash_differs_on_prev_hash_change() {
        let ts = fixed_ts();
        let details = make_details();
        let h1 = compute_block_hash(
            GENESIS_HASH,
            0,
            "id",
            &ts,
            "wf-1",
            "created",
            "system",
            &details,
        );
        let h2 = compute_block_hash(
            "deadbeef00000000000000000000000000000000000000000000000000000000",
            0,
            "id",
            &ts,
            "wf-1",
            "created",
            "system",
            &details,
        );
        assert_ne!(h1, h2);
    }

    #[test]
    fn compute_block_hash_differs_on_seq_change() {
        let ts = fixed_ts();
        let details = make_details();
        let h1 = compute_block_hash(GENESIS_HASH, 0, "id", &ts, "wf", "created", "a", &details);
        let h2 = compute_block_hash(GENESIS_HASH, 1, "id", &ts, "wf", "created", "a", &details);
        assert_ne!(h1, h2);
    }

    #[test]
    fn compute_block_hash_differs_on_field_change() {
        let ts = fixed_ts();
        let details = make_details();
        let base = compute_block_hash(
            GENESIS_HASH,
            0,
            "id",
            &ts,
            "wf",
            "created",
            "alice",
            &details,
        );

        let changed_actor =
            compute_block_hash(GENESIS_HASH, 0, "id", &ts, "wf", "created", "bob", &details);
        assert_ne!(base, changed_actor);

        let changed_wf = compute_block_hash(
            GENESIS_HASH,
            0,
            "id",
            &ts,
            "wf-other",
            "created",
            "alice",
            &details,
        );
        assert_ne!(base, changed_wf);

        let changed_details = compute_block_hash(
            GENESIS_HASH,
            0,
            "id",
            &ts,
            "wf",
            "created",
            "alice",
            &serde_json::json!({}),
        );
        assert_ne!(base, changed_details);
    }

    // Integration tests below require a running SurrealDB instance.
    // Run with: cargo test -p vapora-backend -- --ignored

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn log_event_persists_and_chains() {
        let db =
            surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
                .await
                .expect("connect to SurrealDB");
        db.signin(surrealdb::opt::auth::Root {
            username: "root".to_string(),
            password: "root".to_string(),
        })
        .await
        .expect("signin");
        db.use_ns("vapora_test")
            .use_db("audit_test")
            .await
            .expect("use ns/db");

        let trail = AuditTrail::new(db);

        let e1 = trail
            .log_event(
                "wf-1".into(),
                events::WORKFLOW_CREATED.into(),
                "system".into(),
                serde_json::json!({"title": "Test"}),
            )
            .await
            .expect("log_event e1");

        let e2 = trail
            .log_event(
                "wf-1".into(),
                events::WORKFLOW_STARTED.into(),
                "system".into(),
                serde_json::json!({}),
            )
            .await
            .expect("log_event e2");

        assert_eq!(e1.seq, 0);
        assert_eq!(e1.prev_hash, GENESIS_HASH);
        assert_eq!(e2.seq, 1);
        assert_eq!(e2.prev_hash, e1.block_hash);
    }

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn verify_integrity_valid_chain() {
        let db =
            surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
                .await
                .expect("connect");
        db.signin(surrealdb::opt::auth::Root {
            username: "root".to_string(),
            password: "root".to_string(),
        })
        .await
        .expect("signin");
        db.use_ns("vapora_test")
            .use_db("audit_verify_test")
            .await
            .expect("use ns/db");

        let trail = AuditTrail::new(db);
        trail
            .log_event(
                "wf".into(),
                "created".into(),
                "alice".into(),
                serde_json::json!({}),
            )
            .await
            .expect("e1");
        trail
            .log_event(
                "wf".into(),
                "started".into(),
                "alice".into(),
                serde_json::json!({}),
            )
            .await
            .expect("e2");

        let report = trail.verify_integrity().await.expect("verify");
        assert!(report.valid);
        assert_eq!(report.total_entries, 2);
        assert!(report.first_tampered_seq.is_none());
    }

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn get_workflow_audit_filters_by_workflow() {
        let db =
            surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
                .await
                .expect("connect");
        db.signin(surrealdb::opt::auth::Root {
            username: "root".to_string(),
            password: "root".to_string(),
        })
        .await
        .expect("signin");
        db.use_ns("vapora_test")
            .use_db("audit_filter_test")
            .await
            .expect("use ns/db");

        let trail = AuditTrail::new(db);
        trail
            .log_event(
                "wf-A".into(),
                "created".into(),
                "system".into(),
                serde_json::json!({}),
            )
            .await
            .expect("wf-A");
        trail
            .log_event(
                "wf-B".into(),
                "created".into(),
                "system".into(),
                serde_json::json!({}),
            )
            .await
            .expect("wf-B");

        let entries = trail.get_workflow_audit("wf-A").await.expect("query");
        assert_eq!(entries.len(), 1);
        assert_eq!(entries[0].workflow_id, "wf-A");
    }
}

@@ -18,16 +18,20 @@ use axum::{
};
use clap::Parser;
use tower_http::cors::{Any, CorsLayer};
use tracing::{info, warn, Level};
use vapora_channels::{ChannelConfig, ChannelRegistry};
use vapora_swarm::{SwarmCoordinator, SwarmMetrics};
use vapora_workflow_engine::{ScheduleStore, WorkflowOrchestrator};

use crate::api::websocket::WorkflowBroadcaster;
use crate::api::AppState;
use crate::audit::AuditTrail;
use crate::config::Config;
use crate::services::{
    AgentService, ProjectService, ProposalService, ProviderAnalyticsService, TaskService,
    WorkflowService,
};
use crate::workflow::{executor::StepExecutor, WorkflowEngine};

#[derive(Parser, Debug)]
#[command(
@@ -204,6 +208,64 @@ async fn main() -> Result<()> {
    let swarm_coordinator = Arc::new(swarm_coordinator);
    info!("SwarmCoordinator initialized for Phase 5.2");

    // Connect to NATS and initialize WorkflowOrchestrator for schedule firing.
    // Both steps are non-fatal: if NATS is unavailable or the workflow config
    // file is missing, the orchestrator stays None and /schedules/*/fire returns
    // 503 instead of panicking.
    match async_nats::connect(&config.nats.url).await {
        Ok(nats_client) => {
            info!("Connected to NATS at {}", config.nats.url);
            let nats = Arc::new(nats_client);
            match WorkflowOrchestrator::new(
                "config/workflows.toml",
                swarm_coordinator.clone(),
                kg_persistence.clone(),
                nats,
                db.clone(),
            )
            .await
            {
                Ok(orchestrator) => {
                    app_state = app_state.with_workflow_orchestrator(Arc::new(orchestrator));
                    info!("WorkflowOrchestrator initialized; /schedules/*/fire operational");
                }
                Err(e) => {
                    warn!(
                        "WorkflowOrchestrator init failed: {e}; /schedules/*/fire will return 503"
                    );
                }
            }
        }
        Err(e) => {
            warn!(
                "NATS connect failed ({}): {e}; workflow orchestrator disabled",
                config.nats.url
            );
        }
    }

    // Initialize WorkflowService (backend workflow management: CRUD, execute,
    // audit trail). Non-fatal: if agent coordinator init fails,
    // /api/v1/workflows/* returns 503.
    {
        let registry = Arc::new(vapora_agents::AgentRegistry::new(10));
        let agent_config = vapora_agents::AgentConfig::default();
        match vapora_agents::coordinator::AgentCoordinator::new(agent_config, registry).await {
            Ok(coordinator) => {
                let executor = StepExecutor::new(Arc::new(coordinator));
                let engine = Arc::new(WorkflowEngine::new(executor));
                let broadcaster = Arc::new(WorkflowBroadcaster::new());
                let audit = Arc::new(AuditTrail::new(db.clone()));
                let svc = Arc::new(WorkflowService::new(engine, broadcaster, audit));
                app_state = app_state.with_workflow_service(svc);
                info!("WorkflowService initialized; /api/v1/workflows/* operational");
            }
            Err(e) => {
                warn!("WorkflowService init failed: {e}; /api/v1/workflows/* will return 503");
            }
        }
    }

    // Initialize analytics metrics (Phase 6)
    api::analytics_metrics::register_analytics_metrics();
    info!("Analytics metrics registered for Prometheus");
@@ -422,6 +484,8 @@ async fn main() -> Result<()> {
            "/api/v1/schedules/:id/fire",
            post(api::schedules::fire_schedule),
        )
        // Workflow management endpoints
        .nest("/api/v1/workflows", api::workflows::workflow_routes())
        // Apply CORS, state, and extensions
        .layer(Extension(swarm_coordinator))
        .layer(cors)

@ -4,7 +4,7 @@
|
|||||||
use std::sync::Arc;
|
use std::sync::Arc;
|
||||||
|
|
||||||
use thiserror::Error;
|
use thiserror::Error;
|
||||||
use tracing::{error, info};
|
use tracing::{error, info, warn};
|
||||||
|
|
||||||
use crate::api::websocket::{WorkflowBroadcaster, WorkflowUpdate};
|
use crate::api::websocket::{WorkflowBroadcaster, WorkflowUpdate};
|
||||||
use crate::audit::{events, AuditEntry, AuditTrail};
|
use crate::audit::{events, AuditEntry, AuditTrail};
|
||||||
@ -57,7 +57,8 @@ impl WorkflowService {
|
|||||||
self.engine.register_workflow(workflow.clone()).await?;
|
self.engine.register_workflow(workflow.clone()).await?;
|
||||||
|
|
||||||
// Audit event
|
// Audit event
|
||||||
self.audit
|
if let Err(e) = self
|
||||||
|
.audit
|
||||||
.log_event(
|
.log_event(
|
||||||
workflow_id.clone(),
|
workflow_id.clone(),
|
||||||
events::WORKFLOW_CREATED.to_string(),
|
events::WORKFLOW_CREATED.to_string(),
|
||||||
@ -67,7 +68,10 @@ impl WorkflowService {
|
|||||||
"phases": workflow.phases.len(),
|
"phases": workflow.phases.len(),
|
||||||
}),
|
}),
|
||||||
)
|
)
|
||||||
.await;
|
.await
|
||||||
|
{
|
||||||
|
warn!(workflow_id = %workflow_id, error = %e, "Failed to log audit event");
|
||||||
|
}
|
||||||
|
|
||||||
// Broadcast update
|
// Broadcast update
|
||||||
self.broadcaster.send_update(WorkflowUpdate::new(
|
self.broadcaster.send_update(WorkflowUpdate::new(
|
@@ -97,14 +101,18 @@ impl WorkflowService {
        ));

        // Audit event
        if let Err(e) = self
            .audit
            .log_event(
                workflow_id.to_string(),
                events::WORKFLOW_STARTED.to_string(),
                "system".to_string(),
                serde_json::json!({}),
            )
            .await
        {
            warn!(workflow_id = %workflow_id, error = %e, "Failed to log audit event");
        }

        // Execute workflow
        let result = self.engine.execute_workflow(workflow_id).await;
@@ -123,7 +131,8 @@ impl WorkflowService {
        ));

        // Audit event
        if let Err(e) = self
            .audit
            .log_event(
                workflow_id.to_string(),
                events::WORKFLOW_COMPLETED.to_string(),
@@ -133,7 +142,10 @@ impl WorkflowService {
                "progress": progress,
            }),
            )
            .await
        {
            warn!(workflow_id = %workflow_id, error = %e, "Failed to log audit event");
        }

        info!("Workflow {} completed with status: {}", workflow_id, status);
        Ok(workflow)
@@ -150,7 +162,8 @@ impl WorkflowService {
        ));

        // Audit event
        if let Err(audit_err) = self
            .audit
            .log_event(
                workflow_id.to_string(),
                events::WORKFLOW_FAILED.to_string(),
@@ -159,7 +172,10 @@ impl WorkflowService {
                "error": error_msg,
            }),
            )
            .await
        {
            warn!(workflow_id = %workflow_id, error = %audit_err, "Failed to log audit event");
        }

        error!("Workflow {} failed: {}", workflow_id, error_msg);
        Err(WorkflowServiceError::from(e))
@@ -195,20 +211,24 @@ impl WorkflowService {
        ));

        // Audit event
        if let Err(e) = self
            .audit
            .log_event(
                workflow_id.to_string(),
                events::WORKFLOW_ROLLED_BACK.to_string(),
                "system".to_string(),
                serde_json::json!({}),
            )
            .await
        {
            warn!(workflow_id = %workflow_id, error = %e, "Failed to log audit event");
        }

        Ok(())
    }

    /// Get audit trail for workflow.
    pub async fn get_audit_trail(&self, workflow_id: &str) -> anyhow::Result<Vec<AuditEntry>> {
        self.audit.get_workflow_audit(workflow_id).await
    }

@@ -258,8 +278,30 @@ mod tests {
        )
    }

    async fn connect_test_db(
        ns: &str,
    ) -> surrealdb::Surreal<surrealdb::engine::remote::ws::Client> {
        let db =
            surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
                .await
                .expect("connect to SurrealDB");
        db.signin(surrealdb::opt::auth::Root {
            username: "root".to_string(),
            password: "root".to_string(),
        })
        .await
        .expect("signin");
        db.use_ns("vapora_test")
            .use_db(ns)
            .await
            .expect("use ns/db");
        db
    }

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn test_service_creation() {
        let db = connect_test_db("wf_svc_creation").await;
        let registry = Arc::new(AgentRegistry::new(5));
        let config = AgentConfig {
            registry: RegistryConfig {
@@ -277,14 +319,16 @@ mod tests {
        let executor = StepExecutor::new(coordinator);
        let engine = Arc::new(WorkflowEngine::new(executor));
        let broadcaster = Arc::new(WorkflowBroadcaster::new());
        let audit = Arc::new(AuditTrail::new(db));

        let service = WorkflowService::new(engine, broadcaster, audit);
        assert!(service.list_workflows().await.is_empty());
    }

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn test_create_workflow() {
        let db = connect_test_db("wf_svc_create").await;
        let registry = Arc::new(AgentRegistry::new(5));
        let config = AgentConfig {
            registry: RegistryConfig {
@@ -302,7 +346,7 @@ mod tests {
        let executor = StepExecutor::new(coordinator);
        let engine = Arc::new(WorkflowEngine::new(executor));
        let broadcaster = Arc::new(WorkflowBroadcaster::new());
        let audit = Arc::new(AuditTrail::new(db));

        let service = WorkflowService::new(engine, broadcaster, audit);

@@ -319,7 +363,9 @@ mod tests {
    }

    #[tokio::test]
    #[ignore = "requires SurrealDB at ws://localhost:8000"]
    async fn test_audit_trail_logging() {
        let db = connect_test_db("wf_svc_audit").await;
        let registry = Arc::new(AgentRegistry::new(5));
        let config = AgentConfig {
            registry: RegistryConfig {
@@ -337,7 +383,7 @@ mod tests {
        let executor = StepExecutor::new(coordinator);
        let engine = Arc::new(WorkflowEngine::new(executor));
        let broadcaster = Arc::new(WorkflowBroadcaster::new());
        let audit = Arc::new(AuditTrail::new(db));

        let service = WorkflowService::new(engine, broadcaster, audit);

@@ -346,7 +392,7 @@ mod tests {

        let _: Result<Workflow, WorkflowServiceError> = service.create_workflow(workflow).await;

        let audit_entries = service.get_audit_trail(&id).await.expect("get_audit_trail");
        assert!(!audit_entries.is_empty());
        assert_eq!(audit_entries[0].event_type, events::WORKFLOW_CREATED);
    }

|||||||
@ -185,7 +185,22 @@ async fn test_workflow_engine() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
|
#[ignore = "requires SurrealDB at ws://localhost:8000"]
|
||||||
async fn test_workflow_service_integration() {
|
async fn test_workflow_service_integration() {
|
||||||
|
let db = surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
|
||||||
|
.await
|
||||||
|
.expect("connect to SurrealDB");
|
||||||
|
db.signin(surrealdb::opt::auth::Root {
|
||||||
|
username: "root".to_string(),
|
||||||
|
password: "root".to_string(),
|
||||||
|
})
|
||||||
|
.await
|
||||||
|
.expect("signin");
|
||||||
|
db.use_ns("vapora_test")
|
||||||
|
.use_db("wf_integration")
|
||||||
|
.await
|
||||||
|
.expect("use ns/db");
|
||||||
|
|
||||||
let registry = Arc::new(AgentRegistry::new(5));
|
let registry = Arc::new(AgentRegistry::new(5));
|
||||||
let config = AgentConfig {
|
let config = AgentConfig {
|
||||||
registry: RegistryConfig {
|
registry: RegistryConfig {
|
||||||
@@ -203,24 +218,10 @@ async fn test_workflow_service_integration() {
     let executor = StepExecutor::new(coordinator);
     let engine = Arc::new(WorkflowEngine::new(executor));
     let broadcaster = Arc::new(WorkflowBroadcaster::new());
-    let audit = Arc::new(AuditTrail::new());
+    let audit = Arc::new(AuditTrail::new(db));

-    let service = WorkflowService::new(engine, broadcaster, audit.clone());
+    let service = WorkflowService::new(engine, broadcaster, audit);

-    let _workflow = Workflow::new(
-        "service-test".to_string(),
-        "Service Test".to_string(),
-        vec![Phase {
-            id: "p1".to_string(),
-            name: "Test Phase".to_string(),
-            status: StepStatus::Pending,
-            parallel: false,
-            estimated_hours: 1.0,
-            steps: vec![],
-        }],
-    );
-
-    // Need at least one step for valid workflow
     let workflow = Workflow::new(
         "service-test".to_string(),
         "Service Test".to_string(),
@@ -249,8 +250,7 @@ async fn test_workflow_service_integration() {
     let result: Result<Workflow, WorkflowServiceError> = service.create_workflow(workflow).await;
     assert!(result.is_ok());

-    // Check audit trail
-    let audit_entries: Vec<_> = service.get_audit_trail(&id).await;
+    let audit_entries = service.get_audit_trail(&id).await.expect("get_audit_trail");
     assert!(!audit_entries.is_empty());
 }

@@ -274,8 +274,23 @@ async fn test_websocket_broadcaster() {
 }

 #[tokio::test]
+#[ignore = "requires SurrealDB at ws://localhost:8000"]
 async fn test_audit_trail() {
-    let audit = AuditTrail::new();
+    let db = surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>("ws://localhost:8000")
+        .await
+        .expect("connect to SurrealDB");
+    db.signin(surrealdb::opt::auth::Root {
+        username: "root".to_string(),
+        password: "root".to_string(),
+    })
+    .await
+    .expect("signin");
+    db.use_ns("vapora_test")
+        .use_db("wf_audit_trail")
+        .await
+        .expect("use ns/db");
+
+    let audit = AuditTrail::new(db);

     audit
         .log_event(

@@ -284,9 +299,13 @@ async fn test_audit_trail() {
             "system".to_string(),
             serde_json::json!({"test": "data"}),
         )
-        .await;
+        .await
+        .expect("log_event");

-    let entries = audit.get_workflow_audit("wf-1").await;
+    let entries = audit
+        .get_workflow_audit("wf-1")
+        .await
+        .expect("get_workflow_audit");
     assert_eq!(entries.len(), 1);
     assert_eq!(entries[0].event_type, "workflow_started");
 }
@@ -9,7 +9,7 @@ rust-version.workspace = true
 description = "VAPORA adapter for documentation lifecycle management"

 [dependencies]
-doc-lifecycle-core = { path = "../doc-lifecycle-core" }
+doc-lifecycle-core = { path = "../../../../Tools/doc-lifecycle-manager/crates/doc-lifecycle-core" }
 tokio = { workspace = true }
 serde = { workspace = true }
 serde_json = { workspace = true }
@@ -1,8 +1,9 @@
 //! Configuration for doc-lifecycle VAPORA adapter

-use serde::{Deserialize, Serialize};
 use std::path::PathBuf;
+
+use serde::{Deserialize, Serialize};

 /// Plugin configuration
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct PluginConfig {
@@ -1,5 +1,7 @@
 //! Documenter agent integration with doc-lifecycle

+use std::sync::Arc;
+
 use crate::config::PluginConfig;
 use crate::plugin::DocLifecyclePlugin;
 use crate::Result;

@@ -9,6 +11,7 @@ use crate::Result;
 pub struct DocumenterIntegration {
     plugin: DocLifecyclePlugin,
     config: PluginConfig,
+    nats: Option<Arc<async_nats::Client>>,
 }

 impl DocumenterIntegration {

@@ -16,7 +19,17 @@ impl DocumenterIntegration {
     pub fn new(config: PluginConfig) -> Result<Self> {
         let plugin = DocLifecyclePlugin::new(config.clone())?;

-        Ok(Self { plugin, config })
+        Ok(Self {
+            plugin,
+            config,
+            nats: None,
+        })
+    }
+
+    /// Attach a NATS client for publishing doc-updated events.
+    pub fn with_nats(mut self, client: Arc<async_nats::Client>) -> Self {
+        self.nats = Some(client);
+        self
     }

     /// Handle task completion event
@@ -26,25 +39,58 @@ impl DocumenterIntegration {
             task_id
         );

-        // 1. Process documentation
         self.plugin.process_task_docs(task_id).await?;

-        // 2. Update root files (README, CHANGELOG, ROADMAP)
         self.update_root_files(task_id).await?;

-        // 3. Publish event for other agents
         self.publish_docs_updated_event(task_id).await?;

         Ok(())
     }

-    async fn update_root_files(&self, _task_id: &str) -> Result<()> {
-        // TODO: Update README, CHANGELOG, ROADMAP
+    async fn update_root_files(&self, task_id: &str) -> Result<()> {
+        use std::io::Write;
+
+        let changes_path = self.config.docs_root.join("CHANGES.md");
+        let timestamp = chrono::Utc::now().format("%Y-%m-%dT%H:%M:%SZ");
+        let line = format!("- {timestamp} task `{task_id}` docs processed\n");
+
+        let mut file = std::fs::OpenOptions::new()
+            .create(true)
+            .append(true)
+            .open(&changes_path)?;
+
+        file.write_all(line.as_bytes())?;
+        tracing::info!(
+            task_id = %task_id,
+            path = %changes_path.display(),
+            "CHANGES.md updated"
+        );
         Ok(())
     }

-    async fn publish_docs_updated_event(&self, _task_id: &str) -> Result<()> {
-        // TODO: Publish to NATS
+    async fn publish_docs_updated_event(&self, task_id: &str) -> Result<()> {
+        let Some(nats) = &self.nats else {
+            tracing::debug!(task_id = %task_id, "NATS not configured; skipping docs-updated event");
+            return Ok(());
+        };
+
+        let payload = serde_json::json!({
+            "event": "docs.updated",
+            "task_id": task_id,
+            "timestamp": chrono::Utc::now().to_rfc3339(),
+        });
+
+        let bytes = serde_json::to_vec(&payload)
+            .map_err(|e| crate::Error::NatsError(format!("Serialization failed: {e}")))?;
+
+        nats.publish(self.config.nats_subject.clone(), bytes.into())
+            .await
+            .map_err(|e| crate::Error::NatsError(format!("Publish failed: {e}")))?;
+
+        tracing::info!(
+            task_id = %task_id,
+            subject = %self.config.nats_subject,
+            "docs-updated event published"
+        );
         Ok(())
     }

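A note on `update_root_files`: the append reduces to "format a timestamped line, open `CHANGES.md` in append mode, write". A minimal std-only sketch (hypothetical helper names; the timestamp is passed in rather than taken from `chrono`, for determinism):

```rust
use std::io::Write;

// Mirrors the diff's format string: "- <timestamp> task `<task_id>` docs processed".
fn changes_line(timestamp: &str, task_id: &str) -> String {
    format!("- {timestamp} task `{task_id}` docs processed\n")
}

// Append one line to CHANGES.md under `docs_root`, creating the file if absent.
fn append_changes(docs_root: &std::path::Path, line: &str) -> std::io::Result<()> {
    let mut file = std::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open(docs_root.join("CHANGES.md"))?;
    file.write_all(line.as_bytes())
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    append_changes(&dir, &changes_line("2026-02-26T00:00:00Z", "task-42"))?;
    let content = std::fs::read_to_string(dir.join("CHANGES.md"))?;
    assert!(content.contains("task `task-42` docs processed"));
    Ok(())
}
```

Because the file is opened with `append`, repeated runs accumulate lines rather than overwriting, which matches the changelog-style intent of the real method.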
@@ -39,3 +39,9 @@ impl From<doc_lifecycle_core::Error> for Error {
         Self::Core(err)
     }
 }
+
+impl From<std::io::Error> for Error {
+    fn from(err: std::io::Error) -> Self {
+        Self::Other(Box::new(err))
+    }
+}
@@ -12,17 +12,17 @@
 #![warn(missing_docs)]
 #![warn(missing_debug_implementations)]

-pub mod plugin;
-pub mod documenter;
 pub mod config;
+pub mod documenter;
 pub mod error;
+pub mod plugin;

 pub use error::{Error, Result};

 /// Re-export commonly used types
 pub mod prelude {
-    pub use crate::plugin::DocLifecyclePlugin;
-    pub use crate::documenter::DocumenterIntegration;
     pub use crate::config::PluginConfig;
+    pub use crate::documenter::DocumenterIntegration;
+    pub use crate::plugin::DocLifecyclePlugin;
     pub use crate::{Error, Result};
 }
@@ -1,9 +1,12 @@
 //! Doc-lifecycle plugin for VAPORA

+use std::path::{Path, PathBuf};
+
+use doc_lifecycle_core::prelude::*;
+use doc_lifecycle_core::rag_indexer::DocumentChunk;
+
 use crate::config::PluginConfig;
 use crate::Result;
-use doc_lifecycle_core::prelude::*;
-use std::path::PathBuf;

 /// Main plugin interface for doc-lifecycle integration
 #[derive(Debug)]

@@ -41,22 +44,18 @@ impl DocLifecyclePlugin {
     pub async fn process_task_docs(&mut self, task_id: &str) -> Result<()> {
         tracing::info!("Processing task docs for task {}", task_id);

-        // 1. Classify documents
         if self.config.auto_classify {
             self.classify_session_docs(task_id).await?;
         }

-        // 2. Consolidate duplicates
         if self.config.auto_consolidate {
             self.consolidate_docs().await?;
         }

-        // 3. Generate RAG index
         if self.config.generate_rag_index {
             self.update_rag_index().await?;
         }

-        // 4. Generate mdBook
         if self.config.generate_mdbook {
             self.generate_mdbook().await?;
         }
@@ -64,18 +63,86 @@ impl DocLifecyclePlugin {
         Ok(())
     }

-    async fn classify_session_docs(&self, _task_id: &str) -> Result<()> {
-        // TODO: Implement session doc classification
+    async fn classify_session_docs(&self, task_id: &str) -> Result<()> {
+        let session_dir = PathBuf::from(".coder");
+        if !session_dir.exists() {
+            tracing::debug!(task_id = %task_id, "No .coder/ directory; skipping classification");
+            return Ok(());
+        }
+
+        let docs = collect_md_docs(&session_dir).await?;
+        for (path, content) in &docs {
+            match self.classifier.classify(path, Some(content)) {
+                Ok(result) => {
+                    tracing::info!(
+                        task_id = %task_id,
+                        path = %path.display(),
+                        doc_type = %result.doc_type,
+                        confidence = result.confidence,
+                        "Session doc classified"
+                    );
+                }
+                Err(e) => {
+                    tracing::warn!(
+                        task_id = %task_id,
+                        path = %path.display(),
+                        error = %e,
+                        "Failed to classify session doc"
+                    );
+                }
+            }
+        }
         Ok(())
     }

     async fn consolidate_docs(&self) -> Result<()> {
-        // TODO: Implement consolidation
+        let docs = collect_md_docs(&self.config.docs_root).await?;
+        if docs.is_empty() {
+            return Ok(());
+        }
+
+        let matches = self.consolidator.find_duplicates(&docs)?;
+        for m in &matches {
+            tracing::warn!(
+                path1 = %m.path1.display(),
+                path2 = %m.path2.display(),
+                score = m.score,
+                "Possible duplicate docs detected; manual review recommended"
+            );
+        }
+        tracing::info!(
+            duplicates = matches.len(),
+            docs_scanned = docs.len(),
+            "Consolidation complete"
+        );
         Ok(())
     }

     async fn update_rag_index(&self) -> Result<()> {
-        // TODO: Implement RAG index update
+        let docs = collect_md_docs(&self.config.docs_root).await?;
+        if docs.is_empty() {
+            return Ok(());
+        }
+
+        let mut all_chunks = Vec::new();
+        for (path, content) in &docs {
+            let chunks = self.rag_indexer.chunk_document(path.clone(), content)?;
+            all_chunks.extend(chunks);
+        }
+
+        let embeddings = self.rag_indexer.generate_embeddings(&all_chunks).await?;
+
+        let indexed: Vec<DocumentChunk> = all_chunks
+            .into_iter()
+            .zip(embeddings)
+            .map(|(mut chunk, emb)| {
+                chunk.metadata.embedding = Some(emb);
+                chunk
+            })
+            .collect();
+
+        self.rag_indexer.build_index(&indexed)?;
+        tracing::info!(chunks = indexed.len(), "RAG index updated");
         Ok(())
     }

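One hazard worth flagging in `update_rag_index`: `Iterator::zip` stops at the shorter side, so a chunk/embedding count mismatch would silently drop chunks. A sketch of the pairing with an explicit length check (hypothetical helper, not the crate's API):

```rust
// Pair each chunk with its embedding by position. An assert makes a count
// mismatch loud instead of silently truncating the index.
fn attach_embeddings<T>(chunks: Vec<T>, embeddings: Vec<Vec<f32>>) -> Vec<(T, Vec<f32>)> {
    assert_eq!(chunks.len(), embeddings.len(), "one embedding per chunk expected");
    chunks.into_iter().zip(embeddings).collect()
}

fn main() {
    let chunks = vec!["intro", "body"];
    let embeddings = vec![vec![0.1, 0.2], vec![0.3, 0.4]];
    let indexed = attach_embeddings(chunks, embeddings);
    assert_eq!(indexed.len(), 2);
    assert_eq!(indexed[0].0, "intro");
}
```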
@@ -86,3 +153,31 @@ impl DocLifecyclePlugin {
         Ok(())
     }
 }
+
+/// Recursively collect all `.md` files under `root` and return their content.
+async fn collect_md_docs(root: &Path) -> Result<Vec<(PathBuf, String)>> {
+    let mut result = Vec::new();
+    if !root.exists() {
+        return Ok(result);
+    }
+
+    let mut stack: Vec<PathBuf> = vec![root.to_path_buf()];
+    while let Some(dir) = stack.pop() {
+        let mut entries = tokio::fs::read_dir(&dir).await?;
+        while let Some(entry) = entries.next_entry().await? {
+            let path = entry.path();
+            let file_type = entry.file_type().await?;
+            if file_type.is_dir() {
+                stack.push(path);
+            } else if path.extension().and_then(|e| e.to_str()) == Some("md") {
+                match tokio::fs::read_to_string(&path).await {
+                    Ok(content) => result.push((path, content)),
+                    Err(e) => {
+                        tracing::warn!(path = %path.display(), error = %e, "Failed to read doc file");
+                    }
+                }
+            }
+        }
+    }
+    Ok(result)
+}
@@ -333,6 +333,10 @@ impl LLMRouter {
             .await?;
         let provider = self.get_provider(&provider_name)?;

+        // Clone before the primary call so fallback can reuse them if needed.
+        let prompt_copy = prompt.clone();
+        let context_copy = context.clone();
+
         match provider.complete(prompt, context).await {
             Ok(response) => {
                 // Track cost

@@ -348,7 +352,7 @@ impl LLMRouter {
                 );

                 // Record spend with budget manager if available
-                self.record_budget_spend(agent_role, cost as u32).await;
+                self.record_budget_spend(agent_role, cost).await;
             }

             Ok(response)
@@ -359,7 +363,13 @@ impl LLMRouter {
             // Try fallback if enabled
             if self.config.routing.fallback_enabled {
                 return self
-                    .try_fallback_with_budget(task_type, &provider_name, agent_role)
+                    .try_fallback_with_budget(
+                        task_type,
+                        &provider_name,
+                        prompt_copy,
+                        context_copy,
+                        agent_role,
+                    )
                     .await;
             }

@@ -368,14 +378,16 @@ impl LLMRouter {
         }
     }

-    /// Try fallback providers with budget tracking
+    /// Try fallback providers with budget tracking, retrying the original
+    /// prompt.
    async fn try_fallback_with_budget(
        &self,
        task_type: &str,
        failed_provider: &str,
-        _agent_role: Option<&str>,
+        prompt: String,
+        context: Option<String>,
+        agent_role: Option<&str>,
    ) -> Result<CompletionResponse, RouterError> {
-        // Build fallback chain excluding failed provider
        let fallback_chain: Vec<String> = self
            .providers
            .iter()
@@ -388,19 +400,45 @@ impl LLMRouter {
         }

         warn!(
-            "Primary provider {} failed for {}, trying fallback chain",
-            failed_provider, task_type
+            "Primary provider {} failed for {}, trying {} fallback provider(s)",
+            failed_provider,
+            task_type,
+            fallback_chain.len()
         );

-        // Try each fallback provider (placeholder implementation)
-        // In production, you would retry the original prompt with each fallback
-        // provider. For now, we log which providers would be tried and return
-        // error.
         for provider_name in fallback_chain {
-            warn!("Trying fallback provider: {}", provider_name);
-            // Actual retry logic would go here with cost tracking.
-            // For this phase, we return the error as fallbacks are handled at
-            // routing level.
+            let provider = match self.providers.get(&provider_name) {
+                Some(p) => p,
+                None => continue,
+            };
+
+            match provider.complete(prompt.clone(), context.clone()).await {
+                Ok(response) => {
+                    if self.config.routing.cost_tracking_enabled {
+                        let cost =
+                            provider.calculate_cost(response.input_tokens, response.output_tokens);
+                        self.cost_tracker.log_usage(
+                            &provider_name,
+                            task_type,
+                            response.input_tokens,
+                            response.output_tokens,
+                            cost,
+                        );
+                        self.record_budget_spend(agent_role, cost).await;
+                    }
+                    info!(
+                        "Fallback provider {} succeeded for {}",
+                        provider_name, task_type
+                    );
+                    return Ok(response);
+                }
+                Err(e) => {
+                    warn!(
+                        "Fallback provider {} failed for {}: {}",
+                        provider_name, task_type, e
+                    );
+                }
+            }
         }

         Err(RouterError::AllProvidersFailed)
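Reduced to its essentials, the new fallback loop is "try each remaining provider with the same input, return the first success". A std-only sketch with illustrative names (not the vapora-llm-router API):

```rust
// Walk a chain of providers, retrying the same prompt until one succeeds.
fn try_chain<T, E>(
    providers: &[&dyn Fn(&str) -> Result<T, E>],
    prompt: &str,
) -> Option<T> {
    for p in providers {
        if let Ok(v) = p(prompt) {
            return Some(v); // first success wins; later providers never run
        }
    }
    None // all providers failed (maps to RouterError::AllProvidersFailed)
}

fn main() {
    let fail = |_: &str| -> Result<String, ()> { Err(()) };
    let ok = |p: &str| -> Result<String, ()> { Ok(format!("echo: {p}")) };
    let chain: Vec<&dyn Fn(&str) -> Result<String, ()>> = vec![&fail, &ok];
    assert_eq!(try_chain(&chain, "hi").as_deref(), Some("echo: hi"));
}
```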
@@ -41,7 +41,12 @@ pub use tracking_core::*;

 /// VAPORA-specific plugin integration
 pub mod plugin {
-    use tracking_core::TrackingDb;
+    use std::collections::HashMap;
+    use std::path::PathBuf;
+
+    use chrono::Utc;
+    use tracking_core::{EntryType, Impact, TrackingDb, TrackingEntry, TrackingSource};
+    use uuid::Uuid;

     /// Plugin for integrating tracking with VAPORA agents
     #[derive(Debug, Clone)]
@@ -58,17 +63,52 @@ pub mod plugin {
         })
     }

-    /// Called when a task is completed
+    /// Records task completion as a backend change entry in the tracking
+    /// DB.
     pub async fn on_task_completed(&self, task_id: &str) -> tracking_core::Result<()> {
-        tracing::info!("Task completed: {}", task_id);
-        // TODO: Implement task completion tracking
+        let entry = TrackingEntry {
+            id: Uuid::new_v4(),
+            project_path: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
+            source: TrackingSource::WorkflowYaml,
+            entry_type: EntryType::Change {
+                impact: Impact::Backend,
+                breaking: false,
+                files_affected: 0,
+            },
+            timestamp: Utc::now(),
+            summary: format!("Task completed: {task_id}"),
+            details_link: None,
+            metadata: HashMap::new(),
+        };
+        self.db.insert_entry(&entry).await?;
+        tracing::info!(task_id = %task_id, "Task completion tracked");
         Ok(())
     }

-    /// Called when a document is created
+    /// Records document creation as a docs change entry in the tracking DB.
     pub async fn on_document_created(&self, doc_path: &str) -> tracking_core::Result<()> {
-        tracing::info!("Document created: {}", doc_path);
-        // TODO: Implement document tracking
+        let path = std::path::Path::new(doc_path);
+        let project_path = path
+            .parent()
+            .map(|p| p.to_path_buf())
+            .unwrap_or_else(|| PathBuf::from("."));
+
+        let entry = TrackingEntry {
+            id: Uuid::new_v4(),
+            project_path,
+            source: TrackingSource::CoderChanges,
+            entry_type: EntryType::Change {
+                impact: Impact::Docs,
+                breaking: false,
+                files_affected: 1,
+            },
+            timestamp: Utc::now(),
+            summary: format!("Document created: {doc_path}"),
+            details_link: Some(PathBuf::from(doc_path)),
+            metadata: HashMap::new(),
+        };
+        self.db.insert_entry(&entry).await?;
+        tracing::info!(doc_path = %doc_path, "Document creation tracked");
         Ok(())
     }
 }
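Both hooks above reduce to "build a `TrackingEntry`, insert it". A std-only sketch with local stand-in types (the real shapes live in `tracking_core` and carry additional fields such as `id`, `source`, and `timestamp`):

```rust
use std::collections::HashMap;

// Local stand-ins mirroring the tracking_core shapes used in the diff.
#[derive(Debug, PartialEq)]
enum Impact {
    Backend,
    Docs,
}

#[derive(Debug)]
struct Entry {
    summary: String,
    impact: Impact,
    breaking: bool,
    files_affected: u32,
    metadata: HashMap<String, String>,
}

// Build the entry that on_task_completed would persist.
fn task_completed_entry(task_id: &str) -> Entry {
    Entry {
        summary: format!("Task completed: {task_id}"),
        impact: Impact::Backend,
        breaking: false,
        files_affected: 0,
        metadata: HashMap::new(),
    }
}

fn main() {
    let e = task_completed_entry("task-7");
    assert_eq!(e.summary, "Task completed: task-7");
    assert_eq!(e.impact, Impact::Backend);
    assert!(!e.breaking);
    let _ = Impact::Docs; // the document-created hook uses this variant
}
```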
@@ -77,16 +117,67 @@ pub mod plugin {
 /// NATS event streaming integration (optional)
 #[cfg(feature = "async-nats")]
 pub mod events {
+    use std::sync::Arc;
+
     use crate::TrackingEntry;

-    /// Event published when a tracking entry is created
-    #[derive(Debug, Clone)]
-    pub struct TrackingEntryCreatedEvent {
-        /// The entry that was created
-        pub entry: TrackingEntry,
+    /// Publisher that streams tracking events to NATS subjects.
+    ///
+    /// Subject format: `vapora.tracking.<source>` where `<source>` is the
+    /// serialized [`TrackingSource`](crate::TrackingSource) variant name.
+    #[derive(Clone)]
+    pub struct NatsPublisher {
+        client: Arc<async_nats::Client>,
+        subject_prefix: String,
     }

-    // TODO: Implement NATS publisher
+    impl NatsPublisher {
+        /// Create a new publisher.
+        ///
+        /// `subject_prefix` is prepended to the source name, e.g.
+        /// `"vapora.tracking"` produces `"vapora.tracking.CoderChanges"`.
+        pub fn new(client: Arc<async_nats::Client>, subject_prefix: impl Into<String>) -> Self {
+            Self {
+                client,
+                subject_prefix: subject_prefix.into(),
+            }
+        }
+
+        /// Publish a `tracking.entry.created` event for `entry`.
+        ///
+        /// The payload is JSON-encoded. Errors are returned to the caller so
+        /// they can decide whether to retry or log-and-continue.
+        pub async fn publish_entry_created(
+            &self,
+            entry: &TrackingEntry,
+        ) -> tracking_core::Result<()> {
+            let subject = format!("{}.{:?}", self.subject_prefix, entry.source);
+            let payload = serde_json::to_vec(entry).map_err(|e| {
+                tracking_core::TrackingError::StorageError(format!(
+                    "Failed to serialize entry: {e}"
+                ))
+            })?;
+
+            self.client
+                .publish(subject.clone(), payload.into())
+                .await
+                .map_err(|e| {
+                    tracking_core::TrackingError::StorageError(format!(
+                        "NATS publish to {subject} failed: {e}"
+                    ))
+                })?;
+
+            Ok(())
+        }
+    }
+
+    impl std::fmt::Debug for NatsPublisher {
+        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+            f.debug_struct("NatsPublisher")
+                .field("subject_prefix", &self.subject_prefix)
+                .finish_non_exhaustive()
+        }
+    }
 }

 pub mod prelude {
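The subject construction in `publish_entry_created` leans on `Debug` formatting of the source enum, so the variant name itself becomes the subject suffix. A std-only sketch with a local stand-in for `tracking_core::TrackingSource`:

```rust
// Stand-in enum; Debug formatting of a unit variant prints its name.
#[derive(Debug)]
enum TrackingSource {
    WorkflowYaml,
    CoderChanges,
}

// "vapora.tracking" + CoderChanges -> "vapora.tracking.CoderChanges"
fn subject(prefix: &str, source: &TrackingSource) -> String {
    format!("{prefix}.{source:?}")
}

fn main() {
    assert_eq!(
        subject("vapora.tracking", &TrackingSource::CoderChanges),
        "vapora.tracking.CoderChanges"
    );
    assert_eq!(
        subject("vapora.tracking", &TrackingSource::WorkflowYaml),
        "vapora.tracking.WorkflowYaml"
    );
}
```

One design note: because the suffix comes from `Debug`, renaming an enum variant silently changes the NATS subject, so subscribers should treat variant names as part of the public contract.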
docs/adrs/0039-merkle-audit-trail.md (new file, 132 lines)
@@ -0,0 +1,132 @@
# ADR-0039: Tamper-Evident Audit Trail — Merkle Hash Chain

**Status**: Implemented
**Date**: 2026-02-26
**Deciders**: VAPORA Team
**Technical Story**: Competitive analysis against enterprise orchestration platforms (OpenFang included) revealed that VAPORA's `audit.rs` was a simple append-only log: any direct database modification (an unauthorized `UPDATE audit_entries ...`) was undetectable. Enterprise compliance frameworks (SOC 2, ISO 27001, HIPAA) require tamper-evident logs where post-hoc modification is provably detectable.

---

## Decision

Replace the append-only audit log in `vapora-backend/src/audit/mod.rs` with a Merkle hash chain in which each entry cryptographically commits to every entry before it.

---

## Context

### Why Append-Only Is Insufficient

An append-only log prevents deletion (assuming no `DELETE` privilege) but does not prevent modification. An attacker with write access to `audit_entries` can silently rewrite the `event_type`, `actor`, or `details` fields of any existing row without leaving any trace detectable by the application.

The previous implementation stored `seq`, `entry_id`, `timestamp`, `workflow_id`, `event_type`, `actor`, and `details` — but no integrity metadata. Any row could be updated without detection.

### Merkle Hash Chain Model

Each audit entry stores two additional fields:

- `prev_hash` — the `block_hash` of the immediately preceding entry (the genesis entry uses `GENESIS_HASH = "00...00"`, 64 zeros)
- `block_hash` — SHA-256 of the concatenation `prev_hash|seq|entry_id|timestamp_rfc3339|workflow_id|event_type|actor|details_json`

Modifying *any* covered field of entry N invalidates the `block_hash` of entry N, which causes `prev_hash` in entry N+1 to mismatch its predecessor's hash, propagating the invalidation through the entire suffix of the chain.
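The propagation property can be seen in a few lines. This toy sketch uses `DefaultHasher` as a non-cryptographic stand-in for SHA-256, purely to show that editing one entry breaks every later link:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash: NOT cryptographic, stands in for SHA-256 in this demonstration.
fn h(preimage: &str) -> String {
    let mut hasher = DefaultHasher::new();
    preimage.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

const GENESIS: &str = "0000000000000000";

// Simplified preimage: prev_hash|seq|event (the real chain covers all fields).
fn block_hash(prev: &str, seq: i64, event: &str) -> String {
    h(&format!("{prev}|{seq}|{event}"))
}

fn main() {
    let h1 = block_hash(GENESIS, 1, "workflow_started");
    let h2 = block_hash(&h1, 2, "step_completed");
    let h3 = block_hash(&h2, 3, "workflow_finished");

    // Tampering with entry 2's event changes its hash...
    let h2_tampered = block_hash(&h1, 2, "step_skipped");
    assert_ne!(h2, h2_tampered);
    // ...which invalidates entry 3's linkage to its predecessor.
    assert_ne!(block_hash(&h2_tampered, 3, "workflow_finished"), h3);
}
```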

### Write Serialization

Fetching the previous hash and appending the new entry must be atomic with respect to other concurrent appends. A `write_lock: Arc<Mutex<()>>` serializes all `append` calls within the process. This is sufficient because VAPORA's backend is a single process; multi-node deployments would require a distributed lock (e.g., a SurrealDB `UPDATE ... IF locked IS NONE` CAS operation, as used by the scheduler).
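The serialized append can be sketched with a std `Mutex` guarding the read-last-hash-then-append critical section (toy string concatenation stands in for `compute_block_hash`; the real code holds an async lock around SurrealDB calls):

```rust
use std::sync::{Arc, Mutex};

// Append under the lock: read the last block hash and push the new block
// atomically with respect to other in-process appenders.
fn append(chain: &Arc<Mutex<Vec<String>>>, payload: &str) -> String {
    let mut guard = chain.lock().unwrap();
    let prev = guard.last().cloned().unwrap_or_else(|| "0".repeat(64));
    let block = format!("{prev}|{payload}"); // stand-in for compute_block_hash
    guard.push(block.clone());
    block
}

fn main() {
    let chain = Arc::new(Mutex::new(Vec::new()));
    let b1 = append(&chain, "e1");
    let b2 = append(&chain, "e2");
    assert!(b1.starts_with(&"0".repeat(64))); // genesis predecessor
    assert!(b2.starts_with(&b1)); // each block commits to its predecessor
}
```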

---

## Implementation

### `AuditEntry` struct additions

```rust
pub struct AuditEntry {
    pub seq: i64,
    pub entry_id: String,
    pub timestamp: DateTime<Utc>,
    pub workflow_id: String,
    pub event_type: String,
    pub actor: String,
    pub details: serde_json::Value,
    pub prev_hash: String,  // hash of predecessor
    pub block_hash: String, // SHA-256 over all fields above
}
```

### Hash function

```rust
fn compute_block_hash(
    prev_hash: &str,
    seq: i64,
    entry_id: &str,
    timestamp: &DateTime<Utc>,
    workflow_id: &str,
    event_type: &str,
    actor: &str,
    details: &serde_json::Value,
) -> String {
    let details = details.to_string();
    let ts = timestamp.to_rfc3339();
    let preimage = format!(
        "{prev_hash}|{seq}|{entry_id}|{ts}|{workflow_id}|{event_type}|{actor}|{details}"
    );
    let digest = Sha256::digest(preimage.as_bytes());
    hex::encode(digest)
}
```

### Integrity verification

```rust
pub async fn verify_integrity(&self, workflow_id: &str) -> Result<IntegrityReport> {
    // Fetch all entries for workflow ordered by seq
    // Re-derive each block_hash from stored fields
    // Compare against stored block_hash
    // Check prev_hash == previous entry's block_hash
    // Return IntegrityReport { valid, total_entries, first_tampered_seq }
}
```

`IntegrityReport` indicates the first tampered sequence number, allowing forensic identification of the modification point and every invalidated subsequent entry.
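The verification walk outlined above can be made concrete. This toy version (non-cryptographic hash, local `Entry` type) returns the first tampered `seq`, mirroring the `first_tampered_seq` field of `IntegrityReport`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Entry {
    seq: i64,
    event: String,
    prev_hash: String,
    block_hash: String,
}

// Toy hash standing in for SHA-256.
fn h(s: &str) -> String {
    let mut d = DefaultHasher::new();
    s.hash(&mut d);
    format!("{:016x}", d.finish())
}

fn expected(prev: &str, seq: i64, event: &str) -> String {
    h(&format!("{prev}|{seq}|{event}"))
}

// Walk the chain: every entry must link to its predecessor and re-derive
// to its stored block_hash. Returns None when intact.
fn first_tampered(entries: &[Entry], genesis: &str) -> Option<i64> {
    let mut prev = genesis.to_string();
    for e in entries {
        if e.prev_hash != prev || e.block_hash != expected(&prev, e.seq, &e.event) {
            return Some(e.seq);
        }
        prev = e.block_hash.clone();
    }
    None
}

fn main() {
    let genesis = "0000000000000000";
    let b1 = expected(genesis, 1, "started");
    let b2 = expected(&b1, 2, "finished");
    let mut chain = vec![
        Entry { seq: 1, event: "started".into(), prev_hash: genesis.into(), block_hash: b1.clone() },
        Entry { seq: 2, event: "finished".into(), prev_hash: b1, block_hash: b2 },
    ];
    assert_eq!(first_tampered(&chain, genesis), None);

    chain[0].event = "rewritten".into(); // direct-DB style tampering
    assert_eq!(first_tampered(&chain, genesis), Some(1));
}
```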
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Consequences
|
||||||
|
|
||||||
|
### What Becomes Possible
|
||||||
|
|
||||||
|
- **Tamper detection**: Any direct `UPDATE audit_entries SET event_type = ...` in SurrealDB is detectable on the next `verify_integrity` call.
|
||||||
|
- **Compliance evidence**: The chain can be presented as evidence that audit records have not been modified since creation.
|
||||||
|
- **API exposure**: `GET /api/v1/workflows/:id/audit` returns the full chain; clients can independently verify hashes.

### Limitations and Known Gaps

1. **No protection against log truncation**: A `DELETE audit_entries WHERE workflow_id = ...` is not detectable by the chain (you cannot prove absence of entries). A separate monotonic counter or external timestamp anchor would address this.
2. **Single-process write lock**: The `Arc<Mutex<()>>` is sufficient for a single backend process. Multi-node deployments need a distributed lock or a database-level sequence generator with compare-and-swap semantics.
3. **SHA-256 without salting**: The hash is deterministic given the inputs. This is correct for tamper detection (you want reproducibility) but means the hash does not serve as a MAC (an attacker who rewrites a row can also recompute a valid hash chain if they have write access). For full WORM guarantees, chain anchoring to an external append-only service (e.g., a transparency log) would be required.
4. **Key rotation not addressed**: There is no HMAC key — `sha2` is used purely for commitment, not authentication. Adding a server-side HMAC key would prevent an attacker with DB write access from forging a valid chain, but requires key management.
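
Limitations 3 and 4 hinge on the difference between a commitment hash and a keyed MAC. The sketch below illustrates why mixing a server-side key into the hash blocks chain forgery; it is a simplified std-only stand-in (`DefaultHasher` tuple hashing, not real HMAC-SHA256), and the key and function names are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for a keyed hash (NOT real HMAC-SHA256): the
// server-side key is mixed into the digest, so recomputing a valid
// chain requires knowing the key, not just DB write access.
fn keyed_hash(key: &str, preimage: &str) -> String {
    let mut h = DefaultHasher::new();
    (key, preimage).hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    let entry = "prev|1|entry-a";
    let honest = keyed_hash("server-secret", entry);

    // An attacker without the key cannot produce a hash the server
    // will accept for a rewritten entry.
    let forged = keyed_hash("attacker-guess", "prev|1|entry-evil");
    assert_ne!(honest, forged);

    // The server itself can always re-verify deterministically.
    assert_eq!(honest, keyed_hash("server-secret", entry));
}
```

The unkeyed SHA-256 chain chosen in this ADR is the weaker but operationally simpler option: it detects out-of-band tampering without introducing key rotation and distribution concerns.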

---

## Alternatives Considered

### Database-Level Audit Triggers

SurrealDB (v3) does not expose write triggers that could hash entries at the storage level. A pure DB-level solution is not available.

### External Append-Only Log (NATS JetStream with `MaxMsgs` and no delete)

Would require a separate NATS stream per workflow and cross-referencing two storage systems. Deferred: the Merkle chain provides sufficient tamper evidence for current compliance requirements without external dependencies.

### HMAC-based Authentication

Adds server-side secret management (rotation, distribution across nodes). Deferred until multi-node deployment requires it.

---

## Related

- [ADR-0038: SSRF Protection and Prompt Injection Scanning](0038-security-ssrf-prompt-injection.md)
- [Workflow Orchestrator feature reference](../features/workflow-orchestrator.md)

@@ -2,7 +2,7 @@

Documentation of the key architectural decisions of the VAPORA project.

**Status**: Complete (39 ADRs documented)
**Last Updated**: 2026-02-26
**Format**: Custom VAPORA (Decision, Rationale, Alternatives, Trade-offs, Implementation, Verification, Consequences)
@@ -51,7 +51,7 @@ Decisions on coordination between agents and message communication.

---

## ☁️ Infrastructure & Security (6 ADRs)

Decisions on Kubernetes infrastructure, security, and secrets management.
@@ -62,6 +62,7 @@ Decisions on Kubernetes infrastructure, security, and secrets management.
| [011](./0011-secretumvault.md) | SecretumVault Secrets Management | Post-quantum crypto for secrets management | ✅ Accepted |
| [012](./0012-llm-routing-tiers.md) | Three-Tier LLM Routing | Rules-based + Dynamic + Manual Override | ✅ Accepted |
| [038](./0038-security-ssrf-prompt-injection.md) | SSRF Protection and Prompt Injection Scanning | Pattern-based scanner + URL deny-list at API boundary; channels filter-before-register | ✅ Implemented |
| [039](./0039-merkle-audit-trail.md) | Tamper-Evident Audit Trail — Merkle Hash Chain | SHA-256 hash chain per workflow; `block_hash = SHA256(prev_hash\|seq\|entry_id\|...\|details_json)`; `verify_integrity` detects first tampered entry | ✅ Implemented |

---
@@ -220,7 +220,54 @@ Default: `../kogral/.kogral` (sibling directory)

## REST API

Two distinct API surfaces exist for workflows:

- **`/api/v1/workflow_orchestrator`** — live orchestration (start, approve, cancel, status)
- **`/api/v1/workflows`** — workflow CRUD with execution history and Merkle audit trail

### Workflow CRUD (`/api/v1/workflows`)

| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/v1/workflows` | List all registered workflows |
| `POST` | `/api/v1/workflows` | Register workflow from YAML |
| `GET` | `/api/v1/workflows/:id` | Get workflow by ID |
| `POST` | `/api/v1/workflows/:id/execute` | Execute a registered workflow |
| `POST` | `/api/v1/workflows/:id/rollback` | Rollback a failed workflow |
| `GET` | `/api/v1/workflows/:id/audit` | Get tamper-evident audit trail |

**Create from YAML**:

```http
POST /api/v1/workflows
Content-Type: application/json

{
  "yaml": "workflow:\n id: my-workflow\n steps: ..."
}
```

**Audit trail entry** (each entry is hash-chained):

```json
{
  "seq": 3,
  "entry_id": "uuid",
  "timestamp": "2026-02-26T10:00:00Z",
  "workflow_id": "my-workflow",
  "event_type": "stage_completed",
  "actor": "developer-agent",
  "details": {},
  "prev_hash": "abc123...",
  "block_hash": "def456..."
}
```

The `block_hash` covers `prev_hash|seq|entry_id|timestamp|workflow_id|event_type|actor|details_json` — modifying any field breaks the chain. Call `GET /api/v1/workflows/:id/audit` to retrieve the full chain; chain integrity is verified server-side via `AuditTrail::verify_integrity`.
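
Clients can reconstruct the canonical preimage from the returned JSON fields. A minimal sketch, assuming the documented pipe-joined field order; std's `DefaultHasher` stands in for the real SHA-256 digest step, so only the preimage construction, not the digest value, matches the server:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Build the canonical preimage in the documented field order. A real
// client would feed this string to SHA-256 and compare the digest
// against the entry's block_hash.
fn preimage(
    prev_hash: &str, seq: u64, entry_id: &str, timestamp: &str,
    workflow_id: &str, event_type: &str, actor: &str, details_json: &str,
) -> String {
    format!("{prev_hash}|{seq}|{entry_id}|{timestamp}|{workflow_id}|{event_type}|{actor}|{details_json}")
}

fn main() {
    // Fields taken from the example audit entry above.
    let p = preimage(
        "abc123...", 3, "uuid", "2026-02-26T10:00:00Z",
        "my-workflow", "stage_completed", "developer-agent", "{}",
    );
    assert_eq!(
        p,
        "abc123...|3|uuid|2026-02-26T10:00:00Z|my-workflow|stage_completed|developer-agent|{}"
    );

    // Changing any single field yields a different preimage, hence a
    // different digest and a broken chain link.
    let mut h = DefaultHasher::new();
    p.hash(&mut h);
    let _digest = format!("{:016x}", h.finish());
}
```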

> **Note**: `WorkflowService` is initialized non-fatally at startup. If `AgentCoordinator` init fails (usually a missing `agents.toml`), all `/api/v1/workflows/*` endpoints return `503 Service Unavailable` rather than crashing the backend.

### Orchestration endpoints (`/api/v1/workflow_orchestrator`)

### Start Workflow
@@ -562,7 +609,7 @@ Cron accepts 5-field (standard shell), 6-field (with seconds), or 7-field (with
| `PATCH` | `/api/v1/schedules/:id` | Partial update |
| `DELETE` | `/api/v1/schedules/:id` | Remove |
| `GET` | `/api/v1/schedules/:id/runs` | Execution history (last 100) |
| `POST` | `/api/v1/schedules/:id/fire` | Manual trigger bypassing cron (requires NATS) |

**PUT body** (all fields):
@@ -578,6 +625,8 @@ Cron accepts 5-field (standard shell), 6-field (with seconds), or 7-field (with
}
```

> **`POST /fire` availability**: Requires a live NATS connection and a valid `config/workflows.toml`. If NATS is unavailable at startup, `WorkflowOrchestrator` is not initialized and `POST /fire` returns `503`. All other schedule endpoints (`GET`, `PUT`, `PATCH`, `DELETE`) remain available regardless of NATS status.

**PATCH body** (only changed fields):

```json
migrations/013_audit_merkle.surql (new file, 26 lines)

@@ -0,0 +1,26 @@
-- Merkle audit trail: tamper-evident append-only log with SHA256 block chaining.
-- Each entry stores prev_hash (previous block's hash) and block_hash
-- (SHA256 of canonical entry data including prev_hash), forming a chain
-- where tampering with any entry invalidates all subsequent hashes.

DEFINE TABLE audit_entries SCHEMAFULL;

DEFINE FIELD seq ON TABLE audit_entries TYPE int;
DEFINE FIELD entry_id ON TABLE audit_entries TYPE string;
DEFINE FIELD timestamp ON TABLE audit_entries TYPE datetime;
DEFINE FIELD workflow_id ON TABLE audit_entries TYPE string;
DEFINE FIELD event_type ON TABLE audit_entries TYPE string;
DEFINE FIELD actor ON TABLE audit_entries TYPE string;
DEFINE FIELD details ON TABLE audit_entries FLEXIBLE TYPE object;
DEFINE FIELD prev_hash ON TABLE audit_entries TYPE string;
DEFINE FIELD block_hash ON TABLE audit_entries TYPE string;

-- seq UNIQUE enforces monotonic ordering and prevents duplicate sequence numbers
DEFINE INDEX audit_seq_idx ON TABLE audit_entries COLUMNS seq UNIQUE;
-- entry_id UNIQUE for idempotent inserts
DEFINE INDEX audit_entry_id_idx ON TABLE audit_entries COLUMNS entry_id UNIQUE;
-- block_hash UNIQUE enforces Merkle chain integrity at the DB level
DEFINE INDEX audit_block_hash_idx ON TABLE audit_entries COLUMNS block_hash UNIQUE;
DEFINE INDEX audit_workflow_idx ON TABLE audit_entries COLUMNS workflow_id;
DEFINE INDEX audit_event_type_idx ON TABLE audit_entries COLUMNS event_type;
DEFINE INDEX audit_actor_idx ON TABLE audit_entries COLUMNS actor;