feat(channels): webhook notification channels with built-in secret resolution
Some checks failed
Documentation Lint & Validation / Markdown Linting (push) Has been cancelled
Documentation Lint & Validation / Validate mdBook Configuration (push) Has been cancelled
Documentation Lint & Validation / Content & Structure Validation (push) Has been cancelled
Documentation Lint & Validation / Lint & Validation Summary (push) Has been cancelled
mdBook Build & Deploy / Build mdBook (push) Has been cancelled
mdBook Build & Deploy / Documentation Quality Check (push) Has been cancelled
mdBook Build & Deploy / Deploy to GitHub Pages (push) Has been cancelled
mdBook Build & Deploy / Notification (push) Has been cancelled
Rust CI / Security Audit (push) Has been cancelled
Rust CI / Check + Test + Lint (nightly) (push) Has been cancelled
Rust CI / Check + Test + Lint (stable) (push) Has been cancelled
Add vapora-channels crate with trait-based Slack/Discord/Telegram webhook
delivery. ${VAR}/${VAR:-default} interpolation is mandatory inside
ChannelRegistry::from_config — callers cannot bypass secret resolution.
Fire-and-forget dispatch via tokio::spawn in both vapora-workflow-engine
(four lifecycle events) and vapora-backend (task Done, proposal approve/reject).
New REST endpoints: GET /channels, POST /channels/:name/test.
dispatch_notifications extracted as pub(crate) fn for inline testability;
5 handler tests + 6 workflow engine tests + 7 secret resolution unit tests.
Closes: vapora-channels bootstrap, notification gap in workflow/backend layer
ADR: docs/adrs/0035-notification-channels.md
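For reviewers, a minimal `vapora.toml` sketch of the configuration this commit introduces. The `[notifications]` keys match the `NotificationConfig` fields added in this commit; the channel names and the `ChannelConfig` field names (`kind`, `webhook_url`, `bot_token`, `chat_id`) are illustrative assumptions, not the crate's confirmed schema.

```toml
# Hypothetical sketch: [notifications] keys come from this commit's
# NotificationConfig; channel names and ChannelConfig fields are assumed.
[channels.team-slack]
kind = "slack"
webhook_url = "${SLACK_WEBHOOK_URL}"   # resolved at registry construction

[channels.ops-telegram]
kind = "telegram"
bot_token = "${TELEGRAM_BOT_TOKEN}"
chat_id = "${TELEGRAM_CHAT_ID:-0}"     # ${VAR:-default} fallback form

[notifications]
on_task_done = ["team-slack"]
on_proposal_approved = ["team-slack"]
on_proposal_rejected = ["team-slack", "ops-telegram"]
```

Any `${VAR}` left unresolved with no default fails registry construction with `ChannelError::SecretNotFound`, so a misconfigured deployment fails at startup rather than at first delivery.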
parent bb55c80d2b
commit 027b8f2836

.gitignore (vendored): 1 line changed
@@ -67,3 +67,4 @@ vendordiff.patch
 # Generated SBOM files
 SBOM.*.json
 *.sbom.json
+.claude/settings.local.json
CHANGELOG.md: 39 lines changed
@@ -7,6 +7,45 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+### Added - Webhook Notification Channels (`vapora-channels`)
+
+#### `vapora-channels` — new crate
+
+- `NotificationChannel` trait: single `async fn send(&Message) -> Result<()>` — no vendor SDK dependency
+- Three webhook implementations: `SlackChannel` (Incoming Webhook), `DiscordChannel` (Webhook embed), `TelegramChannel` (Bot API `sendMessage`)
+- `ChannelRegistry`: name-keyed routing hub; `from_config(HashMap<String, ChannelConfig>)` resolves secrets at construction time
+- `Message { title, body, level }` — four constructors: `info`, `success`, `warning`, `error`
+- **Secret resolution built-in**: `${VAR}` / `${VAR:-default}` interpolation via `OnceLock<Regex>` in `config.rs`; `ChannelError::SecretNotFound` if env var absent and no default — callers cannot bypass resolution
+- `ChannelError`: `NotFound`, `ApiError { channel, status, body }`, `SecretNotFound`, `SerializationError`
+- 7 unit tests for `interpolate()`: plain string (no-op fast-path), single var, default fallback, missing var error, nested vars, whitespace, multiple vars in one string
+
+#### `vapora-workflow-engine` — notification hooks
+
+- `WorkflowNotifications` struct in `config.rs`: `on_stage_complete`, `on_stage_failed`, `on_completed`, `on_cancelled` — each a `Vec<String>` of channel names
+- `WorkflowConfig.notifications: WorkflowNotifications` (default: empty, no regression)
+- `WorkflowOrchestrator` gains `Option<Arc<ChannelRegistry>>`; four `notify_*` methods spawn `dispatch_notifications`
+- 6 new tests in `tests/notification_config.rs`: config parsing, all four event hooks, empty-targets no-op
+
+#### `vapora-backend` — event hooks and REST endpoints
+
+- `Config.channels: HashMap<String, ChannelConfig>` and `Config.notifications: NotificationConfig` (TOML config)
+- `NotificationConfig { on_task_done, on_proposal_approved, on_proposal_rejected }` — per-event channel-name lists
+- `AppState` gains `channel_registry: Option<Arc<ChannelRegistry>>` and `notification_config: Arc<NotificationConfig>`
+- `AppState::notify(&[String], Message)` — fire-and-forget; `tokio::spawn(dispatch_notifications(...))`
+- `pub(crate) async fn dispatch_notifications(Option<Arc<ChannelRegistry>>, Vec<String>, Message)` — extracted for testability without DB
+- Notification hooks added to three existing handlers:
+  - `update_task_status` — `Message::success` when `TaskStatus::Done`
+  - `approve_proposal` — `Message::success`
+  - `reject_proposal` — `Message::warning`
+- New endpoints: `GET /api/v1/channels` (list names), `POST /api/v1/channels/:name/test` (connectivity check)
+- 5 unit tests in `state.rs`: `RecordingChannel` + `FailingChannel` test doubles; dispatch no-op, single delivery, multi-channel, resilience after failure, warn on unknown channel
+
+#### Documentation
+
+- **ADR-0035**: design rationale for trait-based channels, built-in secret resolution, and fire-and-forget delivery
+
+---
+
 ### Added - Autonomous Scheduling: Timezone Support and Distributed Fire-Lock
 
 #### `vapora-workflow-engine` — scheduling hardening
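The `${VAR}` / `${VAR:-default}` rules listed in the changelog entry above can be pinned down with a small sketch. The real crate uses a cached `OnceLock<Regex>`; this hand-rolled scan is illustrative only (it does not handle the nested-vars case the tests mention), and its error string stands in for `ChannelError::SecretNotFound`.

```rust
use std::env;

/// Sketch of `${VAR}` / `${VAR:-default}` resolution semantics.
/// Errors when a variable is absent and no default is given.
fn interpolate(input: &str) -> Result<String, String> {
    let mut out = String::new();
    let mut rest = input;
    // Fast path: input without "${" falls through unchanged.
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        let end = after.find('}').ok_or("unterminated ${...}".to_string())?;
        let token = &after[..end];
        // "${VAR:-default}" splits into the var name and a fallback value.
        let (var, default) = match token.split_once(":-") {
            Some((v, d)) => (v, Some(d)),
            None => (token, None),
        };
        match (env::var(var), default) {
            (Ok(val), _) => out.push_str(&val),
            (Err(_), Some(d)) => out.push_str(d),
            (Err(_), None) => return Err(format!("secret not found: {var}")),
        }
        rest = &after[end + 1..];
    }
    out.push_str(rest);
    Ok(out)
}

fn main() {
    assert_eq!(interpolate("plain").unwrap(), "plain");
    assert_eq!(interpolate("a=${NOPE_XYZ:-fallback}").unwrap(), "a=fallback");
    assert!(interpolate("${NOPE_XYZ}").is_err());
    println!("ok");
}
```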
Cargo.lock (generated): 17 lines changed
@@ -12419,6 +12419,7 @@ dependencies = [
  "tracing-subscriber",
  "uuid",
  "vapora-agents",
+ "vapora-channels",
  "vapora-knowledge-graph",
  "vapora-llm-router",
  "vapora-rlm",
@@ -12429,6 +12430,21 @@ dependencies = [
  "wiremock",
 ]
 
+[[package]]
+name = "vapora-channels"
+version = "1.2.0"
+dependencies = [
+ "async-trait",
+ "regex",
+ "reqwest 0.13.1",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.18",
+ "tokio",
+ "tracing",
+ "wiremock",
+]
+
 [[package]]
 name = "vapora-cli"
 version = "1.2.0"
@@ -12698,6 +12714,7 @@ dependencies = [
  "tracing",
  "uuid",
  "vapora-agents",
+ "vapora-channels",
  "vapora-knowledge-graph",
  "vapora-shared",
  "vapora-swarm",
|||||||
@ -3,6 +3,7 @@
|
|||||||
resolver = "2"
|
resolver = "2"
|
||||||
|
|
||||||
members = [
|
members = [
|
||||||
|
"crates/vapora-channels",
|
||||||
"crates/vapora-backend",
|
"crates/vapora-backend",
|
||||||
"crates/vapora-frontend",
|
"crates/vapora-frontend",
|
||||||
"crates/vapora-leptos-ui",
|
"crates/vapora-leptos-ui",
|
||||||
@ -36,6 +37,7 @@ categories = ["development-tools", "web-programming"]
|
|||||||
|
|
||||||
[workspace.dependencies]
|
[workspace.dependencies]
|
||||||
# Vapora internal crates
|
# Vapora internal crates
|
||||||
|
vapora-channels = { path = "crates/vapora-channels" }
|
||||||
vapora-shared = { path = "crates/vapora-shared" }
|
vapora-shared = { path = "crates/vapora-shared" }
|
||||||
vapora-leptos-ui = { path = "crates/vapora-leptos-ui" }
|
vapora-leptos-ui = { path = "crates/vapora-leptos-ui" }
|
||||||
vapora-agents = { path = "crates/vapora-agents" }
|
vapora-agents = { path = "crates/vapora-agents" }
|
||||||
|
|||||||
File diff suppressed because one or more lines are too long
@@ -516,8 +516,8 @@
 
 <div class="container">
 <header>
-<span class="status-badge" data-en="✅ v1.2.0 | 354 Tests | 100% Pass Rate" data-es="✅ v1.2.0 | 354 Tests | 100% Éxito"
+<span class="status-badge" data-en="✅ v1.2.0 | 372 Tests | 100% Pass Rate" data-es="✅ v1.2.0 | 372 Tests | 100% Éxito"
->✅ v1.2.0 | 354 Tests | 100% Pass Rate</span
+>✅ v1.2.0 | 372 Tests | 100% Pass Rate</span
 >
 <div class="logo-container">
 <img id="logo-dark" src="/vapora.svg" alt="Vapora - Development Orchestration" style="display: block;" />
@@ -785,6 +785,42 @@
 161 backend tests + K8s manifests with Kustomize overlays. Health checks, Prometheus metrics (/metrics endpoint), StatefulSets with anti-affinity. Local Docker Compose for development. Zero vendor lock-in.
 </p>
 </div>
+<div class="feature-box" style="border-left-color: #f97316">
+<div class="feature-icon">⏰</div>
+<h3
+class="feature-title"
+style="color: #f97316"
+data-en="Autonomous Scheduling"
+data-es="Scheduling Autónomo"
+>
+Autonomous Scheduling
+</h3>
+<p
+class="feature-text"
+data-en="Cron-triggered workflow execution with IANA timezone support via chrono-tz. Distributed fire-lock using SurrealDB conditional UPDATE prevents double-fires across multi-instance deployments — no external lock service required. 48 tests."
+data-es="Ejecución de workflows disparada por cron con soporte de timezone IANA via chrono-tz. Fire-lock distribuido usando UPDATE condicional de SurrealDB previene doble disparo en despliegues multi-instancia — sin servicio de lock externo. 48 tests."
+>
+Cron-triggered workflow execution with IANA timezone support via chrono-tz. Distributed fire-lock using SurrealDB conditional UPDATE prevents double-fires across multi-instance deployments — no external lock service required. 48 tests.
+</p>
+</div>
+<div class="feature-box" style="border-left-color: #d946ef">
+<div class="feature-icon">🔔</div>
+<h3
+class="feature-title"
+style="color: #d946ef"
+data-en="Webhook Notifications"
+data-es="Notificaciones Webhook"
+>
+Webhook Notifications
+</h3>
+<p
+class="feature-text"
+data-en="Real-time alerts to Slack, Discord, and Telegram — no vendor SDKs. ${VAR} secret resolution is built into ChannelRegistry construction; tokens never reach the HTTP layer unresolved. Fire-and-forget hooks on task completion, proposal approval/rejection, and workflow lifecycle events."
+data-es="Alertas en tiempo real a Slack, Discord y Telegram — sin SDKs de vendor. Resolución de secretos ${VAR} integrada en la construcción de ChannelRegistry; los tokens nunca llegan sin resolver a la capa HTTP. Hooks fire-and-forget en completado de tareas, aprobación/rechazo de propuestas y eventos del ciclo de vida de workflows."
+>
+Real-time alerts to Slack, Discord, and Telegram — no vendor SDKs. ${VAR} secret resolution is built into ChannelRegistry construction; tokens never reach the HTTP layer unresolved. Fire-and-forget hooks on task completion, proposal approval/rejection, and workflow lifecycle events.
+</p>
+</div>
 </div>
 </section>
 
@@ -795,7 +831,7 @@
 >
 </h2>
 <div class="tech-stack">
-<span class="tech-badge">Rust (17 crates)</span>
+<span class="tech-badge">Rust (18 crates)</span>
 <span class="tech-badge">Axum REST API</span>
 <span class="tech-badge">SurrealDB</span>
 <span class="tech-badge">NATS JetStream</span>
@@ -806,6 +842,8 @@
 <span class="tech-badge">RLM (Hybrid Search)</span>
 <span class="tech-badge">A2A Protocol</span>
 <span class="tech-badge">MCP Server</span>
+<span class="tech-badge">chrono-tz (Cron)</span>
+<span class="tech-badge">Webhook Channels</span>
 </div>
 </section>
 
@@ -106,7 +106,7 @@
 <!-- TITLE -->
 <!-- ═══════════════════════════════════════ -->
 <text x="700" y="42" font-family="'JetBrains Mono', monospace" font-size="22" font-weight="800" fill="url(#grad-main)" letter-spacing="6" text-anchor="middle" filter="url(#glow-sm)">VAPORA ARCHITECTURE</text>
-<text x="700" y="62" font-family="'Inter', sans-serif" font-size="11" fill="#a855f7" opacity="0.6" letter-spacing="3" text-anchor="middle">18 CRATES · 354 TESTS · 100% RUST</text>
+<text x="700" y="62" font-family="'Inter', sans-serif" font-size="11" fill="#a855f7" opacity="0.6" letter-spacing="3" text-anchor="middle">18 CRATES · 372 TESTS · 100% RUST</text>
 
 <!-- Layer labels (left side) -->
 <text x="30" y="115" font-family="'JetBrains Mono', monospace" font-size="10" fill="#22d3ee" opacity="0.7" letter-spacing="2" transform="rotate(-90 30 115)">PRESENTATION</text>

(SVG image: 42 KiB before and after)
@@ -25,6 +25,7 @@ vapora-swarm = { workspace = true }
 vapora-tracking = { path = "../vapora-tracking" }
 vapora-knowledge-graph = { path = "../vapora-knowledge-graph" }
 vapora-workflow-engine = { workspace = true }
+vapora-channels = { workspace = true }
 vapora-rlm = { path = "../vapora-rlm" }
 
 # Secrets management
crates/vapora-backend/src/api/channels.rs (new file): 62 lines
@@ -0,0 +1,62 @@
+use axum::{
+    extract::{Path, State},
+    http::StatusCode,
+    response::IntoResponse,
+    Json,
+};
+use serde::Serialize;
+use vapora_channels::{ChannelError, Message};
+use vapora_shared::VaporaError;
+
+use crate::api::state::AppState;
+use crate::api::ApiResult;
+
+#[derive(Serialize)]
+struct ChannelListResponse {
+    channels: Vec<String>,
+}
+
+/// List all registered notification channels.
+///
+/// GET /api/v1/channels
+pub async fn list_channels(State(state): State<AppState>) -> impl IntoResponse {
+    let names = match &state.channel_registry {
+        Some(r) => {
+            let mut names: Vec<String> = r.channel_names().into_iter().map(str::to_owned).collect();
+            names.sort_unstable();
+            names
+        }
+        None => vec![],
+    };
+    Json(ChannelListResponse { channels: names })
+}
+
+/// Send a test message to a specific notification channel.
+///
+/// POST /api/v1/channels/:name/test
+///
+/// Returns 200 on successful delivery, 404 if the channel is unknown or not
+/// configured, 502 if delivery fails at the remote platform.
+pub async fn test_channel(
+    State(state): State<AppState>,
+    Path(name): Path<String>,
+) -> ApiResult<impl IntoResponse> {
+    let registry = state.channel_registry.as_ref().ok_or_else(|| {
+        VaporaError::NotFound(format!(
+            "Channel '{}' not found — no channels configured",
+            name
+        ))
+    })?;
+
+    let msg = Message::info(
+        "Test notification",
+        format!("Connectivity test from VAPORA backend for channel '{name}'"),
+    );
+
+    registry.send(&name, msg).await.map_err(|e| match e {
+        ChannelError::NotFound(_) => VaporaError::NotFound(e.to_string()),
+        other => VaporaError::InternalError(other.to_string()),
+    })?;
+
+    Ok(StatusCode::OK)
+}
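The 200/404/502 contract documented on `test_channel` above can be summarized as a tiny decision table. The types below are local stand-ins, not the real `vapora_channels` or `VaporaError` types, and the exact codes ultimately depend on how `VaporaError` implements `IntoResponse`.

```rust
// Stand-in for vapora_channels::ChannelError, reduced to the two cases
// that matter for the test endpoint's status mapping.
enum ChannelError {
    NotFound(String),
    ApiError { status: u16 },
}

// Unknown channel -> 404; remote delivery failure -> 502; success -> 200.
fn status_for(result: Result<(), ChannelError>) -> u16 {
    match result {
        Ok(()) => 200,
        Err(ChannelError::NotFound(_)) => 404,
        Err(ChannelError::ApiError { .. }) => 502,
    }
}

fn main() {
    assert_eq!(status_for(Ok(())), 200);
    assert_eq!(status_for(Err(ChannelError::NotFound("x".into()))), 404);
    assert_eq!(status_for(Err(ChannelError::ApiError { status: 503 })), 502);
    println!("ok");
}
```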
@@ -3,6 +3,7 @@
 pub mod agents;
 pub mod analytics;
 pub mod analytics_metrics;
+pub mod channels;
 pub mod error;
 pub mod health;
 pub mod metrics;
@@ -7,6 +7,7 @@ use axum::{
     Json,
 };
 use serde::Deserialize;
+use vapora_channels::Message;
 use vapora_shared::models::{Proposal, ProposalReview, ProposalStatus, RiskLevel};
 
 use crate::api::state::AppState;
@@ -186,6 +187,12 @@ pub async fn approve_proposal(
         .approve_proposal(&id, tenant_id)
         .await?;
+
+    let msg = Message::success(
+        "Proposal approved",
+        format!("'{}' has been approved", proposal.title),
+    );
+    state.notify(&state.notification_config.clone().on_proposal_approved, msg);
 
     Ok(Json(proposal))
 }
 
@@ -203,6 +210,12 @@ pub async fn reject_proposal(
         .reject_proposal(&id, tenant_id)
         .await?;
+
+    let msg = Message::warning(
+        "Proposal rejected",
+        format!("'{}' has been rejected", proposal.title),
+    );
+    state.notify(&state.notification_config.clone().on_proposal_rejected, msg);
 
     Ok(Json(proposal))
 }
 
@@ -2,10 +2,12 @@
 
 use std::sync::Arc;
 
+use vapora_channels::ChannelRegistry;
 use vapora_rlm::storage::SurrealDBStorage;
 use vapora_rlm::RLMEngine;
 use vapora_workflow_engine::{ScheduleStore, WorkflowOrchestrator};
 
+use crate::config::NotificationConfig;
 use crate::services::{
     AgentService, ProjectService, ProposalService, ProviderAnalyticsService, TaskService,
 };
@@ -21,6 +23,11 @@ pub struct AppState {
     pub workflow_orchestrator: Option<Arc<WorkflowOrchestrator>>,
     pub rlm_engine: Option<Arc<RLMEngine<SurrealDBStorage>>>,
     pub schedule_store: Option<Arc<ScheduleStore>>,
+    /// Outbound notification channels; `None` when `[channels]` is absent from
+    /// config.
+    pub channel_registry: Option<Arc<ChannelRegistry>>,
+    /// Backend-level event → channel-name mappings.
+    pub notification_config: Arc<NotificationConfig>,
 }
 
 impl AppState {
@@ -41,6 +48,8 @@ impl AppState {
             workflow_orchestrator: None,
             rlm_engine: None,
             schedule_store: None,
+            channel_registry: None,
+            notification_config: Arc::new(NotificationConfig::default()),
         }
     }
 
@@ -62,4 +71,192 @@ impl AppState {
         self.schedule_store = Some(store);
         self
     }
+
+    /// Attach the notification channel registry built from `[channels]` config.
+    pub fn with_channel_registry(mut self, registry: Arc<ChannelRegistry>) -> Self {
+        self.channel_registry = Some(registry);
+        self
+    }
+
+    /// Attach the per-event notification targets.
+    pub fn with_notification_config(mut self, cfg: NotificationConfig) -> Self {
+        self.notification_config = Arc::new(cfg);
+        self
+    }
+
+    /// Fire-and-forget: send `msg` to each channel in `targets`.
+    ///
+    /// Spawns a background task; delivery failures are logged as `warn!` and
+    /// never surface to the caller.
+    pub fn notify(&self, targets: &[String], msg: vapora_channels::Message) {
+        if targets.is_empty() {
+            return;
+        }
+        let registry = self.channel_registry.clone();
+        let targets = targets.to_vec();
+        tokio::spawn(dispatch_notifications(registry, targets, msg));
+    }
+}
+
+/// Deliver `msg` to every channel name in `targets` using `registry`.
+///
+/// A `None` registry or an unknown channel name is silent (warn-logged for
+/// unknown names). Failures in one channel do not abort delivery to others.
+///
+/// Extracted from [`AppState::notify`] to be directly callable in tests
+/// without needing a fully-constructed [`AppState`].
+pub(crate) async fn dispatch_notifications(
+    registry: Option<Arc<ChannelRegistry>>,
+    targets: Vec<String>,
+    msg: vapora_channels::Message,
+) {
+    let Some(registry) = registry else {
+        return;
+    };
+    for name in &targets {
+        if let Err(e) = registry.send(name, msg.clone()).await {
+            tracing::warn!(channel = %name, error = %e, "Notification delivery failed");
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use std::sync::{Arc, Mutex};
+
+    use async_trait::async_trait;
+    use vapora_channels::Result as ChannelResult;
+    use vapora_channels::{ChannelError, ChannelRegistry, Message, NotificationChannel};
+
+    use super::dispatch_notifications;
+
+    struct RecordingChannel {
+        name: String,
+        captured: Arc<Mutex<Vec<Message>>>,
+    }
+
+    impl RecordingChannel {
+        fn new(name: &str) -> (Self, Arc<Mutex<Vec<Message>>>) {
+            let captured = Arc::new(Mutex::new(vec![]));
+            (
+                Self {
+                    name: name.to_string(),
+                    captured: Arc::clone(&captured),
+                },
+                captured,
+            )
+        }
+    }
+
+    #[async_trait]
+    impl NotificationChannel for RecordingChannel {
+        fn name(&self) -> &str {
+            &self.name
+        }
+        async fn send(&self, msg: &Message) -> ChannelResult<()> {
+            self.captured.lock().unwrap().push(msg.clone());
+            Ok(())
+        }
+    }
+
+    struct FailingChannel {
+        name: String,
+    }
+
+    #[async_trait]
+    impl NotificationChannel for FailingChannel {
+        fn name(&self) -> &str {
+            &self.name
+        }
+        async fn send(&self, _msg: &Message) -> ChannelResult<()> {
+            Err(ChannelError::ApiError {
+                channel: self.name.clone(),
+                status: 503,
+                body: "unavailable".to_string(),
+            })
+        }
+    }
+
+    #[tokio::test]
+    async fn dispatch_is_noop_when_registry_is_none() {
+        // Must not panic; targets are non-empty so the only short-circuit is None
+        // registry.
+        dispatch_notifications(None, vec!["ch".to_string()], Message::info("Test", "body")).await;
+    }
+
+    #[tokio::test]
+    async fn dispatch_delivers_to_named_channel() {
+        let (recording, captured) = RecordingChannel::new("team-slack");
+        let mut registry = ChannelRegistry::new();
+        registry.register(Arc::new(recording));
+
+        dispatch_notifications(
+            Some(Arc::new(registry)),
+            vec!["team-slack".to_string()],
+            Message::success("Deploy done", "v1.0 → prod"),
+        )
+        .await;
+
+        let msgs = captured.lock().unwrap();
+        assert_eq!(msgs.len(), 1);
+        assert_eq!(msgs[0].title, "Deploy done");
+    }
+
+    #[tokio::test]
+    async fn dispatch_delivers_to_multiple_targets() {
+        let (ch_a, cap_a) = RecordingChannel::new("ch-a");
+        let (ch_b, cap_b) = RecordingChannel::new("ch-b");
+        let mut registry = ChannelRegistry::new();
+        registry.register(Arc::new(ch_a));
+        registry.register(Arc::new(ch_b));
+
+        dispatch_notifications(
+            Some(Arc::new(registry)),
+            vec!["ch-a".to_string(), "ch-b".to_string()],
+            Message::info("Test", "broadcast"),
+        )
+        .await;
+
+        assert_eq!(cap_a.lock().unwrap().len(), 1);
+        assert_eq!(cap_b.lock().unwrap().len(), 1);
+    }
+
+    #[tokio::test]
+    async fn dispatch_continues_after_channel_failure() {
+        let bad = FailingChannel {
+            name: "bad".to_string(),
+        };
+        let (good, cap_good) = RecordingChannel::new("good");
+        let mut registry = ChannelRegistry::new();
+        registry.register(Arc::new(bad));
+        registry.register(Arc::new(good));
+
+        // Must not panic; "good" receives despite "bad" returning an error.
+        dispatch_notifications(
+            Some(Arc::new(registry)),
+            vec!["bad".to_string(), "good".to_string()],
+            Message::error("Alert", "system down"),
+        )
+        .await;
+
+        assert_eq!(cap_good.lock().unwrap().len(), 1);
+    }
+
+    #[tokio::test]
+    async fn dispatch_logs_warn_on_unknown_channel_but_continues() {
+        let (present, cap) = RecordingChannel::new("present");
+        let mut registry = ChannelRegistry::new();
+        registry.register(Arc::new(present));
+
+        // "missing" → ChannelError::NotFound logged as warn; "present" still
+        // receives its message.
+        dispatch_notifications(
+            Some(Arc::new(registry)),
+            vec!["missing".to_string(), "present".to_string()],
+            Message::info("Test", "body"),
+        )
+        .await;
+
+        assert_eq!(cap.lock().unwrap().len(), 1);
+    }
+}
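The invariant those tests assert can be reduced to a dependency-free sketch: failures are isolated per target, so one failing channel never blocks the rest. The `send` closure below is a stand-in for awaiting `ChannelRegistry::send`; the real code logs failures with `tracing::warn!` instead of counting.

```rust
// Per-target error isolation: iterate all targets, swallow individual
// failures, and report how many deliveries succeeded.
fn deliver_all<F>(targets: &[&str], send: F) -> usize
where
    F: Fn(&str) -> Result<(), String>,
{
    let mut delivered = 0;
    for name in targets {
        match send(name) {
            Ok(()) => delivered += 1,
            Err(_e) => { /* warn-log and continue */ }
        }
    }
    delivered
}

fn main() {
    let send = |name: &str| {
        if name == "bad" {
            Err("503 unavailable".to_string())
        } else {
            Ok(())
        }
    };
    assert_eq!(deliver_all(&["bad", "good", "also-good"], send), 2);
    println!("ok");
}
```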
@@ -7,6 +7,7 @@ use axum::{
     Json,
 };
 use serde::Deserialize;
+use vapora_channels::Message;
 use vapora_shared::models::{Task, TaskPriority, TaskStatus};
 
 use crate::api::state::AppState;
@@ -160,8 +161,17 @@ pub async fn update_task_status(
 
     let updated = state
         .task_service
-        .update_task_status(&id, tenant_id, status)
+        .update_task_status(&id, tenant_id, status.clone())
         .await?;
+
+    if status == TaskStatus::Done {
+        let msg = Message::success(
+            "Task completed",
+            format!("'{}' moved to Done", updated.title),
+        );
+        state.notify(&state.notification_config.clone().on_task_done, msg);
+    }
 
     Ok(Json(updated))
 }
 
@@ -291,6 +291,7 @@ mod tests {
             compensation_agents: None,
         }],
         schedule: None,
+        notifications: Default::default(),
     };
 
     let instance = WorkflowInstance::new(&config, serde_json::json!({}));
@@ -1,10 +1,12 @@
// Configuration module for VAPORA Backend
// Loads config from vapora.toml with environment variable interpolation

use std::collections::HashMap;
use std::fs;
use std::path::Path;

use serde::{Deserialize, Serialize};
use vapora_channels::config::ChannelConfig;
use vapora_shared::{Result, VaporaError};

/// Main configuration structure
@@ -16,6 +18,32 @@ pub struct Config {
    pub auth: AuthConfig,
    pub logging: LoggingConfig,
    pub metrics: MetricsConfig,
    /// Named outbound notification channels (`[channels.name]` blocks in TOML).
    /// Credential fields support `${VAR}` / `${VAR:-default}` interpolation —
    /// resolution happens automatically in [`ChannelRegistry::from_map`].
    #[serde(default)]
    pub channels: HashMap<String, ChannelConfig>,
    /// Backend-level event → channel-name mappings.
    #[serde(default)]
    pub notifications: NotificationConfig,
}

/// Per-event lists of channel names to notify.
///
/// ```toml
/// [notifications]
/// on_task_done = ["team-slack"]
/// on_proposal_approved = ["team-slack"]
/// on_proposal_rejected = ["team-slack", "ops-telegram"]
/// ```
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct NotificationConfig {
    #[serde(default)]
    pub on_task_done: Vec<String>,
    #[serde(default)]
    pub on_proposal_approved: Vec<String>,
    #[serde(default)]
    pub on_proposal_rejected: Vec<String>,
}

/// Server configuration
@@ -199,7 +227,8 @@ mod tests {

    #[test]
    fn test_env_var_interpolation() {
        // SAFETY: single-threaded test, no concurrent env access.
        unsafe { std::env::set_var("TEST_VAR", "test_value") };

        let input = "host = \"${TEST_VAR}\"";
        let result = Config::interpolate_env_vars(input).unwrap();
@@ -245,6 +274,8 @@ mod tests {
            enabled: true,
            port: 9090,
        },
        channels: HashMap::new(),
        notifications: NotificationConfig::default(),
    };

    assert!(config.validate().is_err());
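The `${VAR}` / `${VAR:-default}` contract that the config tests above exercise can be sketched standalone. This is a simplified stand-in for the crate's resolution rule (the real implementation scans whole strings with a regex; `resolve` here handles a single already-extracted reference):

```rust
use std::env;

// Simplified stand-in for vapora-channels' interpolation contract:
// ${VAR}          -> env value, error if unset
// ${VAR:-default} -> env value if set, otherwise the default
fn resolve(var: &str, default: Option<&str>) -> Result<String, String> {
    match env::var(var) {
        Ok(v) => Ok(v),
        Err(_) => match default {
            Some(d) => Ok(d.to_string()),
            // `{{`/`}}` are escaped braces, so the message shows `${VAR}`.
            None => Err(format!("secret '${{{var}}}' not resolved")),
        },
    }
}

fn main() {
    // SAFETY: single-threaded demo, no concurrent env access.
    unsafe { env::set_var("DEMO_TOKEN", "abc123") };
    assert_eq!(resolve("DEMO_TOKEN", None).unwrap(), "abc123");

    unsafe { env::remove_var("DEMO_MISSING") };
    assert_eq!(resolve("DEMO_MISSING", Some("fallback")).unwrap(), "fallback");
    assert!(resolve("DEMO_MISSING", None).is_err());
}
```

Note the precedence: a set variable always wins over the `:-` default, and a missing variable with no default is a hard error rather than an empty string.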
@@ -18,6 +18,7 @@ use axum::{
use clap::Parser;
use tower_http::cors::{Any, CorsLayer};
use tracing::{info, Level};
use vapora_channels::ChannelRegistry;
use vapora_swarm::{SwarmCoordinator, SwarmMetrics};
use vapora_workflow_engine::ScheduleStore;
@@ -109,8 +110,28 @@ async fn main() -> Result<()> {
    let schedule_store = Arc::new(ScheduleStore::new(Arc::new(db.clone())));
    info!("ScheduleStore initialized for autonomous scheduling");

    // Build notification channel registry from the [channels] config block.
    // Absent block → no notifications sent; a build error is non-fatal (warns).
    let channel_registry = if config.channels.is_empty() {
        None
    } else {
        match ChannelRegistry::from_map(config.channels.clone()) {
            Ok(r) => {
                info!(
                    "Channel registry built ({} channels)",
                    r.channel_names().len()
                );
                Some(std::sync::Arc::new(r))
            }
            Err(e) => {
                tracing::warn!("Failed to build channel registry: {e}; notifications disabled");
                None
            }
        }
    };

    // Create application state
    let mut app_state = AppState::new(
        project_service,
        task_service,
        agent_service,
@@ -118,7 +139,11 @@ async fn main() -> Result<()> {
        provider_analytics_service,
    )
    .with_rlm_engine(rlm_engine)
    .with_schedule_store(schedule_store)
    .with_notification_config(config.notifications.clone());
    if let Some(registry) = channel_registry {
        app_state = app_state.with_channel_registry(registry);
    }

    // Create SwarmMetrics for Prometheus monitoring
    let metrics = match SwarmMetrics::new() {
@@ -333,6 +358,12 @@ async fn main() -> Result<()> {
            "/api/v1/analytics/providers/:provider/tasks/:task_type",
            get(api::provider_analytics::get_provider_task_type_metrics),
        )
        // Channel endpoints
        .route("/api/v1/channels", get(api::channels::list_channels))
        .route(
            "/api/v1/channels/:name/test",
            post(api::channels::test_channel),
        )
        // RLM endpoints (Phase 8)
        .route("/api/v1/rlm/documents", post(api::rlm::load_document))
        .route("/api/v1/rlm/query", post(api::rlm::query_document))
crates/vapora-channels/Cargo.toml (new file, 23 lines)
@@ -0,0 +1,23 @@
[package]
name = "vapora-channels"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage = "https://vapora.dev"
rust-version.workspace = true
description = "Outbound notification channels: Slack, Discord, Telegram"

[dependencies]
reqwest = { workspace = true }
async-trait = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
regex = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["full"] }
wiremock = { workspace = true }
crates/vapora-channels/src/channel.rs (new file, 9 lines)
@@ -0,0 +1,9 @@
use async_trait::async_trait;

use crate::{error::Result, message::Message};

#[async_trait]
pub trait NotificationChannel: Send + Sync {
    fn name(&self) -> &str;
    async fn send(&self, msg: &Message) -> Result<()>;
}
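How implementors plug into this trait can be sketched with a minimal registry-style dispatch over trait objects. This is a synchronous stand-in (the real trait is async and `Message` is the crate's own type; `MockChannel` here is hypothetical):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Simplified stand-ins for the crate's Message and NotificationChannel.
struct Message {
    title: String,
}

trait NotificationChannel {
    fn name(&self) -> &str;
    fn send(&self, msg: &Message) -> Result<(), String>;
}

// A mock channel that records sent titles instead of doing HTTP.
struct MockChannel {
    name: String,
    sent: RefCell<Vec<String>>,
}

impl NotificationChannel for MockChannel {
    fn name(&self) -> &str {
        &self.name
    }
    fn send(&self, msg: &Message) -> Result<(), String> {
        self.sent.borrow_mut().push(msg.title.clone());
        Ok(())
    }
}

fn main() {
    // Registry-style routing: trait objects keyed by channel name.
    let mock = MockChannel { name: "team-slack".into(), sent: RefCell::new(Vec::new()) };
    let mut channels: HashMap<String, Box<dyn NotificationChannel>> = HashMap::new();
    channels.insert(mock.name().to_string(), Box::new(mock));

    let msg = Message { title: "Task completed".into() };
    channels.get("team-slack").unwrap().send(&msg).unwrap();
}
```

The `Send + Sync` bounds on the real trait exist for the same reason the registry stores `Arc<dyn NotificationChannel>`: channels are shared across async tasks.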
crates/vapora-channels/src/config.rs (new file, 246 lines)
@@ -0,0 +1,246 @@
use std::collections::HashMap;
use std::sync::OnceLock;

use regex::Regex;
use serde::{Deserialize, Serialize};

use crate::error::{ChannelError, Result};

/// Top-level config section; embed under `[channels]` in your TOML.
///
/// Credential fields (`webhook_url`, `bot_token`, etc.) support `${VAR}` and
/// `${VAR:-default}` interpolation. Resolution is performed automatically by
/// [`ChannelRegistry::from_config`] / [`ChannelRegistry::from_map`] via
/// [`ChannelConfig::resolve_secrets`]. Plain literals pass through unchanged.
///
/// ```toml
/// [channels.team-slack]
/// type = "slack"
/// webhook_url = "${SLACK_WEBHOOK_URL}"
///
/// [channels.ops-discord]
/// type = "discord"
/// webhook_url = "${DISCORD_WEBHOOK_URL}"
///
/// [channels.alerts-telegram]
/// type = "telegram"
/// bot_token = "${TELEGRAM_BOT_TOKEN}"
/// chat_id = "${TELEGRAM_CHAT_ID:-100999}"
/// ```
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ChannelsConfig {
    #[serde(default)]
    pub channels: HashMap<String, ChannelConfig>,
}

impl ChannelsConfig {
    /// Resolve all `${VAR}` references in every channel entry.
    ///
    /// Consumes `self` and returns a new `ChannelsConfig` with literals. Fails
    /// on the first channel whose secrets cannot be resolved.
    pub fn resolve_secrets(self) -> Result<Self> {
        let channels = self
            .channels
            .into_iter()
            .map(|(name, cfg)| cfg.resolve_secrets().map(|c| (name, c)))
            .collect::<Result<_>>()?;
        Ok(Self { channels })
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "lowercase")]
pub enum ChannelConfig {
    Slack(SlackConfig),
    Discord(DiscordConfig),
    Telegram(TelegramConfig),
}

impl ChannelConfig {
    /// Resolve `${VAR}` / `${VAR:-default}` references in all credential
    /// fields. Plain string literals are returned unchanged.
    pub fn resolve_secrets(self) -> Result<Self> {
        match self {
            Self::Slack(c) => Ok(Self::Slack(c.resolve_secrets()?)),
            Self::Discord(c) => Ok(Self::Discord(c.resolve_secrets()?)),
            Self::Telegram(c) => Ok(Self::Telegram(c.resolve_secrets()?)),
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlackConfig {
    pub webhook_url: String,
    /// Channel override (e.g. `#alerts`). The webhook already targets a
    /// channel; this overrides it for workspaces that allow it.
    pub channel: Option<String>,
    pub username: Option<String>,
}

impl SlackConfig {
    pub fn resolve_secrets(self) -> Result<Self> {
        Ok(Self {
            webhook_url: interpolate(&self.webhook_url)?,
            channel: self.channel.map(|s| interpolate(&s)).transpose()?,
            username: self.username.map(|s| interpolate(&s)).transpose()?,
        })
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DiscordConfig {
    pub webhook_url: String,
    pub username: Option<String>,
    pub avatar_url: Option<String>,
}

impl DiscordConfig {
    pub fn resolve_secrets(self) -> Result<Self> {
        Ok(Self {
            webhook_url: interpolate(&self.webhook_url)?,
            username: self.username.map(|s| interpolate(&s)).transpose()?,
            avatar_url: self.avatar_url.map(|s| interpolate(&s)).transpose()?,
        })
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TelegramConfig {
    pub bot_token: String,
    /// Numeric chat ID (e.g. `-1001234567890` for a supergroup).
    pub chat_id: String,
    /// Override the Bot API base URL. Leave `None` for production.
    /// Useful for pointing at a local mock server during tests.
    pub api_base: Option<String>,
}

impl TelegramConfig {
    pub fn api_url(&self) -> String {
        let base = self
            .api_base
            .as_deref()
            .unwrap_or("https://api.telegram.org");
        format!("{}/bot{}/sendMessage", base, self.bot_token)
    }

    pub fn resolve_secrets(self) -> Result<Self> {
        Ok(Self {
            bot_token: interpolate(&self.bot_token)?,
            chat_id: interpolate(&self.chat_id)?,
            api_base: self.api_base.map(|s| interpolate(&s)).transpose()?,
        })
    }
}

/// Expand every `${VAR}` / `${VAR:-default}` reference found anywhere in `s`.
///
/// - `${FOO}` → value of `FOO`, error if unset
/// - `${FOO:-bar}` → value of `FOO` if set, `"bar"` otherwise
/// - Anything else → returned unchanged
fn interpolate(s: &str) -> Result<String> {
    static RE: OnceLock<Regex> = OnceLock::new();
    let re = RE.get_or_init(|| {
        // Matches ${VAR} and ${VAR:-default} anywhere in the string.
        Regex::new(r"\$\{([^}:]+)(?::-(.*?))?\}").expect("static regex is valid")
    });

    // Fast path: no placeholder in the string.
    if !s.contains("${") {
        return Ok(s.to_string());
    }

    let mut result = s.to_string();
    for cap in re.captures_iter(s) {
        let full = cap.get(0).unwrap().as_str();
        let var_name = cap.get(1).unwrap().as_str();
        let default = cap.get(2).map(|m| m.as_str());

        let value = match std::env::var(var_name) {
            Ok(v) => v,
            Err(_) => match default {
                Some(d) => d.to_string(),
                None => return Err(ChannelError::SecretNotFound(var_name.to_string())),
            },
        };
        result = result.replace(full, &value);
    }
    Ok(result)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn literal_passthrough() {
        let v = interpolate("https://hooks.slack.com/services/T/B/token").unwrap();
        assert_eq!(v, "https://hooks.slack.com/services/T/B/token");
    }

    #[test]
    fn env_var_resolved() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe { std::env::set_var("TEST_CHANNELS_WEBHOOK", "https://resolved.example.com") };
        let v = interpolate("${TEST_CHANNELS_WEBHOOK}").unwrap();
        assert_eq!(v, "https://resolved.example.com");
    }

    #[test]
    fn env_var_with_default_used_when_unset() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe { std::env::remove_var("TEST_CHANNELS_MISSING") };
        let v = interpolate("${TEST_CHANNELS_MISSING:-fallback-token}").unwrap();
        assert_eq!(v, "fallback-token");
    }

    #[test]
    fn env_var_missing_no_default_errors() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe { std::env::remove_var("TEST_CHANNELS_REQUIRED") };
        let err = interpolate("${TEST_CHANNELS_REQUIRED}").unwrap_err();
        assert!(
            matches!(err, ChannelError::SecretNotFound(ref v) if v == "TEST_CHANNELS_REQUIRED")
        );
    }

    #[test]
    fn partial_interpolation_in_url() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe { std::env::set_var("TEST_CHANNELS_PARTIAL_TOKEN", "abc123") };
        let v =
            interpolate("https://hooks.example.com/services/${TEST_CHANNELS_PARTIAL_TOKEN}/end")
                .unwrap();
        assert_eq!(v, "https://hooks.example.com/services/abc123/end");
    }

    #[test]
    fn slack_config_resolves_secrets() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe { std::env::set_var("TEST_SLACK_WEBHOOK", "https://hooks.slack.com/s/t/b/x") };
        let cfg = SlackConfig {
            webhook_url: "${TEST_SLACK_WEBHOOK}".to_string(),
            channel: Some("#alerts".to_string()),
            username: None,
        };
        let resolved = cfg.resolve_secrets().unwrap();
        assert_eq!(resolved.webhook_url, "https://hooks.slack.com/s/t/b/x");
        assert_eq!(resolved.channel.as_deref(), Some("#alerts"));
    }

    #[test]
    fn telegram_config_resolves_secrets() {
        // SAFETY: single-threaded test process, no concurrent env access.
        unsafe {
            std::env::set_var("TEST_TG_TOKEN", "999:TOKEN");
            std::env::set_var("TEST_TG_CHAT", "-100999");
        }
        let cfg = TelegramConfig {
            bot_token: "${TEST_TG_TOKEN}".to_string(),
            chat_id: "${TEST_TG_CHAT}".to_string(),
            api_base: None,
        };
        let resolved = cfg.resolve_secrets().unwrap();
        assert_eq!(resolved.bot_token, "999:TOKEN");
        assert_eq!(resolved.chat_id, "-100999");
    }
}
crates/vapora-channels/src/discord.rs (new file, 154 lines)
@@ -0,0 +1,154 @@
use async_trait::async_trait;
use reqwest::Client;
use serde_json::{json, Value};
use tracing::instrument;

use crate::{
    channel::NotificationChannel,
    config::DiscordConfig,
    error::{ChannelError, Result},
    message::Message,
};

pub struct DiscordChannel {
    name: String,
    config: DiscordConfig,
    client: Client,
}

impl DiscordChannel {
    pub fn new(name: impl Into<String>, config: DiscordConfig, client: Client) -> Self {
        Self {
            name: name.into(),
            config,
            client,
        }
    }
}

/// Builds the Discord webhook JSON payload from a message.
pub(crate) fn build_payload(
    msg: &Message,
    username_override: Option<&str>,
    avatar_url: Option<&str>,
) -> Value {
    let fields: Vec<Value> = msg
        .metadata
        .iter()
        .map(|(k, v)| {
            json!({
                "name": k,
                "value": v,
                "inline": true
            })
        })
        .collect();

    let mut payload = json!({
        "embeds": [{
            "title": msg.title,
            "description": msg.body,
            "color": msg.level.discord_color(),
            "fields": fields,
            "footer": { "text": "vapora" }
        }]
    });

    if let Some(u) = username_override {
        payload["username"] = json!(u);
    }
    if let Some(av) = avatar_url {
        payload["avatar_url"] = json!(av);
    }

    payload
}

#[async_trait]
impl NotificationChannel for DiscordChannel {
    fn name(&self) -> &str {
        &self.name
    }

    #[instrument(skip(self, msg), fields(channel = %self.name))]
    async fn send(&self, msg: &Message) -> Result<()> {
        let payload = build_payload(
            msg,
            self.config.username.as_deref(),
            self.config.avatar_url.as_deref(),
        );

        let resp = self
            .client
            .post(&self.config.webhook_url)
            .json(&payload)
            .send()
            .await
            .map_err(|e| ChannelError::HttpError {
                channel: self.name.clone(),
                source: e,
            })?;

        // Discord returns 204 No Content on success.
        if !resp.status().is_success() {
            let status = resp.status().as_u16();
            let body = resp.text().await.unwrap_or_default();
            return Err(ChannelError::ApiError {
                channel: self.name.clone(),
                status,
                body,
            });
        }

        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_message_uses_red_discord_color() {
        let msg = Message::error("Service down", "Health check failed");
        let payload = build_payload(&msg, None, None);
        assert_eq!(
            payload["embeds"][0]["color"].as_u64().unwrap(),
            0xcc0000_u64
        );
    }

    #[test]
    fn success_message_uses_green_discord_color() {
        let msg = Message::success("Deploy complete", "v1.2.0");
        let payload = build_payload(&msg, None, None);
        assert_eq!(
            payload["embeds"][0]["color"].as_u64().unwrap(),
            0x36a64f_u64
        );
    }

    #[test]
    fn metadata_maps_to_inline_fields() {
        let msg = Message::info("Test", "Body").with_metadata("region", "eu-west-1");
        let payload = build_payload(&msg, None, None);
        let fields = payload["embeds"][0]["fields"].as_array().unwrap();
        assert_eq!(fields.len(), 1);
        assert_eq!(fields[0]["inline"], json!(true));
    }

    #[test]
    fn username_and_avatar_appear_at_top_level() {
        let msg = Message::info("Test", "Body");
        let payload = build_payload(
            &msg,
            Some("vapora-bot"),
            Some("https://example.com/avatar.png"),
        );
        assert_eq!(payload["username"], json!("vapora-bot"));
        assert_eq!(
            payload["avatar_url"],
            json!("https://example.com/avatar.png")
        );
    }
}
crates/vapora-channels/src/error.rs (new file, 31 lines)
@@ -0,0 +1,31 @@
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ChannelError {
    #[error("HTTP request failed for channel '{channel}': {source}")]
    HttpError {
        channel: String,
        #[source]
        source: reqwest::Error,
    },

    #[error("Channel '{0}' not found in registry")]
    NotFound(String),

    #[error("Channel '{channel}' returned non-success status {status}: {body}")]
    ApiError {
        channel: String,
        status: u16,
        body: String,
    },

    #[error("Failed to build HTTP client: {0}")]
    HttpClientBuild(String),

    /// Raised when a `${VAR}` reference is present in config but the env var
    /// is not set and no `:-default` was provided.
    #[error("Secret reference '${{{0}}}' not resolved: env var not set and no default provided")]
    SecretNotFound(String),
}

pub type Result<T> = std::result::Result<T, ChannelError>;
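The `SecretNotFound` display string depends on brace escaping: in a format string, `{{` and `}}` are literal braces, so `'${{{0}}}'` renders as `'${VAR}'` with the field interpolated between real braces (the original quadruple-braced form would have printed `${{0}}` verbatim). The rule can be checked in isolation with plain `format!`:

```rust
fn main() {
    let var = "SLACK_WEBHOOK_URL";
    // '$' + '{{' (literal '{') + '{var}' (interpolated) + '}}' (literal '}')
    let msg = format!("Secret reference '${{{var}}}' not resolved");
    assert_eq!(msg, "Secret reference '${SLACK_WEBHOOK_URL}' not resolved");
}
```

`thiserror` reuses standard `format!` syntax in `#[error(...)]`, so the same escaping applies there.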
crates/vapora-channels/src/lib.rs (new file, 54 lines)
@@ -0,0 +1,54 @@
//! Outbound notification channels for VAPORA.
//!
//! Delivers workflow events and agent completion signals to external team
//! communication platforms. All three providers use HTTP webhooks / Bot API —
//! no vendor SDKs are required.
//!
//! # Supported Channels
//!
//! - **Slack** — Incoming Webhooks (POST JSON with `attachments`)
//! - **Discord** — Incoming Webhooks (POST JSON with `embeds`, 204 response)
//! - **Telegram** — Bot API `sendMessage` with HTML `parse_mode`
//!
//! # Quick Start
//!
//! ```toml
//! # vapora.toml (under your [channels] section)
//! [channels.team-slack]
//! type = "slack"
//! webhook_url = "https://hooks.slack.com/services/…"
//!
//! [channels.ops-discord]
//! type = "discord"
//! webhook_url = "https://discord.com/api/webhooks/…"
//!
//! [channels.alerts]
//! type = "telegram"
//! bot_token = "123456:ABC-DEF…"
//! chat_id = "-1001234567890"
//! ```
//!
//! ```rust,ignore
//! let config: ChannelsConfig = toml::from_str(toml_str)?;
//! let registry = ChannelRegistry::from_config(config)?;
//!
//! registry.send("team-slack", Message::success(
//!     "Deploy complete",
//!     "v1.2.0 is live on production",
//! )).await?;
//! ```

pub mod channel;
pub mod config;
pub mod discord;
pub mod error;
pub mod message;
pub mod registry;
pub mod slack;
pub mod telegram;

pub use channel::NotificationChannel;
pub use config::{ChannelConfig, ChannelsConfig, DiscordConfig, SlackConfig, TelegramConfig};
pub use error::{ChannelError, Result};
pub use message::{Message, MessageLevel};
pub use registry::ChannelRegistry;
crates/vapora-channels/src/message.rs (new file, 127 lines)
@@ -0,0 +1,127 @@
use std::collections::HashMap;

use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum MessageLevel {
    Info,
    Success,
    Warning,
    Error,
}

impl MessageLevel {
    /// Slack attachment hex color string.
    pub fn slack_color(self) -> &'static str {
        match self {
            Self::Info => "#0099ff",
            Self::Success => "#36a64f",
            Self::Warning => "#ffcc00",
            Self::Error => "#cc0000",
        }
    }

    /// Discord embed color as 0xRRGGBB integer.
    pub fn discord_color(self) -> u32 {
        match self {
            Self::Info => 0x0099ff,
            Self::Success => 0x36a64f,
            Self::Warning => 0xffcc00,
            Self::Error => 0xcc0000,
        }
    }

    /// Unicode emoji prefix for plain-text formats.
    pub fn emoji(self) -> &'static str {
        match self {
            Self::Info => "ℹ️",
            Self::Success => "✅",
            Self::Warning => "⚠️",
            Self::Error => "🔴",
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Message {
    pub title: String,
    pub body: String,
    pub level: MessageLevel,
    #[serde(default)]
    pub metadata: HashMap<String, String>,
}

impl Message {
    pub fn new(title: impl Into<String>, body: impl Into<String>, level: MessageLevel) -> Self {
        Self {
            title: title.into(),
            body: body.into(),
            level,
            metadata: HashMap::new(),
        }
    }

    pub fn info(title: impl Into<String>, body: impl Into<String>) -> Self {
        Self::new(title, body, MessageLevel::Info)
    }

    pub fn success(title: impl Into<String>, body: impl Into<String>) -> Self {
        Self::new(title, body, MessageLevel::Success)
    }

    pub fn warning(title: impl Into<String>, body: impl Into<String>) -> Self {
        Self::new(title, body, MessageLevel::Warning)
    }

    pub fn error(title: impl Into<String>, body: impl Into<String>) -> Self {
        Self::new(title, body, MessageLevel::Error)
    }

    pub fn with_metadata(mut self, key: impl Into<String>, value: impl Into<String>) -> Self {
        self.metadata.insert(key.into(), value.into());
        self
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn level_colors_are_distinct() {
        let levels = [
            MessageLevel::Info,
            MessageLevel::Success,
            MessageLevel::Warning,
            MessageLevel::Error,
        ];
        let slack_colors: Vec<_> = levels.iter().map(|l| l.slack_color()).collect();
        let discord_colors: Vec<_> = levels.iter().map(|l| l.discord_color()).collect();
        // All four Slack colors are unique. Sort before dedup: `dedup` only
        // removes adjacent duplicates.
        let mut deduped = slack_colors.clone();
        deduped.sort();
        deduped.dedup();
        assert_eq!(deduped.len(), 4);
        // All four Discord colors are unique.
        let mut deduped = discord_colors.clone();
        deduped.sort();
        deduped.dedup();
        assert_eq!(deduped.len(), 4);
    }

    #[test]
    fn constructors_set_correct_level() {
        assert_eq!(Message::info("t", "b").level, MessageLevel::Info);
        assert_eq!(Message::success("t", "b").level, MessageLevel::Success);
        assert_eq!(Message::warning("t", "b").level, MessageLevel::Warning);
        assert_eq!(Message::error("t", "b").level, MessageLevel::Error);
    }

    #[test]
    fn with_metadata_is_additive() {
        let msg = Message::info("t", "b")
            .with_metadata("env", "prod")
            .with_metadata("region", "eu-west-1");
        assert_eq!(msg.metadata.len(), 2);
        assert_eq!(msg.metadata["env"], "prod");
    }
}
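The Slack hex strings and Discord integers above encode the same four-color palette in two notations. A quick standalone check of that invariant (constants copied from the `match` arms above):

```rust
fn main() {
    // (slack hex string, discord integer) pairs from MessageLevel's match arms.
    let pairs: [(&str, u32); 4] = [
        ("#0099ff", 0x0099ff), // Info
        ("#36a64f", 0x36a64f), // Success
        ("#ffcc00", 0xffcc00), // Warning
        ("#cc0000", 0xcc0000), // Error
    ];
    for (hex, rgb) in pairs {
        // Strip the leading '#' and parse the remainder as base-16.
        let parsed = u32::from_str_radix(&hex[1..], 16).unwrap();
        assert_eq!(parsed, rgb);
    }
}
```

Keeping both representations in one enum (rather than deriving one from the other at runtime) trades a little duplication for zero-cost `&'static str` / `u32` lookups.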
crates/vapora-channels/src/registry.rs (new file, 122 lines; excerpt truncated below)
@@ -0,0 +1,122 @@
use std::{collections::HashMap, sync::Arc, time::Duration};

use tracing::{error, instrument};

use crate::{
    channel::NotificationChannel,
    config::{ChannelConfig, ChannelsConfig},
    discord::DiscordChannel,
    error::{ChannelError, Result},
    message::Message,
    slack::SlackChannel,
    telegram::TelegramChannel,
};

/// Routes outbound notifications to named channels.
///
/// Each channel is addressed by the name given in the config (e.g.
/// `"team-slack"`, `"ops-discord"`). `send` delivers to one channel;
/// `broadcast` fans out to all registered channels.
pub struct ChannelRegistry {
    channels: HashMap<String, Arc<dyn NotificationChannel>>,
}

impl ChannelRegistry {
    /// Creates an empty registry. Use `register` to add channels individually
    /// (e.g. in tests with pre-built clients).
    pub fn new() -> Self {
        Self {
            channels: HashMap::new(),
        }
    }

    /// Builds all channels from `config` with a shared `reqwest::Client`.
    ///
    /// The client is configured with a 10 s timeout and a `vapora-channels`
    /// User-Agent. Fails if the TLS backend cannot be initialised.
    pub fn from_config(config: ChannelsConfig) -> Result<Self> {
        let client = reqwest::Client::builder()
            .timeout(Duration::from_secs(10))
            .user_agent(concat!("vapora-channels/", env!("CARGO_PKG_VERSION")))
            .build()
            .map_err(|e| ChannelError::HttpClientBuild(e.to_string()))?;

        // Resolve ${VAR} references in every channel's credential fields
        // before constructing any channel — this is the single mandatory
        // call site for secret resolution.
        let config = config.resolve_secrets()?;

        let mut registry = Self::new();
        for (name, ch_config) in config.channels {
            let channel: Arc<dyn NotificationChannel> = match ch_config {
                ChannelConfig::Slack(cfg) => {
                    Arc::new(SlackChannel::new(name.clone(), cfg, client.clone()))
                }
                ChannelConfig::Discord(cfg) => {
                    Arc::new(DiscordChannel::new(name.clone(), cfg, client.clone()))
                }
                ChannelConfig::Telegram(cfg) => {
                    Arc::new(TelegramChannel::new(name.clone(), cfg, client.clone()))
                }
            };
            registry.channels.insert(name, channel);
        }
        Ok(registry)
    }

    /// Builds all channels from a flat map, creating a shared
    /// `reqwest::Client`.
    ///
    /// Equivalent to wrapping the map in `ChannelsConfig` and calling
    /// `from_config`. Use this when you hold the channel entries directly
    /// (e.g. from `WorkflowsConfig.channels`).
    pub fn from_map(channels: HashMap<String, ChannelConfig>) -> Result<Self> {
        Self::from_config(ChannelsConfig { channels })
    }

    /// Registers an already-constructed channel implementation.
    pub fn register(&mut self, channel: Arc<dyn NotificationChannel>) -> &mut Self {
        self.channels.insert(channel.name().to_string(), channel);
        self
    }

    /// Sends `msg` to a single channel identified by `name`.
    #[instrument(skip(self, msg), fields(channel = %name))]
    pub async fn send(&self, name: &str, msg: Message) -> Result<()> {
        let channel = self
            .channels
            .get(name)
            .ok_or_else(|| ChannelError::NotFound(name.to_string()))?;
        channel.send(&msg).await
    }

    /// Sends `msg` to every registered channel sequentially.
    ///
    /// Returns a `Vec` of `(channel_name, Result)` — failures do not abort
|
||||||
|
/// delivery to remaining channels.
|
||||||
|
pub async fn broadcast(&self, msg: Message) -> Vec<(String, Result<()>)> {
|
||||||
|
let mut results = Vec::with_capacity(self.channels.len());
|
||||||
|
for (name, channel) in &self.channels {
|
||||||
|
let result = channel.send(&msg).await;
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
error!(channel = %name, error = %e, "Broadcast delivery failed");
|
||||||
|
}
|
||||||
|
results.push((name.clone(), result));
|
||||||
|
}
|
||||||
|
results
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Returns the names of all registered channels.
|
||||||
|
pub fn channel_names(&self) -> Vec<&str> {
|
||||||
|
self.channels.keys().map(String::as_str).collect()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn is_empty(&self) -> bool {
|
||||||
|
self.channels.is_empty()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Default for ChannelRegistry {
|
||||||
|
fn default() -> Self {
|
||||||
|
Self::new()
|
||||||
|
}
|
||||||
|
}
|
||||||
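`resolve_secrets` is called above but defined elsewhere in the crate. Below is a minimal, self-contained sketch of the `${VAR}` / `${VAR:-default}` interpolation it is described as performing. The function name, the lookup closure (standing in for `std::env::var`), and the leave-the-literal fallback are all assumptions for illustration, not the crate's actual implementation:

```rust
/// Hypothetical sketch of `${VAR}` / `${VAR:-default}` interpolation.
/// `lookup` abstracts the environment so the logic is testable without
/// mutating process state.
fn interpolate<F: Fn(&str) -> Option<String>>(input: &str, lookup: F) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        let Some(end) = after.find('}') else {
            // Unterminated reference: keep the remainder verbatim.
            out.push_str(&rest[start..]);
            return out;
        };
        let expr = &after[..end];
        // `${VAR:-default}` falls back when VAR is unset or empty.
        let (name, default) = match expr.split_once(":-") {
            Some((n, d)) => (n, Some(d)),
            None => (expr, None),
        };
        match lookup(name).filter(|v| !v.is_empty()) {
            Some(v) => out.push_str(&v),
            None => match default {
                Some(d) => out.push_str(d),
                // Unset and no default: leave the literal `${VAR}` in place
                // (the real resolver may instead return an error here).
                None => out.push_str(&rest[start..start + end + 3]),
            },
        }
        rest = &after[end + 1..];
    }
    out.push_str(rest);
    out
}

fn main() {
    let env = |k: &str| (k == "SLACK_HOOK").then(|| "https://hooks.example/T0".to_string());
    println!("{}", interpolate("url = ${SLACK_HOOK}", env));
    println!("{}", interpolate("chat = ${CHAT_ID:-@ops}", env));
    println!("{}", interpolate("token = ${MISSING}", env));
}
```

Taking a lookup closure rather than reading the process environment directly keeps the parser deterministic under test, which matters given that interpolation is mandatory inside `from_config`.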
148 crates/vapora-channels/src/slack.rs Normal file
@@ -0,0 +1,148 @@
use async_trait::async_trait;
use reqwest::Client;
use serde_json::{json, Value};
use tracing::instrument;

use crate::{
    channel::NotificationChannel,
    config::SlackConfig,
    error::{ChannelError, Result},
    message::Message,
};

pub struct SlackChannel {
    name: String,
    config: SlackConfig,
    client: Client,
}

impl SlackChannel {
    pub fn new(name: impl Into<String>, config: SlackConfig, client: Client) -> Self {
        Self {
            name: name.into(),
            config,
            client,
        }
    }
}

/// Builds the Slack webhook JSON payload from a message.
///
/// Extracted as a free function so payload shape can be unit-tested without
/// mocking HTTP.
pub(crate) fn build_payload(
    msg: &Message,
    channel_override: Option<&str>,
    username_override: Option<&str>,
) -> Value {
    let fields: Vec<Value> = msg
        .metadata
        .iter()
        .map(|(k, v)| {
            json!({
                "title": k,
                "value": v,
                "short": true
            })
        })
        .collect();

    let mut payload = json!({
        "attachments": [{
            "color": msg.level.slack_color(),
            "title": msg.title,
            "text": msg.body,
            "footer": "vapora",
            "fields": fields
        }]
    });

    if let Some(ch) = channel_override {
        payload["channel"] = json!(ch);
    }
    if let Some(u) = username_override {
        payload["username"] = json!(u);
    }

    payload
}

#[async_trait]
impl NotificationChannel for SlackChannel {
    fn name(&self) -> &str {
        &self.name
    }

    #[instrument(skip(self, msg), fields(channel = %self.name))]
    async fn send(&self, msg: &Message) -> Result<()> {
        let payload = build_payload(
            msg,
            self.config.channel.as_deref(),
            self.config.username.as_deref(),
        );

        let resp = self
            .client
            .post(&self.config.webhook_url)
            .json(&payload)
            .send()
            .await
            .map_err(|e| ChannelError::HttpError {
                channel: self.name.clone(),
                source: e,
            })?;

        if !resp.status().is_success() {
            let status = resp.status().as_u16();
            let body = resp.text().await.unwrap_or_default();
            return Err(ChannelError::ApiError {
                channel: self.name.clone(),
                status,
                body,
            });
        }

        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_message_uses_red_color() {
        let msg = Message::error("Deploy failed", "Rollback triggered");
        let payload = build_payload(&msg, None, None);
        assert_eq!(
            payload["attachments"][0]["color"].as_str().unwrap(),
            "#cc0000"
        );
    }

    #[test]
    fn metadata_maps_to_fields_array() {
        let msg = Message::info("Test", "Body")
            .with_metadata("env", "production")
            .with_metadata("version", "1.2.0");
        let payload = build_payload(&msg, None, None);
        let fields = payload["attachments"][0]["fields"].as_array().unwrap();
        assert_eq!(fields.len(), 2);
    }

    #[test]
    fn channel_override_appears_at_top_level() {
        let msg = Message::info("Test", "Body");
        let payload = build_payload(&msg, Some("#alerts"), Some("vapora-bot"));
        assert_eq!(payload["channel"], json!("#alerts"));
        assert_eq!(payload["username"], json!("vapora-bot"));
    }

    #[test]
    fn no_overrides_leaves_keys_absent() {
        let msg = Message::info("Test", "Body");
        let payload = build_payload(&msg, None, None);
        assert!(payload.get("channel").is_none());
        assert!(payload.get("username").is_none());
    }
}
178 crates/vapora-channels/src/telegram.rs Normal file
@@ -0,0 +1,178 @@
use async_trait::async_trait;
use reqwest::Client;
use serde_json::{json, Value};
use tracing::instrument;

use crate::{
    channel::NotificationChannel,
    config::TelegramConfig,
    error::{ChannelError, Result},
    message::Message,
};

pub struct TelegramChannel {
    name: String,
    config: TelegramConfig,
    client: Client,
}

impl TelegramChannel {
    pub fn new(name: impl Into<String>, config: TelegramConfig, client: Client) -> Self {
        Self {
            name: name.into(),
            config,
            client,
        }
    }
}

/// Escapes the three characters that have meaning in Telegram HTML mode.
fn html_escape(s: &str) -> String {
    // Telegram HTML supports <b>, <i>, <code>, <pre>, <a>.
    // Only &, < and > need escaping.
    let mut out = String::with_capacity(s.len());
    for ch in s.chars() {
        match ch {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            other => out.push(other),
        }
    }
    out
}

/// Builds the Telegram `sendMessage` JSON body.
pub(crate) fn build_payload(msg: &Message, chat_id: &str) -> Value {
    let mut text = format!(
        "<b>{} {}</b>\n\n{}",
        msg.level.emoji(),
        html_escape(&msg.title),
        html_escape(&msg.body),
    );

    if !msg.metadata.is_empty() {
        text.push('\n');
        for (k, v) in &msg.metadata {
            text.push_str(&format!("\n<b>{}</b>: {}", html_escape(k), html_escape(v)));
        }
    }

    json!({
        "chat_id": chat_id,
        "text": text,
        "parse_mode": "HTML"
    })
}

#[async_trait]
impl NotificationChannel for TelegramChannel {
    fn name(&self) -> &str {
        &self.name
    }

    #[instrument(skip(self, msg), fields(channel = %self.name))]
    async fn send(&self, msg: &Message) -> Result<()> {
        let payload = build_payload(msg, &self.config.chat_id);

        let resp = self
            .client
            .post(self.config.api_url())
            .json(&payload)
            .send()
            .await
            .map_err(|e| ChannelError::HttpError {
                channel: self.name.clone(),
                source: e,
            })?;

        if !resp.status().is_success() {
            let status = resp.status().as_u16();
            let body = resp.text().await.unwrap_or_default();
            return Err(ChannelError::ApiError {
                channel: self.name.clone(),
                status,
                body,
            });
        }

        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn html_escape_handles_all_three_special_chars() {
        assert_eq!(html_escape("a < b & c > d"), "a &lt; b &amp; c &gt; d");
    }

    #[test]
    fn html_escape_is_noop_for_plain_text() {
        let plain = "Deploy complete: v1.2.0 to production";
        assert_eq!(html_escape(plain), plain);
    }

    #[test]
    fn payload_uses_html_parse_mode() {
        let msg = Message::info("Test", "Body");
        let payload = build_payload(&msg, "-100123456");
        assert_eq!(payload["parse_mode"], json!("HTML"));
        assert_eq!(payload["chat_id"], json!("-100123456"));
    }

    #[test]
    fn payload_contains_emoji_and_bold_title() {
        let msg = Message::error("Service down", "Health check failed");
        let payload = build_payload(&msg, "-100");
        let text = payload["text"].as_str().unwrap();
        assert!(text.contains("🔴"));
        assert!(text.contains("<b>"));
        assert!(text.contains("Service down"));
    }

    #[test]
    fn payload_escapes_html_in_title_and_body() {
        let msg = Message::warning("a < b", "x & y > z");
        let payload = build_payload(&msg, "-100");
        let text = payload["text"].as_str().unwrap();
        assert!(text.contains("a &lt; b"));
        assert!(text.contains("x &amp; y &gt; z"));
    }

    #[test]
    fn metadata_appended_as_bold_key_value_lines() {
        let msg = Message::info("Test", "Body").with_metadata("env", "prod");
        let payload = build_payload(&msg, "-100");
        let text = payload["text"].as_str().unwrap();
        assert!(text.contains("<b>env</b>: prod"));
    }

    #[test]
    fn api_url_default_base() {
        let cfg = TelegramConfig {
            bot_token: "123:ABC".to_string(),
            chat_id: "-100".to_string(),
            api_base: None,
        };
        assert_eq!(
            cfg.api_url(),
            "https://api.telegram.org/bot123:ABC/sendMessage"
        );
    }

    #[test]
    fn api_url_custom_base_for_testing() {
        let cfg = TelegramConfig {
            bot_token: "123:ABC".to_string(),
            chat_id: "-100".to_string(),
            api_base: Some("http://localhost:8080".to_string()),
        };
        assert_eq!(
            cfg.api_url(),
            "http://localhost:8080/bot123:ABC/sendMessage"
        );
    }
}
285 crates/vapora-channels/tests/integration.rs Normal file
@@ -0,0 +1,285 @@
use std::sync::Arc;

use async_trait::async_trait;
use vapora_channels::{
    config::{DiscordConfig, SlackConfig, TelegramConfig},
    discord::DiscordChannel,
    error::ChannelError,
    message::Message,
    registry::ChannelRegistry,
    slack::SlackChannel,
    telegram::TelegramChannel,
    NotificationChannel, Result,
};
use wiremock::{
    matchers::{method, path},
    Mock, MockServer, ResponseTemplate,
};

// ── Slack ─────────────────────────────────────────────────────────────────────

#[tokio::test]
async fn slack_send_returns_ok_on_200() {
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/hooks/slack"))
        .respond_with(ResponseTemplate::new(200).set_body_string("ok"))
        .mount(&server)
        .await;

    let cfg = SlackConfig {
        webhook_url: format!("{}/hooks/slack", server.uri()),
        channel: None,
        username: None,
    };
    let channel = SlackChannel::new("slack", cfg, reqwest::Client::new());
    let msg = Message::success("Deploy complete", "v1.2.0 → production");

    channel.send(&msg).await.expect("should succeed on 200");
}

#[tokio::test]
async fn slack_send_returns_api_error_on_500() {
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/hooks/slack"))
        .respond_with(ResponseTemplate::new(500).set_body_string("internal_error"))
        .mount(&server)
        .await;

    let cfg = SlackConfig {
        webhook_url: format!("{}/hooks/slack", server.uri()),
        channel: None,
        username: None,
    };
    let channel = SlackChannel::new("slack", cfg, reqwest::Client::new());
    let err = channel
        .send(&Message::info("Test", "Body"))
        .await
        .unwrap_err();

    assert!(
        matches!(
            err,
            ChannelError::ApiError {
                status: 500,
                ref body,
                ..
            } if body == "internal_error"
        ),
        "unexpected error variant: {err}"
    );
}

// ── Discord ───────────────────────────────────────────────────────────────────

#[tokio::test]
async fn discord_send_returns_ok_on_204() {
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/webhooks/discord"))
        .respond_with(ResponseTemplate::new(204))
        .mount(&server)
        .await;

    let cfg = DiscordConfig {
        webhook_url: format!("{}/webhooks/discord", server.uri()),
        username: None,
        avatar_url: None,
    };
    let channel = DiscordChannel::new("discord", cfg, reqwest::Client::new());

    channel
        .send(&Message::warning("High latency", "p99 > 500 ms"))
        .await
        .expect("should succeed on 204");
}

#[tokio::test]
async fn discord_send_returns_api_error_on_400() {
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/webhooks/discord"))
        .respond_with(ResponseTemplate::new(400).set_body_string("{\"code\":50006}"))
        .mount(&server)
        .await;

    let cfg = DiscordConfig {
        webhook_url: format!("{}/webhooks/discord", server.uri()),
        username: None,
        avatar_url: None,
    };
    let channel = DiscordChannel::new("discord", cfg, reqwest::Client::new());
    let err = channel
        .send(&Message::info("Test", "Body"))
        .await
        .unwrap_err();

    assert!(
        matches!(err, ChannelError::ApiError { status: 400, .. }),
        "unexpected error variant: {err}"
    );
}

// ── Telegram ──────────────────────────────────────────────────────────────────

#[tokio::test]
async fn telegram_send_returns_ok_on_200() {
    let server = MockServer::start().await;

    // Telegram returns {"ok": true, "result": {...}} with HTTP 200.
    Mock::given(method("POST"))
        .and(path("/botTEST_TOKEN/sendMessage"))
        .respond_with(
            ResponseTemplate::new(200).set_body_string(r#"{"ok":true,"result":{"message_id":1}}"#),
        )
        .mount(&server)
        .await;

    let cfg = TelegramConfig {
        bot_token: "TEST_TOKEN".to_string(),
        chat_id: "-100999".to_string(),
        api_base: Some(server.uri()),
    };
    let channel = TelegramChannel::new("telegram", cfg, reqwest::Client::new());

    channel
        .send(&Message::error("Service down", "Critical alert"))
        .await
        .expect("should succeed on 200");
}

#[tokio::test]
async fn telegram_send_returns_api_error_on_400() {
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/botBAD_TOKEN/sendMessage"))
        .respond_with(
            ResponseTemplate::new(400)
                .set_body_string(r#"{"ok":false,"description":"Unauthorized"}"#),
        )
        .mount(&server)
        .await;

    let cfg = TelegramConfig {
        bot_token: "BAD_TOKEN".to_string(),
        chat_id: "-100".to_string(),
        api_base: Some(server.uri()),
    };
    let channel = TelegramChannel::new("telegram", cfg, reqwest::Client::new());
    let err = channel
        .send(&Message::info("Test", "Body"))
        .await
        .unwrap_err();

    assert!(
        matches!(err, ChannelError::ApiError { status: 400, .. }),
        "unexpected error variant: {err}"
    );
}

// ── Registry ──────────────────────────────────────────────────────────────────

struct AlwaysOkChannel {
    name: String,
}

#[async_trait]
impl NotificationChannel for AlwaysOkChannel {
    fn name(&self) -> &str {
        &self.name
    }
    async fn send(&self, _msg: &Message) -> Result<()> {
        Ok(())
    }
}

struct AlwaysFailChannel {
    name: String,
}

#[async_trait]
impl NotificationChannel for AlwaysFailChannel {
    fn name(&self) -> &str {
        &self.name
    }
    async fn send(&self, _msg: &Message) -> Result<()> {
        Err(ChannelError::ApiError {
            channel: self.name.clone(),
            status: 503,
            body: "unavailable".to_string(),
        })
    }
}

#[tokio::test]
async fn registry_send_routes_to_named_channel() {
    let mut registry = ChannelRegistry::new();
    registry.register(Arc::new(AlwaysOkChannel {
        name: "ok-channel".to_string(),
    }));

    registry
        .send("ok-channel", Message::info("Test", "Body"))
        .await
        .expect("should route to ok-channel and succeed");
}

#[tokio::test]
async fn registry_send_returns_not_found_for_unknown_channel() {
    let registry = ChannelRegistry::new();
    let err = registry
        .send("does-not-exist", Message::info("Test", "Body"))
        .await
        .unwrap_err();

    assert!(
        matches!(err, ChannelError::NotFound(ref n) if n == "does-not-exist"),
        "unexpected error: {err}"
    );
}

#[tokio::test]
async fn registry_broadcast_delivers_to_all_channels() {
    let mut registry = ChannelRegistry::new();
    registry.register(Arc::new(AlwaysOkChannel {
        name: "ch-a".to_string(),
    }));
    registry.register(Arc::new(AlwaysOkChannel {
        name: "ch-b".to_string(),
    }));

    let results = registry
        .broadcast(Message::success("All systems green", ""))
        .await;

    assert_eq!(results.len(), 2);
    assert!(results.iter().all(|(_, r)| r.is_ok()));
}

#[tokio::test]
async fn registry_broadcast_continues_after_partial_failure() {
    let mut registry = ChannelRegistry::new();
    registry.register(Arc::new(AlwaysOkChannel {
        name: "good".to_string(),
    }));
    registry.register(Arc::new(AlwaysFailChannel {
        name: "bad".to_string(),
    }));

    let results = registry.broadcast(Message::info("Test", "Body")).await;

    assert_eq!(results.len(), 2);
    let ok_count = results.iter().filter(|(_, r)| r.is_ok()).count();
    let err_count = results.iter().filter(|(_, r)| r.is_err()).count();
    assert_eq!(ok_count, 1);
    assert_eq!(err_count, 1);
}
@@ -12,6 +12,7 @@ categories.workspace = true

 [dependencies]
 vapora-shared = { workspace = true }
+vapora-channels = { workspace = true }
 vapora-swarm = { workspace = true }
 vapora-agents = { workspace = true }
 vapora-knowledge-graph = { workspace = true }

@@ -2,6 +2,7 @@ use std::path::Path;
 use std::str::FromStr;

 use serde::{Deserialize, Serialize};
+use vapora_channels::config::ChannelConfig;

 use crate::error::{ConfigError, Result};

@@ -9,6 +10,10 @@ use crate::error::{ConfigError, Result};
 pub struct WorkflowsConfig {
     pub engine: EngineConfig,
     pub workflows: Vec<WorkflowConfig>,
+    /// Outbound notification channels keyed by name. Absent from TOML → no
+    /// notifications sent. Each entry becomes a channel in `ChannelRegistry`.
+    #[serde(default)]
+    pub channels: std::collections::HashMap<String, ChannelConfig>,
 }

 #[derive(Debug, Clone, Deserialize)]
@@ -20,6 +25,30 @@ pub struct EngineConfig {
     pub cedar_policy_dir: Option<String>,
 }

+/// Per-workflow notification targets, keyed by event type.
+///
+/// Each field is a list of channel names registered in `[channels]`.
+///
+/// ```toml
+/// [[workflows]]
+/// name = "deploy-prod"
+/// trigger = "schedule"
+///
+/// [workflows.notifications]
+/// on_completed = ["team-slack"]
+/// on_failed = ["team-slack", "ops-telegram"]
+/// on_approval_required = ["team-slack"]
+/// ```
+#[derive(Debug, Clone, Default, Serialize, Deserialize)]
+pub struct WorkflowNotifications {
+    #[serde(default)]
+    pub on_completed: Vec<String>,
+    #[serde(default)]
+    pub on_failed: Vec<String>,
+    #[serde(default)]
+    pub on_approval_required: Vec<String>,
+}
+
 #[derive(Debug, Clone, Deserialize)]
 pub struct WorkflowConfig {
     pub name: String,
@@ -27,6 +56,8 @@ pub struct WorkflowConfig {
     pub stages: Vec<StageConfig>,
     #[serde(default)]
     pub schedule: Option<ScheduleConfig>,
+    #[serde(default)]
+    pub notifications: WorkflowNotifications,
 }

 /// Cron-based scheduling configuration for `trigger = "schedule"` workflows.
@@ -78,7 +109,7 @@ impl WorkflowsConfig {
         Ok(config)
     }

-    fn validate(&self) -> Result<()> {
+    pub fn validate(&self) -> Result<()> {
         if self.workflows.is_empty() {
             return Err(ConfigError::Invalid("No workflows defined".to_string()).into());
         }
@@ -207,6 +238,7 @@ approval_required = false
                 cedar_policy_dir: None,
             },
             workflows: vec![],
+            channels: std::collections::HashMap::new(),
         };

         assert!(config.validate().is_err());
@@ -227,6 +227,7 @@ mod tests {
                 },
             ],
             schedule: None,
+            notifications: Default::default(),
         }
     }

@@ -9,6 +9,7 @@ use surrealdb::Surreal;
 use tokio::sync::watch;
 use tracing::{debug, error, info, warn};
 use vapora_agents::messages::{AgentMessage, TaskCompleted, TaskFailed};
+use vapora_channels::{ChannelRegistry, Message};
 use vapora_knowledge_graph::persistence::KGPersistence;
 use vapora_swarm::coordinator::SwarmCoordinator;

@@ -36,6 +37,8 @@ pub struct WorkflowOrchestrator {
     store: Arc<SurrealWorkflowStore>,
     saga: SagaCompensator,
     cedar: Option<Arc<CedarAuthorizer>>,
+    /// Outbound notification registry. `None` when no channels are configured.
+    channels: Option<Arc<ChannelRegistry>>,
 }

 impl WorkflowOrchestrator {
@@ -68,6 +71,16 @@ impl WorkflowOrchestrator {
             info!("Cedar authorization enabled for workflow stages");
         }

+        let channels = if config.channels.is_empty() {
+            None
+        } else {
+            let count = config.channels.len();
+            let registry = ChannelRegistry::from_map(config.channels.clone())
+                .map_err(|e| WorkflowError::Internal(format!("Channel registry init: {e}")))?;
+            info!(count, "Notification channels registered");
+            Some(Arc::new(registry))
+        };
+
         // Crash recovery: restore active workflows from DB
         let active_workflows = DashMap::new();
         match store.load_active().await {
@@ -96,6 +109,7 @@ impl WorkflowOrchestrator {
             store,
             saga,
             cedar,
+            channels,
         })
     }

@@ -388,13 +402,16 @@ impl WorkflowOrchestrator {
         if should_continue {
             self.execute_current_stage(workflow_id).await?;
         } else {
-            let duration = {
+            let (duration, template_name) = {
                 let instance = self
                     .active_workflows
                     .get(workflow_id)
                     .ok_or_else(|| WorkflowError::WorkflowNotFound(workflow_id.to_string()))?;

-                (Utc::now() - instance.created_at).num_seconds() as f64
+                (
+                    (Utc::now() - instance.created_at).num_seconds() as f64,
+                    instance.template_name.clone(),
+                )
             };

             self.metrics.workflow_duration_seconds.observe(duration);
@@ -408,6 +425,8 @@ impl WorkflowOrchestrator {
             );

             self.publish_workflow_completed(workflow_id).await?;
+            self.notify_workflow_completed(workflow_id, &template_name, duration)
+                .await;

             // Remove from DB — terminal state is cleaned up
             if let Err(e) = self.store.delete(workflow_id).await {
@@ -550,7 +569,7 @@ impl WorkflowOrchestrator {
         // `mark_current_task_failed` encapsulates the mutable stage borrow so
         // the DashMap entry can be re-accessed without nesting or borrow
         // conflicts.
-        let compensation_data: Option<(Vec<StageState>, Value, String)> = {
+        let compensation_data: Option<(Vec<StageState>, Value, String, String)> = {
             let mut instance = self
                 .active_workflows
                 .get_mut(&workflow_id)
@@ -576,6 +595,7 @@ impl WorkflowOrchestrator {
             let current_idx = instance.current_stage_idx;
             let executed_stages = instance.stages[..current_idx].to_vec();
             let context = instance.initial_context.clone();
+            let template_name = instance.template_name.clone();

             instance.fail(format!("Stage {} failed: {}", stage_name, msg.error));

@@ -589,12 +609,12 @@ impl WorkflowOrchestrator {
                     "Workflow failed"
                 );

-                Some((executed_stages, context, stage_name))
+                Some((executed_stages, context, stage_name, template_name))
                 }
             }
         }; // DashMap lock released here

-        if let Some((executed_stages, context, _stage_name)) = compensation_data {
+        if let Some((executed_stages, context, stage_name, template_name)) = compensation_data {
             // Saga compensation: dispatch rollback tasks in reverse order (best-effort)
             self.saga
                 .compensate(&workflow_id, &executed_stages, &context)
||||||
@ -604,6 +624,9 @@ impl WorkflowOrchestrator {
|
|||||||
if let Some(instance) = self.active_workflows.get(&workflow_id) {
|
if let Some(instance) = self.active_workflows.get(&workflow_id) {
|
||||||
self.store.save(instance.value()).await?;
|
self.store.save(instance.value()).await?;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
self.notify_workflow_failed(&workflow_id, &template_name, &stage_name, &msg.error)
|
||||||
|
.await;
|
||||||
}
|
}
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
@ -631,6 +654,15 @@ impl WorkflowOrchestrator {
|
|||||||
"Approval request published"
|
"Approval request published"
|
||||||
);
|
);
|
||||||
|
|
||||||
|
let template_name = self
|
||||||
|
.active_workflows
|
||||||
|
.get(workflow_id)
|
||||||
|
.map(|e| e.template_name.clone())
|
||||||
|
.unwrap_or_default();
|
||||||
|
|
||||||
|
self.notify_approval_required(workflow_id, &template_name, stage_name)
|
||||||
|
.await;
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -700,6 +732,113 @@ impl WorkflowOrchestrator {
|
|||||||
Ok((scheduler, shutdown_tx))
|
Ok((scheduler, shutdown_tx))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Sends a completion notification to every channel listed in
|
||||||
|
/// `workflow.notifications.on_completed`. Never propagates errors —
|
||||||
|
/// a channel timeout must not abort the workflow record.
|
||||||
|
async fn notify_workflow_completed(
|
||||||
|
&self,
|
||||||
|
workflow_id: &str,
|
||||||
|
template: &str,
|
||||||
|
duration_secs: f64,
|
||||||
|
) {
|
||||||
|
let Some(registry) = &self.channels else {
|
||||||
|
return;
|
||||||
|
};
|
||||||
|
|
||||||
|
let targets = self
|
||||||
|
.config
|
||||||
|
.get_workflow(template)
|
||||||
|
.map(|w| w.notifications.on_completed.clone())
|
||||||
|
.unwrap_or_default();
|
||||||
|
|
||||||
|
if targets.is_empty() {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
let msg = Message::success(
|
||||||
|
format!("Workflow completed: {}", template),
|
||||||
|
format!("All stages finished in {:.0}s", duration_secs),
|
||||||
|
)
|
||||||
|
.with_metadata("workflow_id", workflow_id)
|
||||||
|
.with_metadata("template", template)
|
||||||
|
.with_metadata("duration", format!("{:.0}s", duration_secs));
|
||||||
|
|
||||||
|
for target in &targets {
|
||||||
|
if let Err(e) = registry.send(target, msg.clone()).await {
|
||||||
|
warn!(channel = %target, error = %e, "Completion notification failed");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn notify_workflow_failed(
|
||||||
|
&self,
|
||||||
|
workflow_id: &str,
|
||||||
|
template: &str,
|
||||||
|
stage: &str,
|
||||||
|
error: &str,
|
||||||
|
) {
|
||||||
|
let Some(registry) = &self.channels else {
|
||||||
|
return;
|
||||||
|
};
|
||||||
|
|
||||||
|
let targets = self
|
||||||
|
.config
|
||||||
|
.get_workflow(template)
|
||||||
|
.map(|w| w.notifications.on_failed.clone())
|
||||||
|
.unwrap_or_default();
|
||||||
|
|
||||||
|
if targets.is_empty() {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
let msg = Message::error(
|
||||||
|
format!("Workflow failed: {}", template),
|
||||||
|
format!("Stage `{}` failed: {}", stage, error),
|
||||||
|
)
|
||||||
|
.with_metadata("workflow_id", workflow_id)
|
||||||
|
.with_metadata("template", template)
|
||||||
|
.with_metadata("failed_stage", stage);
|
||||||
|
|
||||||
|
for target in &targets {
|
||||||
|
if let Err(e) = registry.send(target, msg.clone()).await {
|
||||||
|
warn!(channel = %target, error = %e, "Failure notification failed");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn notify_approval_required(&self, workflow_id: &str, template: &str, stage_name: &str) {
|
||||||
|
let Some(registry) = &self.channels else {
|
||||||
|
return;
|
||||||
|
};
|
||||||
|
|
||||||
|
let targets = self
|
||||||
|
.config
|
||||||
|
.get_workflow(template)
|
||||||
|
.map(|w| w.notifications.on_approval_required.clone())
|
||||||
|
.unwrap_or_default();
|
||||||
|
|
||||||
|
if targets.is_empty() {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
let msg = Message::warning(
|
||||||
|
format!("Approval required: {}", stage_name),
|
||||||
|
format!(
|
||||||
|
"Workflow `{}` is waiting for human approval to proceed with stage `{}`.",
|
||||||
|
template, stage_name
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.with_metadata("workflow_id", workflow_id)
|
||||||
|
.with_metadata("template", template)
|
||||||
|
.with_metadata("stage", stage_name);
|
||||||
|
|
||||||
|
for target in &targets {
|
||||||
|
if let Err(e) = registry.send(target, msg.clone()).await {
|
||||||
|
warn!(channel = %target, error = %e, "Approval notification failed");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
pub fn list_templates(&self) -> Vec<String> {
|
pub fn list_templates(&self) -> Vec<String> {
|
||||||
self.config
|
self.config
|
||||||
.workflows
|
.workflows
|
||||||
|
|||||||
crates/vapora-workflow-engine/tests/notification_config.rs — new file (106 lines)
@@ -0,0 +1,106 @@
use vapora_channels::ChannelRegistry;
use vapora_workflow_engine::config::WorkflowsConfig;

/// Full TOML round-trip: channels section + per-workflow notification targets.
const TOML_WITH_CHANNELS: &str = r#"
[engine]
max_parallel_tasks = 4
workflow_timeout = 3600
approval_gates_enabled = false

[channels.team-slack]
type = "slack"
webhook_url = "https://hooks.slack.com/services/TEST/TEST/TEST"

[channels.ops-telegram]
type = "telegram"
bot_token = "123:TEST"
chat_id = "-100999"

[[workflows]]
name = "deploy-prod"
trigger = "manual"

[workflows.notifications]
on_completed = ["team-slack"]
on_failed = ["team-slack", "ops-telegram"]
on_approval_required = ["team-slack"]

[[workflows.stages]]
name = "build"
agents = ["developer"]

[[workflows.stages]]
name = "deploy"
agents = ["deployer"]
approval_required = true
"#;

/// Workflow with no [channels] section — should parse without error and leave
/// the channel map empty (registry skipped by orchestrator).
const TOML_WITHOUT_CHANNELS: &str = r#"
[engine]
max_parallel_tasks = 4
workflow_timeout = 3600
approval_gates_enabled = false

[[workflows]]
name = "ci-pipeline"
trigger = "manual"

[[workflows.stages]]
name = "test"
agents = ["developer"]
"#;

#[test]
fn channels_section_parses_into_config() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITH_CHANNELS).expect("must parse");

    assert_eq!(config.channels.len(), 2);
    assert!(config.channels.contains_key("team-slack"));
    assert!(config.channels.contains_key("ops-telegram"));
}

#[test]
fn notification_targets_parse_per_workflow() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITH_CHANNELS).expect("must parse");

    let wf = config.get_workflow("deploy-prod").expect("workflow exists");
    assert_eq!(wf.notifications.on_completed, ["team-slack"]);
    assert_eq!(wf.notifications.on_failed, ["team-slack", "ops-telegram"]);
    assert_eq!(wf.notifications.on_approval_required, ["team-slack"]);
}

#[test]
fn missing_channels_section_defaults_to_empty() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITHOUT_CHANNELS).expect("must parse");

    assert!(config.channels.is_empty());
}

#[test]
fn missing_notifications_block_defaults_to_empty_vecs() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITHOUT_CHANNELS).expect("must parse");

    let wf = config.get_workflow("ci-pipeline").expect("workflow exists");
    assert!(wf.notifications.on_completed.is_empty());
    assert!(wf.notifications.on_failed.is_empty());
    assert!(wf.notifications.on_approval_required.is_empty());
}

#[test]
fn channel_registry_builds_from_config() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITH_CHANNELS).expect("must parse");
    let registry = ChannelRegistry::from_map(config.channels).expect("registry must build");

    let mut names = registry.channel_names();
    names.sort_unstable();
    assert_eq!(names, ["ops-telegram", "team-slack"]);
}

#[test]
fn validation_passes_with_channels_and_notifications() {
    let config: WorkflowsConfig = toml::from_str(TOML_WITH_CHANNELS).expect("must parse");
    config.validate().expect("validation must pass");
}
docs/adrs/0035-notification-channels.md — new file (159 lines)
@@ -0,0 +1,159 @@
# ADR-0035: Webhook-Based Notification Channels — `vapora-channels` Crate

**Status**: Implemented
**Date**: 2026-02-26
**Deciders**: VAPORA Team
**Technical Story**: Workflow events (task completion, proposal approve/reject, schedule fires) had no outbound delivery path; operators had to poll the API to learn about state changes.

---

## Decision

Introduce a dedicated `vapora-channels` crate implementing a **trait-based webhook delivery layer** with:

1. `NotificationChannel` trait — a single `send(&Message) -> Result<()>` method; implementations cover the HTTP webhooks (Slack, Discord, Telegram) without vendor SDK dependencies.
2. `ChannelRegistry` — name-keyed routing hub; `from_config(HashMap<String, ChannelConfig>)` builds the registry from TOML config, resolving secrets at construction time.
3. `${VAR}` / `${VAR:-default}` interpolation **inside the library** — secret resolution is mandatory and cannot be bypassed by callers.
4. Fire-and-forget delivery at both layers: `AppState::notify` (backend) and `WorkflowOrchestrator::notify_*` (workflow engine) spawn background tasks; delivery failures are `warn!`-logged and never surface to API callers.
5. Per-event routing config (`NotificationConfig`) maps event names to channel-name lists, not hardcoded channel identifiers.
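The trait and registry shapes listed above can be sketched as follows. This is a minimal, synchronous stand-in (the crate's real `send` is async) with illustrative bodies and a hypothetical `PrintChannel` — not the actual `vapora-channels` code:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Minimal message shape (the real Message also carries a level and metadata).
#[derive(Clone)]
pub struct Message {
    pub title: String,
    pub body: String,
}

// Synchronous stand-in for the async `NotificationChannel::send`.
pub trait NotificationChannel: Send + Sync {
    fn send(&self, msg: &Message) -> Result<(), String>;
}

// Name-keyed routing hub: callers address channels by config name only.
pub struct ChannelRegistry {
    channels: HashMap<String, Arc<dyn NotificationChannel>>,
}

impl ChannelRegistry {
    pub fn send(&self, name: &str, msg: &Message) -> Result<(), String> {
        self.channels
            .get(name)
            .ok_or_else(|| format!("channel not found: {name}"))?
            .send(msg)
    }
}

// Illustrative implementation standing in for a real webhook POST.
struct PrintChannel;
impl NotificationChannel for PrintChannel {
    fn send(&self, msg: &Message) -> Result<(), String> {
        println!("{}: {}", msg.title, msg.body);
        Ok(())
    }
}

fn main() {
    let mut channels: HashMap<String, Arc<dyn NotificationChannel>> = HashMap::new();
    channels.insert("team-slack".into(), Arc::new(PrintChannel));
    let registry = ChannelRegistry { channels };

    let msg = Message { title: "Workflow completed".into(), body: "deploy-prod".into() };
    assert!(registry.send("team-slack", &msg).is_ok());
    assert!(registry.send("unknown", &msg).is_err());
}
```

Adding a new channel type means implementing the one trait method; routing and dispatch code is untouched, which is the point of the name-keyed registry.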

---

## Context

### Gaps Addressed

| Gap | Consequence |
|-----|-------------|
| No outbound event delivery | Operators must poll 40+ API endpooints to detect state changes |
| Secrets in TOML as plain strings | If resolution is left to callers, a `${SLACK_WEBHOOK_URL}` placeholder reaches the HTTP layer verbatim when the caller forgets to interpolate |
| Tight vendor coupling | Using `slack-rs` / `serenity` locks the feature to specific Slack/Discord API versions and transitive dependency trees |

### Why `NotificationChannel` Trait Over Vendor SDKs

Slack, Discord, and Telegram all accept a simple `POST` with a JSON body to a webhook URL — no OAuth, no persistent connection, no stateful session. A trait with one async method covers all three with less than 50 lines per implementation. Vendor SDKs add 200–500 kB of transitive dependencies and introduce breaking changes on provider API updates.

### Why Secret Resolution in the Library

Placing the responsibility on the caller creates a **TOFU gap**: the first time any caller forgets to call `resolve_secrets()` before constructing `ChannelRegistry`, a raw `${SLACK_WEBHOOK_URL}` string is sent to Slack's API as the URL. The request fails silently (Slack returns 404 or 400), the placeholder leaks in logs, and no compile-time or runtime warning is raised until a log is inspected.

Moving interpolation into `ChannelRegistry::from_config` makes it **structurally impossible to construct a registry with unresolved secrets**: `ChannelError::SecretNotFound(var_name)` is returned immediately if an env var is absent and no default is provided. There is no non-error path that bypasses resolution.

### Why Fire-and-Forget With `tokio::spawn`

Notification delivery is a best-effort side-effect, not part of the request/response contract. A Slack outage should not cause a `POST /api/v1/proposals/:id/approve` to return 500. Spawning an independent task decouples delivery latency from API latency; `warn!` logging provides observability without blocking the caller.
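The decoupling can be sketched with plain threads — a dependency-free stand-in for `tokio::spawn` (the `deliver` helper, channel names, and warning text below are illustrative, not the crate's API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for a webhook POST that may be slow or fail.
fn deliver(target: &str) -> Result<(), String> {
    thread::sleep(Duration::from_millis(50)); // simulated network latency
    if target == "down-channel" { Err("503".into()) } else { Ok(()) }
}

// Fire-and-forget: the "handler" returns immediately; delivery runs in the
// background and failures are only logged (here: sent to a log channel),
// never propagated to the caller.
fn notify(targets: Vec<String>, log: mpsc::Sender<String>) {
    thread::spawn(move || {
        for t in targets {
            if let Err(e) = deliver(&t) {
                let _ = log.send(format!("warn: delivery to {t} failed: {e}"));
            }
        }
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    notify(vec!["team-slack".into(), "down-channel".into()], tx);
    // The caller is free immediately; background warnings arrive later.
    let warning = rx.recv().unwrap();
    assert!(warning.contains("down-channel"));
    println!("{warning}");
}
```

The API handler's latency is the cost of the `spawn` call, not the sum of webhook round-trips — which is exactly the property the paragraph above argues for.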

---

## Implementation

### Crate Structure (`vapora-channels`)

```
vapora-channels/
├── src/
│   ├── lib.rs      — pub re-exports (ChannelRegistry, Message, NotificationChannel)
│   ├── channel.rs  — NotificationChannel trait
│   ├── config.rs   — ChannelsConfig, ChannelConfig, SlackConfig/DiscordConfig/TelegramConfig;
│   │                 resolve_secrets() chain + interpolate() with OnceLock<Regex>
│   ├── error.rs    — ChannelError: NotFound, ApiError, SecretNotFound, SerializationError
│   ├── message.rs  — Message { title, body, level: Info|Success|Warning|Error }
│   ├── registry.rs — ChannelRegistry: name → Arc<dyn NotificationChannel>
│   └── webhooks/
│       ├── slack.rs    — SlackChannel: POST IncomingWebhook JSON
│       ├── discord.rs  — DiscordChannel: POST Webhook embed JSON
│       └── telegram.rs — TelegramChannel: POST bot sendMessage JSON
```

### Secret Resolution

```
interpolate(s: &str) -> Result<String>:
    regex: \$\{([^}:]+)(?::-(.*?))?\}   (compiled once via OnceLock)
    fast-path: if !s.contains("${") { return Ok(s) }
    for each capture:
        var_name = capture[1]
        default  = capture[2] (optional)
        match env::var(var_name):
            Ok(v)  → replace placeholder with v
            Err(_) → if default.is_some(): replace with default
                     else: return Err(SecretNotFound(var_name))
```

`resolve_secrets()` is called unconditionally in `ChannelRegistry::from_config` — single mandatory call site, no consumer bypass.
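A minimal runnable sketch of the same algorithm, using a hand-rolled scanner instead of the `regex` + `OnceLock` pair so it stays dependency-free (error type simplified to `String`; the real code returns `ChannelError::SecretNotFound`):

```rust
use std::env;

// Resolve ${VAR} and ${VAR:-default} against the process environment.
fn interpolate(s: &str) -> Result<String, String> {
    // Fast path: no placeholder present.
    if !s.contains("${") {
        return Ok(s.to_string());
    }
    let mut out = String::new();
    let mut rest = s;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        let end = after
            .find('}')
            .ok_or_else(|| "unterminated ${...} placeholder".to_string())?;
        let inner = &after[..end];
        // Split "VAR:-default" into the var name and an optional default.
        let (var, default) = match inner.split_once(":-") {
            Some((v, d)) => (v, Some(d)),
            None => (inner, None),
        };
        match env::var(var) {
            Ok(v) => out.push_str(&v),
            Err(_) => match default {
                Some(d) => out.push_str(d),
                // No value, no default: hard error, mirroring SecretNotFound.
                None => return Err(format!("SecretNotFound: {var}")),
            },
        }
        rest = &after[end + 1..];
    }
    out.push_str(rest);
    Ok(out)
}

fn main() {
    assert_eq!(interpolate("no placeholders").unwrap(), "no placeholders");
    assert_eq!(interpolate("${VAPORA_DOC_UNSET:-fallback}").unwrap(), "fallback");
    assert!(interpolate("${VAPORA_DOC_UNSET}").is_err());
    println!("ok");
}
```

Note one edge the sketch shares with the non-greedy regex: a default value containing `}` is cut at the first closing brace.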

### Integration Points

#### `vapora-workflow-engine`

`WorkflowConfig.notifications: WorkflowNotifications` maps four events to channel-name lists:

```toml
[workflows.myflow.notifications]
on_stage_complete = ["team-slack"]
on_stage_failed = ["team-slack", "ops-discord"]
on_completed = ["team-slack"]
on_cancelled = ["ops-discord"]
```

`WorkflowOrchestrator` holds `Option<Arc<ChannelRegistry>>` and calls `notify_stage_complete`, `notify_stage_failed`, `notify_completed`, `notify_cancelled` — each spawns `dispatch_notifications`.

#### `vapora-backend`

`Config.channels: HashMap<String, ChannelConfig>` and `Config.notifications: NotificationConfig`:

```toml
[channels.team-slack]
type = "slack"
webhook_url = "${SLACK_WEBHOOK_URL}"

[notifications]
on_task_done = ["team-slack"]
on_proposal_approved = ["team-slack", "ops-discord"]
on_proposal_rejected = ["ops-discord"]
```

`AppState` gains `channel_registry: Option<Arc<ChannelRegistry>>` and `notification_config: Arc<NotificationConfig>`. Hooks in three existing handlers:

- `update_task_status` — fires `Message::success` on `TaskStatus::Done`
- `approve_proposal` — fires `Message::success`
- `reject_proposal` — fires `Message::warning`

#### New REST Endpoints

| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/v1/channels` | List registered channel names |
| `POST` | `/api/v1/channels/:name/test` | Send connectivity test; 200 OK / 404 / 502 |

### Testability

`dispatch_notifications` is extracted as a `pub(crate) async fn` taking `Option<Arc<ChannelRegistry>>` directly, making it testable without a DB or a fully-constructed `AppState`. Five inline tests in `state.rs` use `RecordingChannel` (captures messages) and `FailingChannel` (returns a 503 error) to verify:

1. No-op when registry is `None`
2. Single-channel delivery
3. Multi-channel broadcast
4. Resilience: delivery continues after one channel fails
5. Warn-log on unknown channel name; other channels still receive
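The two test doubles can be sketched as follows — a hedged, synchronous stand-in for the real async trait (names taken from the ADR; bodies and the `Message` shape are illustrative):

```rust
use std::sync::{Arc, Mutex};

struct Message {
    title: String,
}

// Synchronous stand-in for the async NotificationChannel trait.
trait NotificationChannel {
    fn send(&self, msg: &Message) -> Result<(), String>;
}

// Captures every delivered message for later assertions.
#[derive(Default)]
struct RecordingChannel {
    sent: Arc<Mutex<Vec<String>>>,
}

impl NotificationChannel for RecordingChannel {
    fn send(&self, msg: &Message) -> Result<(), String> {
        self.sent.lock().unwrap().push(msg.title.clone());
        Ok(())
    }
}

// Always fails, simulating a 503 from the remote platform.
struct FailingChannel;
impl NotificationChannel for FailingChannel {
    fn send(&self, _msg: &Message) -> Result<(), String> {
        Err("503".into())
    }
}

fn main() {
    let rec = RecordingChannel::default();
    let failing = FailingChannel;
    let channels: Vec<&dyn NotificationChannel> = vec![&failing, &rec];
    let msg = Message { title: "Workflow completed".into() };

    // Resilience check: a failing channel must not stop delivery to the others.
    for ch in channels {
        let _ = ch.send(&msg);
    }
    assert_eq!(rec.sent.lock().unwrap().clone(), vec!["Workflow completed".to_string()]);
    println!("ok");
}
```

Ignoring each `send` result inside the loop is the same "continue past failures" behaviour the resilience test exercises.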

---

## Consequences

### Positive

- Operators get real-time Slack/Discord/Telegram alerts on task completion, proposal decisions, and workflow lifecycle events.
- Adding a new channel type requires implementing one trait method and one TOML variant — no changes to routing or dispatch code.
- Secret resolution failures surface immediately at startup (if `ChannelRegistry::from_config` is called at boot), not silently at first delivery.
- Zero additional infrastructure: webhooks are outbound-only HTTP POSTs.

### Negative / Trade-offs

- Delivery is best-effort (fire-and-forget). A channel that is consistently down produces `warn!` logs but no alert escalation; consumers needing guaranteed delivery must implement their own retry loop or use a message queue.
- `${VAR}` interpolation tests use `unsafe { std::env::set_var }` (required since the env-mutation functions gained an unsafety annotation). Tests set/unset env vars, so multi-threaded test parallelism can produce flaky results unless they are isolated with `#[serial_test::serial]`.
- No per-channel rate limiting: a workflow that fires 1,000 stage-complete events will produce 1,000 Slack messages. Operators must configure `notifications` lists deliberately.

### Supersedes / Specializes

- Builds on the `SecretumVault` pattern (ADR-0011) philosophy of never storing secrets as plain strings; specializes it to config-file webhook tokens.
- Parallel to `vapora-a2a-client`'s retry pattern (ADR-0030) — both handle external HTTP delivery, but channels are fire-and-forget while A2A requires a confirmed response.
@@ -2,8 +2,8 @@

 Documentation of the key architectural decisions of the VAPORA project.

-**Status**: Complete (33 ADRs documented)
-**Last Updated**: 2026-02-21
+**Status**: Complete (35 ADRs documented)
+**Last Updated**: 2026-02-26
 **Format**: Custom VAPORA (Decision, Rationale, Alternatives, Trade-offs, Implementation, Verification, Consequences)

 ---

@@ -81,6 +81,8 @@ Unique decisions that differentiate VAPORA from other orchestration platforms
 | [028](./0028-workflow-orchestrator.md) | Workflow Orchestrator for Multi-Agent Pipelines | Short-lived agent contexts + artifact passing to reduce cache tokens 95% | ✅ Accepted |
 | [029](./0029-rlm-recursive-language-models.md) | Recursive Language Models (RLM) | Custom Rust engine: BM25 + semantic hybrid search + distributed LLM dispatch + WASM/Docker sandbox | ✅ Accepted |
 | [033](./0033-stratum-orchestrator-workflow-hardening.md) | Workflow Engine Hardening — Persistence · Saga · Cedar | SurrealDB persistence + Saga best-effort rollback + Cedar per-stage auth; stratum patterns implemented natively (no path dep) | ✅ Implemented |
+| [034](./0034-autonomous-scheduling.md) | Autonomous Scheduling — Timezone Support and Distributed Fire-Lock | `chrono-tz` IANA-aware cron evaluation + SurrealDB conditional UPDATE fire-lock; no external lock service required | ✅ Implemented |
+| [035](./0035-notification-channels.md) | Webhook-Based Notification Channels — `vapora-channels` Crate | Trait-based webhook delivery (Slack/Discord/Telegram) + `${VAR}` secret resolution built into `ChannelRegistry::from_config`; fire-and-forget via `tokio::spawn` | ✅ Implemented |

 ---

@@ -141,6 +143,8 @@ Development and architecture patterns used throughout the codebase.
 - **Real-Time WebSocket Updates**: Broadcast channels for efficient multi-client workflow progress updates
 - **Workflow Orchestrator**: Short-lived agent contexts + artifact passing reduce cache token costs ~95% vs monolithic sessions
 - **Recursive Language Models (RLM)**: Hybrid BM25+semantic search + distributed LLM dispatch + WASM/Docker sandbox enables reasoning over 100k+ token documents
+- **Autonomous Scheduling**: `chrono-tz` IANA-aware cron evaluation + SurrealDB CAS fire-lock eliminates double-fires in multi-instance deployments without external lock infrastructure
+- **Notification Channels**: Trait-based webhook delivery with `${VAR}` secret resolution built into `ChannelRegistry` construction — operators get real-time Slack/Discord/Telegram alerts with zero new infrastructure

 ### 🔧 Development Patterns

@@ -5,3 +5,5 @@ VAPORA capabilities and overview documentation.

 ## Contents

 - **[Features Overview](overview.md)** — Complete feature list and descriptions including learning-based agent selection, cost optimization, and swarm coordination
+- **[Workflow Orchestrator](workflow-orchestrator.md)** — Multi-stage pipelines, approval gates, artifacts, autonomous scheduling, and distributed fire-lock
+- **[Notification Channels](notification-channels.md)** — Webhook delivery to Slack, Discord, and Telegram with built-in secret resolution

docs/features/notification-channels.md — new file (236 lines)
@@ -0,0 +1,236 @@
# Notification Channels

Real-time outbound alerts to Slack, Discord, and Telegram via webhook delivery.

## Overview

`vapora-channels` provides a trait-based webhook notification layer. When VAPORA events occur (task completion, proposal decisions, workflow lifecycle), configured channels receive a message immediately — no polling required.

**Key properties**:

- No vendor SDKs — plain HTTP POST to webhook URLs
- Secret tokens resolved from environment variables at startup; a raw `${VAR}` placeholder never reaches the HTTP layer
- Fire-and-forget delivery: channel failures never surface as API errors

## Configuration

All channel configuration lives in `vapora.toml`.

### Declaring channels

```toml
[channels.team-slack]
type = "slack"
webhook_url = "${SLACK_WEBHOOK_URL}"

[channels.ops-discord]
type = "discord"
webhook_url = "${DISCORD_WEBHOOK_URL}"

[channels.alerts-telegram]
type = "telegram"
bot_token = "${TELEGRAM_BOT_TOKEN}"
chat_id = "${TELEGRAM_CHAT_ID}"
```

Channel names (`team-slack`, `ops-discord`, `alerts-telegram`) are arbitrary identifiers used in event routing below.

### Routing events to channels

```toml
[notifications]
on_task_done = ["team-slack"]
on_proposal_approved = ["team-slack", "ops-discord"]
on_proposal_rejected = ["ops-discord"]
```

Each key is an event name; the value is a list of channel names declared in `[channels.*]`. An empty list or absent key means no notification for that event.
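The lookup rule — event name to channel list, absent key meaning no delivery — can be sketched as follows (the `targets_for` helper is hypothetical, shown only to illustrate the routing semantics):

```rust
use std::collections::HashMap;

// Hypothetical shape of the parsed [notifications] table:
// event name → list of channel names.
fn targets_for<'a>(cfg: &'a HashMap<String, Vec<String>>, event: &str) -> &'a [String] {
    cfg.get(event).map(Vec::as_slice).unwrap_or(&[])
}

fn main() {
    let mut cfg = HashMap::new();
    cfg.insert("on_task_done".to_string(), vec!["team-slack".to_string()]);

    // Declared event → its channel list.
    assert_eq!(targets_for(&cfg, "on_task_done").to_vec(), vec!["team-slack".to_string()]);
    // Absent key → empty list → no notification is sent.
    assert!(targets_for(&cfg, "on_proposal_rejected").is_empty());
    println!("ok");
}
```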
|
|
||||||
|
### Workflow lifecycle notifications
|
||||||
|
|
||||||
|
Per-workflow notification targets are set in the workflow template:
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[[workflows]]
|
||||||
|
name = "nightly_analysis"
|
||||||
|
trigger = "schedule"
|
||||||
|
|
||||||
|
[workflows.nightly_analysis.notifications]
|
||||||
|
on_stage_complete = ["team-slack"]
|
||||||
|
on_stage_failed = ["team-slack", "ops-discord"]
|
||||||
|
on_completed = ["team-slack"]
|
||||||
|
on_cancelled = ["ops-discord"]
|
||||||
|
```
|
||||||
|
|
||||||
|
## Secret Resolution
|
||||||
|
|
||||||
|
Token values in `[channels.*]` blocks are interpolated from the environment before any network call is made. Two syntaxes are supported:
|
||||||
|
|
||||||
|
| Syntax | Behaviour |
|
||||||
|
|--------|-----------|
|
||||||
|
| `"${VAR}"` | Replaced with `$VAR`; startup fails if the variable is unset |
|
||||||
|
| `"${VAR:-default}"` | Replaced with `$VAR` if set, otherwise `default` |
|
||||||
|
|
||||||
|
Resolution happens inside `ChannelRegistry::from_config` — the single mandatory call site. There is no way to construct a registry with an unresolved placeholder.
|
||||||
|
|
||||||
|
**Example**:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T.../..."
|
||||||
|
export DISCORD_WEBHOOK_URL="https://discord.com/api/webhooks/..."
|
||||||
|
export TELEGRAM_BOT_TOKEN="123456:ABC..."
|
||||||
|
export TELEGRAM_CHAT_ID="-1001234567890"
|
||||||
|
```
|
||||||
|
|
||||||
|
If a required variable is absent and no default is provided, VAPORA exits at startup with:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Error: Secret reference '${SLACK_WEBHOOK_URL}' not resolved: env var not set and no default provided
|
||||||
|
```
|
||||||
|
|
||||||
|
## Supported Channel Types
|
||||||
|
|
||||||
|
### Slack
|
||||||
|
|
||||||
|
Uses the [Incoming Webhooks](https://api.slack.com/messaging/webhooks) API. The webhook URL is obtained from Slack's app configuration.
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[channels.my-slack]
|
||||||
|
type = "slack"
|
||||||
|
webhook_url = "${SLACK_WEBHOOK_URL}"
|
||||||
|
```
|
||||||
|
|
||||||
|
Payload format: `{ "text": "**Title**\nBody" }`. No SDK dependency.
|
||||||
|
|
||||||
|
### Discord
|
||||||
|
|
||||||
|
Uses the [Discord Webhook](https://discord.com/developers/docs/resources/webhook) endpoint. The webhook URL includes the token — obtain it from the channel's Integrations settings.
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[channels.my-discord]
|
||||||
|
type = "discord"
|
||||||
|
webhook_url = "${DISCORD_WEBHOOK_URL}"
|
||||||
|
```
|
||||||
|
|
||||||
|
Payload format: `{ "embeds": [{ "title": "...", "description": "...", "color": <level-color> }] }`.
|
||||||
|
|
||||||
|
### Telegram
|
||||||
|
|
||||||
|
Uses the [Bot API](https://core.telegram.org/bots/api#sendmessage) `sendMessage` endpoint. Requires a bot token from `@BotFather` and the numeric chat ID of the target group or channel.
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[channels.my-telegram]
|
||||||
|
type = "telegram"
|
||||||
|
bot_token = "${TELEGRAM_BOT_TOKEN}"
|
||||||
|
chat_id = "${TELEGRAM_CHAT_ID}"
|
||||||
|
```
|
||||||
|
|
||||||
|
Payload format: `{ "chat_id": "...", "text": "**Title**\nBody", "parse_mode": "Markdown" }`.
## Message Levels

Every notification carries a level that controls colour and emoji in the rendered message:

| Level | Constructor | Use case |
|-------|-------------|----------|
| `Info` | `Message::info(title, body)` | General status updates |
| `Success` | `Message::success(title, body)` | Task done, workflow completed |
| `Warning` | `Message::warning(title, body)` | Proposal rejected, stage failed |
| `Error` | `Message::error(title, body)` | Unrecoverable failure |
## REST API

Two endpoints are available under `/api/v1/channels`:

### List channels

```http
GET /api/v1/channels
```

Returns the names of all registered channels, sorted alphabetically, or an empty list when no channels are configured.

**Response**:

```json
{
  "channels": ["ops-discord", "team-slack"]
}
```
### Test a channel

```http
POST /api/v1/channels/:name/test
```

Sends a connectivity test message to the named channel and returns synchronously.

| Status | Meaning |
|--------|---------|
| `200 OK` | Message delivered successfully |
| `404 Not Found` | Channel name unknown or no channels configured |
| `502 Bad Gateway` | Delivery attempt failed at the remote platform |

**Example**:

```bash
curl -X POST http://localhost:8001/api/v1/channels/team-slack/test
```

Expected Slack message: `Test notification — Connectivity test from VAPORA backend for channel 'team-slack'`
## Delivery Semantics

Delivery is **fire-and-forget**: `AppState::notify` spawns a background Tokio task and returns immediately. The API response does not wait for webhook delivery to complete.

Behaviour on failure:

- Unknown channel name: `warn!` log, delivery to other targets continues
- HTTP error from the remote platform: `warn!` log, delivery to other targets continues
- No channels configured (`channel_registry = None`): silent no-op

There is no built-in retry. A channel that is consistently unreachable produces `warn!` log lines but no escalation. Use the `/test` endpoint to confirm connectivity after configuration changes.
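The real dispatcher uses `tokio::spawn` in Rust; a minimal Python `asyncio` analogue of the same log-and-continue, fire-and-forget pattern (all names here are illustrative, not the crate's API) looks like this:

```python
import asyncio
import logging

delivered: list[str] = []  # demo-only record of successful deliveries

async def deliver(channel: str, message: str) -> None:
    # Stand-in for the real webhook POST; raises on an HTTP error.
    if channel == "bad-channel":
        raise RuntimeError("502 Bad Gateway from platform")
    delivered.append(channel)

async def notify(channels: list[str], message: str) -> None:
    """Fire-and-forget dispatch: schedule one background task per channel
    and return without awaiting them. Failures are logged, never raised."""
    async def guarded(ch: str) -> None:
        try:
            await deliver(ch, message)
        except Exception as exc:
            logging.warning("channel %r delivery failed: %s", ch, exc)

    for ch in channels:
        asyncio.create_task(guarded(ch))

async def main() -> None:
    await notify(["team-slack", "bad-channel"], "Stage complete")
    await asyncio.sleep(0.05)  # demo only: give background tasks time to run

asyncio.run(main())
```

Note that `notify` returns before any HTTP request completes, which is why the API response never blocks on webhook delivery.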
## Events Reference

| Event key | Trigger | Default level |
|-----------|---------|---------------|
| `on_task_done` | Task moved to `Done` status | `Success` |
| `on_proposal_approved` | Proposal approved via API | `Success` |
| `on_proposal_rejected` | Proposal rejected via API | `Warning` |
| `on_stage_complete` | Workflow stage finished | `Info` |
| `on_stage_failed` | Workflow stage failed | `Warning` |
| `on_completed` | Workflow reached terminal `Completed` state | `Success` |
| `on_cancelled` | Workflow cancelled | `Warning` |
## Troubleshooting

### Channel not receiving messages

1. Verify the channel name in `[notifications]` matches the name in `[channels.*]` exactly (case-sensitive).
2. Confirm the env variable is set: `echo $SLACK_WEBHOOK_URL`.
3. Send a test message: `POST /api/v1/channels/<name>/test`.
4. Check backend logs for `warn` entries with `channel = "<name>"`.
### Startup fails with `SecretNotFound`

The environment variable referenced in `webhook_url`, `bot_token`, or `chat_id` is not set. Either export the variable or add a default value:

```toml
webhook_url = "${SLACK_WEBHOOK_URL:-https://hooks.slack.com/...}"
```
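The `${VAR}` / `${VAR:-default}` resolution performed inside `ChannelRegistry::from_config` can be approximated as follows (a Python sketch of the semantics, not the crate's actual implementation):

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}; group 2 is None when no default given.
_SECRET_RE = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

def resolve_secrets(value: str) -> str:
    """Expand ${VAR} and ${VAR:-default} references against the environment;
    raise if a variable is unset and no default was provided."""
    def expand(m: re.Match) -> str:
        name, default = m.group(1), m.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default
        raise KeyError(f"Secret reference '${{{name}}}' not resolved: "
                       "env var not set and no default provided")
    return _SECRET_RE.sub(expand, value)

os.environ["SLACK_WEBHOOK_URL"] = "https://hooks.slack.com/services/T000/B000/XXX"
print(resolve_secrets("${SLACK_WEBHOOK_URL}"))
print(resolve_secrets("${MISSING_VAR:-fallback}"))  # -> fallback
```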
### Discord returns 400

The webhook URL must end with `/slack` for Slack-compatible mode, or be the raw Discord webhook URL. Ensure the URL copied from Discord's channel settings is used without modification.

### Telegram chat_id not found

The bot must be a member of the target group or channel. For groups, prefix the numeric ID with `-` (e.g. `-1001234567890`). Use `@userinfobot` in Telegram to retrieve your chat ID.
## Related Documentation

- [Workflow Orchestrator](./workflow-orchestrator.md) — workflow lifecycle events and notification config
- [ADR-0035: Notification Channels](../adrs/0035-notification-channels.md) — design rationale
- [ADR-0011: SecretumVault](../adrs/0011-secretumvault.md) — secret management philosophy