ontology (NCL):
- core.ncl: 5 new Practice nodes (notification-channels,
vapora-capabilities, agent-hot-reload-stable-identity,
merkle-audit-trail, notification-channels) + 5 new edges;
knowledge-graph-execution-history updated with HNSW+BM25+RRF
- state.ncl: production-readiness blocker/catalyst updated (hot-reload
complete, BudgetManager/LLMRouter still require restart);
ontoref-integration catalyst updated (vapora-ontology/reflection
crates, api-catalog.json, nickel contracts)
ADRs (NCL):
- adr-013: KG hybrid search — HNSW+BM25+RRF, rejected in-process scan
- adr-014: capability packages — AgentDefinition→vapora-shared,
DashMap shard-before-await constraint
- adr-015: Merkle audit trail — SHA-256 hash chain, rejected HMAC
- adr-016: agent hot-reload — stable_id=role, learning_profiles survive
drain, BudgetManager excluded from reload scope
landing page:
- 2 new feature boxes: VCS-Agnostic Worktree (jj/git), Ontology Protocol
- KG box: 20→28 tests, HNSW+BM25+RRF description
- Agents box: 71→82 tests, hot-reload + stable_id
- tech stack: Rust 21→23 crates, added jj, Radicle, ontoref badges
- status badge: 620→691 tests
let d = import "adr-defaults.ncl" in
d.make_adr {
  id = "adr-012",
  title = "SSRF Protection and Prompt Injection Scanning at API Boundary",
  status = 'Accepted,
  date = "2026-02-26",

  context = "Competitive analysis against OpenFang revealed that vapora had no defenses against SSRF via misconfigured webhook URLs and no prompt injection scanning before user input reached LLM providers. The original SSRF check in main.rs logged a warning but did NOT remove the unsafe channel from the registry — channels with SSRF-risky URLs were fully operational despite the log claiming 'channel will be disabled'. Both attack surfaces were confirmed exploitable before this ADR.",

  decision = "Introduce a security module (vapora-backend/src/security/) with two sub-modules: (1) ssrf.rs — validates outbound URLs against a deny list of private/reserved/cloud-metadata address ranges before any HTTP request is dispatched; (2) prompt_injection.rs — a pattern-based scanner that rejects known injection payloads at the API boundary before input reaches an LLM provider. Four integration points: channel webhook URL filtering at startup, RLM endpoints (load/query/analyze), and task creation/update (title and description fields). Security rejections return 400 Bad Request, not 500.",
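  # Illustrative sketch only: the Rust signatures below are assumptions, not
  # quotes from the codebase (validate_url and scan_for_injection do appear in
  # the grep checks under constraints). The decision implies two entry points
  # shaped roughly like:
  #   ssrf.rs:              pub fn validate_url(url: &url::Url) -> Result<(), SsrfError>
  #   prompt_injection.rs:  pub fn scan_for_injection(text: &str) -> Result<(), InjectionError>
  # where validate_url resolves the host and compares it against the deny list,
  # and scan_for_injection matches the text against known payload patterns.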
  rationale = [
    {
      claim = "Channel SSRF must be enforced by dropping the channel, not logging a warning",
      detail = "The original warn!() + register pattern was a documentation bug masquerading as security. A warning that allows the operation to proceed is not a security control. Dropping unsafe channels before ChannelRegistry::from_map is the correct enforcement model.",
    },
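    # A minimal sketch of this enforcement model in main.rs (field and variable
    # names are hypothetical; only ssrf.rs, validate_url, and
    # ChannelRegistry::from_map come from this ADR):
    #   let safe_channels = channels
    #       .into_iter()
    #       .filter(|(_, cfg)| ssrf::validate_url(&cfg.webhook_url).is_ok())
    #       .collect::<HashMap<_, _>>();
    #   let registry = ChannelRegistry::from_map(safe_channels);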
    {
      claim = "Prompt injection must be scanned at the API boundary, not inside the LLM router",
      detail = "Scanning at the LLM router is too late — the payload has already been accepted, persisted to the task table, and is in motion. API boundary scanning rejects the request before any persistence occurs, which is the correct defense point.",
    },
    {
      claim = "400 Bad Request for security rejections prevents information disclosure",
      detail = "A 500 Internal Server Error on prompt injection detection reveals that injection scanning is present and active, giving attackers feedback to tune their payloads. 400 Bad Request is ambiguous — it could be any validation failure.",
    },
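    # Sketch of the intended status mapping, assuming an axum-style error
    # conversion (the match arm is illustrative; only VaporaError::InvalidInput
    # is named by this ADR):
    #   VaporaError::InvalidInput(msg) => (StatusCode::BAD_REQUEST, msg).into_response(),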
  ],

  consequences = {
    positive = [
      "Channel webhook URLs from compromised config are rejected at startup, not silently registered",
      "User-supplied text in RLM and task endpoints is scanned before reaching any LLM provider",
      "Security rejections are observable via 400 response codes and security audit log entries",
      "ssrf.rs and prompt_injection.rs are independently testable without spinning up the full Axum server",
    ],
    negative = [
      "Pattern-based prompt injection scanning has false positive and false negative rates — adversarial inputs may bypass regex patterns",
"SSRF deny list must be maintained as cloud providers add new metadata endpoints (e.g. GCP 169.254.169.254, AWS 169.254.169.254, Azure 169.254.169.254)",
|
|
],
|
|
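    # For reference, the private/reserved ranges such a deny list conventionally
    # covers (standard IANA assignments, listed as an aid rather than quoted
    # from ssrf.rs):
    #   10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16  (RFC 1918 private)
    #   127.0.0.0/8                                (loopback)
    #   169.254.0.0/16                             (link-local, incl. cloud IMDS)
    #   ::1/128, fc00::/7, fe80::/10               (IPv6 loopback, ULA, link-local)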
  },

  alternatives_considered = [
    {
      option = "WAF (Web Application Firewall) at the infrastructure layer",
      why_rejected = "An external WAF cannot inspect the semantic content of LLM prompts or validate webhook URLs against cloud metadata ranges. Application-level scanning is required for these semantically rich validations.",
    },
    {
      option = "Sandboxed agent execution",
      why_rejected = "Sandboxing prevents prompt injection effects from escaping the execution environment but does not prevent the injection from reaching the LLM provider. The attack surface (LLM prompt poisoning) requires input scanning, not output sandboxing.",
    },
  ],

  constraints = [
    {
      id = "ssrf-validator-before-channel-registry",
      claim = "Channel webhook URLs must be validated via ssrf.rs before ChannelRegistry::from_map is called — unsafe channels must be dropped, not registered with a warning",
      scope = "vapora-backend/src/main.rs",
      severity = 'Hard,
      check = { tag = 'Grep, pattern = "ssrf\\|SsrfValidator\\|validate_url", paths = ["crates/vapora-backend/src/main.rs"], must_be_empty = false },
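      # Assuming a grep-style checker, the check above is equivalent to
      # requiring a non-empty result from GNU grep (BRE alternation, hence the
      # escaped pipes in the pattern):
      #   grep "ssrf\|SsrfValidator\|validate_url" crates/vapora-backend/src/main.rs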
rationale = "The original warn-and-register pattern was the vulnerability. This constraint ensures the fix stays in place.",
|
|
},
|
|
{
|
|
id = "prompt-injection-scan-at-rlm-boundary",
|
|
claim = "RLM endpoints must scan user-supplied content and query via prompt_injection.rs before indexing or dispatching to LLM",
|
|
scope = "vapora-backend/src/api/rlm.rs",
|
|
severity = 'Hard,
|
|
check = { tag = 'Grep, pattern = "prompt_injection\\|scan_for_injection\\|PromptInjection", paths = ["crates/vapora-backend/src/api/rlm.rs"], must_be_empty = false },
|
|
rationale = "RLM is the primary injection surface — it accepts arbitrary text content and forwards it to LLM providers.",
|
|
},
    {
      id = "security-rejections-return-400",
      claim = "All security validation failures must return 400 Bad Request via VaporaError::InvalidInput — not 500",
      scope = "vapora-backend/src/security/",
      severity = 'Hard,
      check = { tag = 'Grep, pattern = "InvalidInput", paths = ["crates/vapora-backend/src/security/"], must_be_empty = false },
      rationale = "500 responses on security rejections reveal the presence and behavior of the security scanner to attackers.",
    },
  ],

  related_adrs = ["adr-003"],

  ontology_check = {
    decision_string = "SSRF protection in ssrf.rs + prompt injection scanning in prompt_injection.rs; channels with unsafe URLs dropped at startup; RLM and task endpoints scanned at API boundary; 400 on rejection",
    invariants_at_risk = [],
    verdict = 'Safe,
  },
}