
feat: API catalog surface, protocol v2 tooling, MCP expansion, on+re update

  ## Summary

  Session 2026-03-23. Closes the loop between handler code and discoverability
  across all three surfaces (browser, CLI, MCP agent) via compile-time inventory
  registration. Adds protocol v2 update tooling, extends MCP from 21 to 29 tools,
  and brings the self-description up to date.

  ## API Catalog Surface (#[onto_api] proc-macro)

  - crates/ontoref-derive: new proc-macro crate; `#[onto_api(method, path,
    description, auth, actors, params, tags)]` emits `inventory::submit!(ApiRouteEntry{...})`
    at link time
  - crates/ontoref-daemon/src/api_catalog.rs: `catalog()` — pure fn over
    `inventory::iter::<ApiRouteEntry>()`, zero runtime allocation
  - GET /api/catalog: returns full annotated HTTP surface as JSON
  - templates/pages/api_catalog.html: new page with client-side filtering by
    method, auth, path/description; detail panel per route (params table,
    feature flag); linked from dashboard card and nav
  - UI nav: "API" link (</> icon) added to mobile dropdown and desktop bar
  - inventory = "0.3" added to workspace.dependencies (MIT; sole transitive dep: rustversion)
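
  The registration pattern can be sketched std-only. The real crate collects entries at link time via `inventory::submit!`/`inventory::collect!`; here a plain static slice stands in, and the struct and field names are simplified assumptions rather than the actual `ApiRouteEntry` definition:

```rust
// Illustrative stand-in for the link-time catalog: a static registry of
// route entries and a pure catalog() over it. Field names are assumptions.
#[derive(Debug)]
struct ApiRouteEntry {
    method: &'static str,
    path: &'static str,
    auth: &'static str,
    description: &'static str,
}

// In the real code each #[onto_api] handler emits one of these via
// inventory::submit!; a static slice mimics the aggregated result.
static ROUTES: &[ApiRouteEntry] = &[
    ApiRouteEntry { method: "GET", path: "/projects/{slug}/ontology/versions",
                    auth: "viewer", description: "per-file reload counters" },
    ApiRouteEntry { method: "GET", path: "/api/catalog",
                    auth: "none", description: "full annotated surface" },
];

/// Mirrors catalog(): sorted by path then method; allocates only the Vec
/// of references, never the entries themselves.
fn catalog() -> Vec<&'static ApiRouteEntry> {
    let mut v: Vec<&'static ApiRouteEntry> = ROUTES.iter().collect();
    v.sort_by_key(|r| (r.path, r.method));
    v
}

fn main() {
    for r in catalog() {
        println!("{} {} [auth: {}] {}", r.method, r.path, r.auth, r.description);
    }
}
```

  The same shape is what `GET /api/catalog` serializes to JSON.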

  ## Protocol Update Mode

  - reflection/modes/update_ontoref.ncl: 10-step DAG (5 detect parallel, 2 update
    idempotent, 2 validate, 1 report) — brings any project from protocol v1 to v2
    by adding manifest.ncl and connections.ncl if absent, scanning ADRs for
    deprecated check_hint, validating with nickel export
  - reflection/templates/update-ontology-prompt.md: 8-phase reusable prompt for
    agent-driven ontology enrichment (infrastructure → audit → core.ncl →
    state.ncl → manifest.ncl → connections.ncl → ADR migration → validation)
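
  The idempotent update-step semantics (shell's `test -f || ...` inside the NCL DAG) can be sketched as follows; function and file names here are illustrative, not the mode's actual implementation:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Write the stub only when the file is absent, so re-running update_ontoref
// is a no-op on already-updated projects. Returns whether a write happened.
fn add_if_absent(path: &Path, stub: &str) -> io::Result<bool> {
    if path.exists() {
        return Ok(false); // already at protocol v2 for this file: no-op
    }
    fs::write(path, stub)?;
    Ok(true)
}

fn main() -> io::Result<()> {
    let target = std::env::temp_dir().join("manifest.ncl");
    let first = add_if_absent(&target, "# manifest stub\n")?;
    let second = add_if_absent(&target, "# manifest stub\n")?;
    println!("first run wrote: {first}, second run wrote: {second}");
    fs::remove_file(&target)?;
    Ok(())
}
```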

  ## CLI — describe group extensions

  - reflection/bin/ontoref.nu: `describe diff [--fmt] [--file]` and
    `describe api [--actor] [--tag] [--auth] [--fmt]` registered as canonical
    subcommands with log-action; aliases `df` and `da` added; QUICK REFERENCE
    and ALIASES sections updated

  ## MCP — two new tools (21 → 29 total)

  - ontoref_api_catalog: filters catalog() output by actor/tag/auth; returns
    { routes, total } — no HTTP roundtrip, calls inventory directly
  - ontoref_file_versions: reads ProjectContext.file_versions DashMap per slug;
    returns BTreeMap<filename, u64> reload counters
  - insert_mcp_ctx: audited and updated from 15 to 28 entries in 6 groups
  - HelpTool JSON: 8 new entries (validate_adrs, validate, impact, guides,
    bookmark_list, bookmark_add, api_catalog, file_versions)
  - ServerHandler::get_info instructions updated to mention new tools
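
  The actor/tag/auth filtering that `ontoref_api_catalog` applies before returning `{ routes, total }` can be sketched like this; the `Route` fields and the option-parameter shape are assumptions, not the tool's real signatures:

```rust
// Each filter is optional: None means "match everything" for that facet.
struct Route {
    path: &'static str,
    auth: &'static str,
    actors: &'static [&'static str],
    tags: &'static [&'static str],
}

fn filter<'a>(
    routes: &'a [Route],
    actor: Option<&str>,
    tag: Option<&str>,
    auth: Option<&str>,
) -> (Vec<&'a Route>, usize) {
    let hits: Vec<&'a Route> = routes
        .iter()
        .filter(|r| actor.map_or(true, |a| r.actors.iter().any(|s| *s == a)))
        .filter(|r| tag.map_or(true, |t| r.tags.iter().any(|s| *s == t)))
        .filter(|r| auth.map_or(true, |x| r.auth == x))
        .collect();
    let total = hits.len();
    (hits, total)
}

fn main() {
    let routes = [
        Route { path: "/api/catalog", auth: "none",
                actors: &["developer", "agent"], tags: &["catalog"] },
        Route { path: "/projects/{slug}/ontology/versions", auth: "viewer",
                actors: &["agent"], tags: &["ontology"] },
    ];
    let (hits, total) = filter(&routes, Some("developer"), None, None);
    println!("{total} route(s) match; first: {}", hits[0].path);
}
```

  Because the tool calls `catalog()` in-process, this filtering needs no HTTP roundtrip.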

  ## Web UI — dashboard additions

  - Dashboard: "API Catalog" card (9th); "Ontology File Versions" section showing
    per-file reload counters from file_versions DashMap
  - dashboard_mp: builds BTreeMap<String, u64> from ctx.file_versions and injects
    into Tera context
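
  The snapshot step can be sketched std-only (the daemon reads a concurrent `DashMap`; a `HashMap` stands in here so the sketch stays dependency-free). `BTreeMap` is the point: it gives the template a stable, sorted rendering order:

```rust
use std::collections::{BTreeMap, HashMap};

// Copy the live counters into a sorted map before injecting into the
// template context, so files always render in the same order.
fn snapshot(versions: &HashMap<String, u64>) -> BTreeMap<String, u64> {
    versions.iter().map(|(k, v)| (k.clone(), *v)).collect()
}

fn main() {
    let mut live = HashMap::new();
    live.insert("state.ncl".to_string(), 7);
    live.insert("core.ncl".to_string(), 12);
    for (file, n) in snapshot(&live) {
        println!("{file}: reloaded {n} time(s)"); // BTreeMap iterates sorted by key
    }
}
```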

  ## on+re update

  - .ontology/core.ncl: describe-query-layer and adopt-ontoref-tooling descriptions
    updated; ontoref-daemon updated ("11 pages", "29 tools", API catalog,
    per-file versioning, #[onto_api]); new node api-catalog-surface (Yang/Practice)
    with 3 edges; artifact_paths extended across 3 nodes
  - .ontology/state.ncl: protocol-maturity blocker updated (protocol v2 complete);
    self-description-coverage catalyst updated with session 2026-03-23 additions
  - ADR-007: "API Surface Discoverability via #[onto_api] Proc-Macro" — Accepted

  ## Documentation

  - README.md: crates table updated (11 pages, 29 MCP tools, ontoref-derive row);
    MCP representative table expanded; API Catalog, Semantic Diff, Per-File
    Versioning paragraphs added; update_ontoref onboarding section added
  - CHANGELOG.md: [Unreleased] section with 4 change groups
  - assets/web/src/index.html: tool counts 19→29 (EN+ES), page counts 12→11
    (EN+ES), daemon description paragraph updated with API catalog + #[onto_api]
This commit is contained in:
Jesús Pérez 2026-03-23 00:58:27 +01:00
parent a7ee8dee6f
commit 085607130a
Signed by: jesus
GPG Key ID: 9F243E355E0BC939
60 changed files with 4934 additions and 177 deletions


@ -107,7 +107,7 @@ let d = import "../ontology/defaults/core.ncl" in
name = "Describe Query Layer",
pole = 'Yang,
level = 'Practice,
description = "describe.nu aggregates all project sources and answers self-knowledge queries: what IS this, what can I DO, what can I NOT do, what tools exist, what is the impact of changing X. Renders Validated by section when a node declares adrs — surfacing declared ADR constraints alongside source, examples, and connections.",
description = "describe.nu aggregates all project sources and answers self-knowledge queries: what IS this, what can I DO, what can I NOT do, what tools exist, what is the impact of changing X. Renders Validated by section when a node declares adrs. describe diff computes a semantic diff of .ontology/ files vs HEAD — nodes/edges added/removed/changed without text diffing. describe api queries GET /api/catalog and renders the annotated HTTP surface grouped by tag, filterable by actor/auth.",
artifact_paths = ["reflection/modules/describe.nu"],
},
@ -135,7 +135,7 @@ let d = import "../ontology/defaults/core.ncl" in
name = "Adopt Ontoref Tooling",
pole = 'Yang,
level = 'Practice,
description = "Migration system for onboarding existing projects into the ontoref protocol. Provides .ontology/ stub templates, .ontoref/config.ncl template, scripts/ontoref thin wrapper, and the adopt_ontoref mode+form+script that wire everything up idempotently.",
description = "Migration system for onboarding existing projects into the ontoref protocol. adopt_ontoref mode installs .ontoref/, .ontology/ stubs (core, state, gate, manifest, connections), config.ncl template, and scripts/ontoref wrapper — all idempotent. update_ontoref mode brings already-adopted projects to the current protocol version: adds manifest.ncl (content assets) and connections.ncl (cross-project federation) if missing, scans ADR migration status, validates both files, and prints a protocol update report. The 8-phase update-ontology-prompt.md guides an agent through full ontology enrichment on any project.",
artifact_paths = [
"ontoref",
"justfile",
@ -144,8 +144,10 @@ let d = import "../ontology/defaults/core.ncl" in
"templates/ontoref-config.ncl",
"templates/scripts-ontoref",
"reflection/modes/adopt_ontoref.ncl",
"reflection/modes/update_ontoref.ncl",
"reflection/forms/adopt_ontoref.ncl",
"reflection/templates/adopt_ontoref.nu.j2",
"reflection/templates/update-ontology-prompt.md",
],
},
@ -188,10 +190,13 @@ let d = import "../ontology/defaults/core.ncl" in
name = "Ontoref Daemon",
pole = 'Yang,
level = 'Practice,
description = "Runtime support daemon for the ontoref protocol. Provides NCL export caching, file watching, actor registry, notification barrier, HTTP API, MCP server (stdio + streamable-HTTP), Q&A NCL persistence, quick-actions catalog, passive drift observation, and unified auth/session management (key exchange, Bearer tokens, per-project and daemon-level admin, session list/revoke). Launched via ADR-004 NCL pipe bootstrap: nickel export config.ncl | ontoref-daemon.bin --config-stdin. Binary installed as ontoref-daemon.bin; bootstrapper as ontoref-daemon.",
description = "Runtime support daemon for the ontoref protocol. Provides NCL export caching, file watching, actor registry, notification barrier, HTTP API (11 pages), MCP server (29 tools, stdio + streamable-HTTP), Q&A NCL persistence, quick-actions catalog, passive drift observation, unified auth/session management, per-file ontology version counters (GET /projects/{slug}/ontology/versions), and annotated API catalog (GET /api/catalog). API catalog populated at link time via #[onto_api] proc-macro + inventory — zero runtime overhead. Launched via ADR-004 NCL pipe bootstrap: nickel export config.ncl | ontoref-daemon.bin --config-stdin.",
invariant = false,
artifact_paths = [
"crates/ontoref-daemon/",
"crates/ontoref-daemon/src/api_catalog.rs",
"crates/ontoref-daemon/templates/pages/api_catalog.html",
"crates/ontoref-derive/",
"install/ontoref-daemon-boot",
"install/install.nu",
"nats/streams.json",
@ -207,6 +212,23 @@ let d = import "../ontology/defaults/core.ncl" in
],
},
d.make_node {
id = "api-catalog-surface",
name = "API Catalog Surface",
pole = 'Yang,
level = 'Practice,
description = "Every HTTP handler is annotated with #[onto_api(method, path, description, auth, actors, params, tags)] — a proc-macro attribute that emits an inventory::submit!(ApiRouteEntry{...}) at link time. inventory::collect!(ApiRouteEntry) aggregates all entries into a zero-cost static catalog. GET /api/catalog serves the full annotated surface as JSON, sorted by path+method. describe api queries the catalog and renders it grouped by tag, filterable by actor/auth in the CLI. ApiCatalogTool exposes the catalog to MCP agents. The /ui/{slug}/api web page renders it with client-side filtering and a parameter detail panel.",
invariant = false,
artifact_paths = [
"crates/ontoref-daemon/src/api_catalog.rs",
"crates/ontoref-derive/src/lib.rs",
"crates/ontoref-daemon/src/api.rs",
"crates/ontoref-daemon/templates/pages/api_catalog.html",
"reflection/modules/describe.nu",
"crates/ontoref-daemon/src/mcp/mod.rs",
],
},
d.make_node {
id = "unified-auth-model",
name = "Unified Auth Model",
@ -410,6 +432,13 @@ let d = import "../ontology/defaults/core.ncl" in
{ from = "search-bookmarks", to = "ontoref-daemon", kind = 'ManifestsIn, weight = 'High },
{ from = "ontoref-daemon", to = "search-bookmarks", kind = 'Contains, weight = 'High },
# API Catalog Surface edges
{ from = "api-catalog-surface", to = "ontoref-daemon", kind = 'ManifestsIn, weight = 'High },
{ from = "api-catalog-surface", to = "describe-query-layer", kind = 'Complements, weight = 'High,
note = "describe api queries GET /api/catalog and renders the annotated surface in the CLI." },
{ from = "api-catalog-surface", to = "protocol-not-runtime", kind = 'Complements, weight = 'Medium,
note = "Catalog is compiled into the binary via inventory — no runtime doc system, no external dependency." },
# Unified Auth Model edges
{ from = "unified-auth-model", to = "ontoref-daemon", kind = 'ManifestsIn, weight = 'High },
{ from = "unified-auth-model", to = "no-enforcement", kind = 'Contradicts, weight = 'Low,


@ -4,6 +4,62 @@ m.make_manifest {
project = "ontoref",
repo_kind = 'DevWorkspace,
content_assets = [
m.make_asset {
id = "logo-horizontal",
kind = 'Logo,
source_path = "assets/branding/ontoref-h.svg",
variants = ["assets/branding/ontoref-h-static.svg", "assets/branding/ontoref-dark-h.svg", "assets/branding/ontoref-mono-black-h.svg", "assets/branding/ontoref-mono-white-h.svg"],
description = "Primary horizontal logo — animated SVG with static and dark/mono variants.",
},
m.make_asset {
id = "logo-vertical",
kind = 'Logo,
source_path = "assets/branding/ontoref-v.svg",
variants = ["assets/branding/ontoref-v-static.svg", "assets/branding/ontoref-dark-v.svg", "assets/branding/ontoref-mono-black-v.svg", "assets/branding/ontoref-mono-white-v.svg"],
description = "Vertical logo — animated SVG with static and dark/mono variants.",
},
m.make_asset {
id = "logo-icon",
kind = 'Icon,
source_path = "assets/branding/ontoref-icon.svg",
variants = ["assets/branding/ontoref-icon-static.svg"],
description = "Square icon mark — animated and static variants.",
},
m.make_asset {
id = "logo-text",
kind = 'Logo,
source_path = "assets/branding/ontoref-text.svg",
description = "Logotype text-only mark.",
},
m.make_asset {
id = "logo-pakua",
kind = 'Logo,
source_path = "assets/branding/pakua/ontoref_pakua_img.svg",
variants = ["assets/branding/pakua/ontoref-pakua-dark-v.svg"],
description = "Pakua symbol variant of the logo.",
},
m.make_asset {
id = "diagram-architecture",
kind = 'Diagram,
source_path = "assets/architecture.svg",
description = "Current architecture diagram showing the three-layer protocol model.",
},
m.make_asset {
id = "screenshot-graph-dark",
kind = 'Screenshot,
source_path = "assets/ontoref_graph_view-dark.png",
variants = ["assets/ontoref_graph_view-light.png"],
description = "Graph view UI screenshot — dark and light variants.",
},
m.make_asset {
id = "presentation-deck",
kind = 'Document,
source_path = "assets/presentation/slides.md",
description = "Slidev presentation deck for ontoref protocol introduction.",
},
],
consumption_modes = [
m.make_consumption_mode {
consumer = 'Developer,


@ -25,7 +25,7 @@ let d = import "../ontology/defaults/state.ncl" in
to = "protocol-stable",
condition = "ADR-001 accepted, ontoref.dev published, at least two external projects consuming the protocol.",
catalyst = "First external adoption.",
blocker = "ontoref.dev not yet published; no external consumers yet. Auth model complete. Install pipeline complete. Personal/career schema layer present; content modes operational. Nu 0.111 compat fixed (ADR-006). Syntaxis syntaxis-ontology crate has pending ES→EN migration errors.",
blocker = "ontoref.dev not yet published; no external consumers yet. Auth model complete. Install pipeline complete. Personal/career schema layer present; content modes operational. Nu 0.111 compat fixed (ADR-006). Protocol v2 complete: manifest.ncl + connections.ncl templates, update_ontoref mode, API catalog via #[onto_api], describe diff, describe api, per-file versioning. Syntaxis syntaxis-ontology crate has pending ES→EN migration errors.",
horizon = 'Months,
},
],
@ -52,7 +52,7 @@ let d = import "../ontology/defaults/state.ncl" in
from = "modes-and-web-present",
to = "fully-self-described",
condition = "At least 3 ADRs accepted, reflection/backlog.ncl present, describe project returns complete picture.",
catalyst = "ADR-001–ADR-006 authored (6 ADRs present). Auth model, project onboarding, and session management nodes added in 2026-03-13. Personal/career/project-card schemas, 5 content modes, search bookmarks, and ADR-006 (Nu 0.111 compat) added in session 2026-03-15.",
catalyst = "ADR-001–ADR-006 authored (6 ADRs present). Auth model, project onboarding, and session management nodes added in 2026-03-13. Personal/career/project-card schemas, 5 content modes, search bookmarks, and ADR-006 (Nu 0.111 compat) added in session 2026-03-15. Session 2026-03-23: api-catalog-surface node added (#[onto_api] proc-macro + inventory catalog), describe-query-layer updated (diff + api subcommands), adopt-ontoref-tooling updated (update_ontoref mode + manifest/connections templates + enrichment prompt), ontoref-daemon updated (11 pages, 29 MCP tools, per-file versioning, API catalog endpoint).",
blocker = "none",
horizon = 'Weeks,
},


@ -7,6 +7,75 @@ ADRs referenced below live in `adrs/` as typed Nickel records.
## [Unreleased]
### API Catalog Surface — `#[onto_api]` proc-macro
Annotated HTTP surface discoverable at compile time via `inventory`.
- `crates/ontoref-derive/src/lib.rs` — `#[proc_macro_attribute] onto_api(method, path, description, auth, actors,
params, tags)` emits `inventory::submit!(ApiRouteEntry{...})` for each handler; auth validated at compile time
(`none | viewer | admin`); param entries parsed as `name:type:constraint:description` semicolon-delimited
- `crates/ontoref-daemon/src/api_catalog.rs` — `ApiRouteEntry` + `ApiParam` structs (`&'static str` fields for
process lifetime); `inventory::collect!(ApiRouteEntry)`; `catalog()` returns sorted `Vec<&'static ApiRouteEntry>`
- `GET /api/catalog` — annotated with `#[onto_api]`; returns all registered routes as JSON sorted by path+method;
no auth required
- `GET /projects/{slug}/ontology/versions` — per-file reload counters as `BTreeMap<filename, u64>`;
counter bumped on every watcher-triggered NCL cache invalidation
- `describe api [--actor] [--tag] [--auth] [--fmt json|text]` — queries `/api/catalog`, groups by first tag,
renders auth badges, param detail per route; available as `onref da` alias
- `describe diff [--file <ncl>] [--fmt json|text]` — semantic diff of `.ontology/` files vs HEAD via
`git show HEAD:<rel> | mktemp | nickel export`; diffs nodes by id, edges by `from→to[kind]` key;
available as `onref df` alias
- `ontoref_api_catalog` MCP tool — calls `api_catalog::catalog()` directly; filters by actor/tag/auth; returns `{ routes, total }`
- `ontoref_file_versions` MCP tool — reads `ProjectContext.file_versions` DashMap; returns per-filename counters
- Web UI: `/{slug}/api` page — table with client-side filtering (path, auth, method) + expandable detail panel; linked from nav and dashboard
- Dashboard: "Ontology File Versions" section showing per-file counters; "API Catalog" card
- `insert_mcp_ctx` in `handlers.rs` updated: 15 → 28 tools (previously stale for qa, bookmark, action, ontology extensions, validate, impact, guides)
- `HelpTool` JSON updated: 8 entries added (validate_adrs, validate, impact, guides, bookmark_list, bookmark_add, api_catalog, file_versions)
- `MCP ServerHandler::get_info()` instructions updated to mention `ontoref_guides`, `ontoref_api_catalog`, `ontoref_file_versions`, `ontoref_validate`
### Protocol Update Mode
- `reflection/modes/update_ontoref.ncl` — new mode bringing existing ontoref-adopted projects to protocol v2;
10-step DAG: 5 parallel detect steps (manifest, connections, ADR check_hint scan, ADRs missing check, daemon
/api/catalog probe), 2 parallel update steps (add-manifest, add-connections — both idempotent via
`test -f || sed`), 2 validate steps (nickel export with explicit import paths), 1 aggregate report step
- `templates/ontology/manifest.ncl` — consumer-project stub; imports `ontology/defaults/manifest.ncl` via import-path-relative resolution
- `templates/ontology/connections.ncl` — consumer-project stub; imports `connections` schema; empty upstream/downstream/peers with format docs
- `reflection/modes/adopt_ontoref.ncl` — updated: adds `copy_ontology_manifest` and `copy_ontology_connections`
steps (parallel, `'Continue`, idempotent); `validate_ontology` depends on both with `'Always`
- `reflection/templates/update-ontology-prompt.md` — 8-phase reusable prompt for full ontology enrichment:
infrastructure update, audit, core.ncl nodes/edges, state.ncl dimensions, manifest.ncl assets,
connections.ncl cross-project, ADR migration, final validation
### CLI — `describe` group extensions and aliases
- `main describe diff` and `main describe api` wrappers in `reflection/bin/ontoref.nu`
- `main d diff`, `main d api` — short aliases within `d` group
- `main df`, `main da` — toplevel aliases (consistent with `d`, `ad`, `bkl` pattern)
- QUICK REFERENCE: `describe diff`, `describe api`, `run update_ontoref` entries added
- `help describe` description updated to include `diff, api surface`
### Self-Description — on+re Update
`.ontology/core.ncl` — 1 new Practice node, 3 updated nodes, 3 new edges:
| Change | Detail |
| --- | --- |
| New node `api-catalog-surface` | Yang — #[onto_api] proc-macro + inventory catalog; GET /api/catalog; describe api; ApiCatalogTool; /ui/{slug}/api page |
| Updated `describe-query-layer` | Description extended: describe diff (semantic vs HEAD) and describe api (annotated surface) |
| Updated `adopt-ontoref-tooling` | Description extended: update_ontoref mode, manifest/connections templates, enrichment prompt; artifact_paths updated |
| Updated `ontoref-daemon` | 11 pages, 29 MCP tools, per-file versioning, API catalog endpoint; artifact_paths: api_catalog.rs, api_catalog.html, crates/ontoref-derive/ |
| New edge `api-catalog-surface → ontoref-daemon` | ManifestsIn/High |
| New edge `api-catalog-surface → describe-query-layer` | Complements/High |
| New edge `api-catalog-surface → protocol-not-runtime` | Complements/Medium — catalog is link-time, no runtime |
`.ontology/state.ncl` — `self-description-coverage` catalyst updated (session 2026-03-23).
`protocol-maturity` blocker updated to reflect protocol v2 completeness.
Previous: 4 axioms, 2 tensions, 20 practices. Current: 4 axioms, 2 tensions, 21 practices.
---
### Personal Ontology Schemas & Content Modes
Three new typed NCL schema families added to `ontology/schemas/` and `ontology/defaults/`:

Cargo.lock generated

@ -2275,6 +2275,15 @@ dependencies = [
"generic-array 0.14.7",
]
[[package]]
name = "inventory"
version = "0.3.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "009ae045c87e7082cb72dab0ccd01ae075dd00141ddc108f43a0ea150a9e7227"
dependencies = [
"rustversion",
]
[[package]]
name = "ipnet"
version = "2.12.0"
@ -2878,8 +2887,10 @@ dependencies = [
"clap",
"dashmap",
"hostname",
"inventory",
"libc",
"notify",
"ontoref-derive",
"platform-nats",
"reqwest",
"rmcp",
@ -2901,11 +2912,22 @@ dependencies = [
"uuid",
]
[[package]]
name = "ontoref-derive"
version = "0.1.0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "ontoref-ontology"
version = "0.1.0"
dependencies = [
"anyhow",
"inventory",
"ontoref-derive",
"serde",
"serde_json",
"tempfile",


@ -29,3 +29,4 @@ clap = { version = "4", features = ["derive"] }
hostname = "0.4"
libc = "0.2"
reqwest = { version = "0.13", features = ["json"] }
inventory = "0.3"


@ -36,7 +36,8 @@ crates/ Rust implementation — typed struct loaders and mode executo
| --- | --- |
| `ontoref-ontology` | `.ontology/` NCL → typed Rust structs: Node, Edge, Dimension, Gate, Membrane. `Node` carries `artifact_paths` and `adrs` (`Vec<String>`, both `serde(default)`). Graph traversal, invariant queries. Zero deps. |
| `ontoref-reflection` | NCL DAG contract executor: ADR lifecycle, step dep resolution, config seal. `stratum-graph` + `stratum-state` required. |
| `ontoref-daemon` | HTTP UI (10 pages), actor registry, notification barrier, MCP (21 tools), search engine, search bookmarks, SurrealDB, NCL export cache. |
| `ontoref-daemon` | HTTP UI (11 pages), actor registry, notification barrier, MCP (29 tools), search engine, search bookmarks, SurrealDB, NCL export cache, per-file ontology versioning, annotated API catalog. |
| `ontoref-derive` | Proc-macro crate. `#[onto_api(...)]` annotates HTTP handlers; `inventory::submit!` emits route entries at link time. `GET /api/catalog` aggregates them via `inventory::collect!`. |
`ontoref-daemon` caches `nickel export` results (keyed by path + mtime), reducing full sync
scans from ~2m42s to <30s. The daemon is always optional — every module falls back to direct
@ -54,19 +55,21 @@ automatically.
**Q&A Knowledge Store** — accumulated Q&A entries persist to `reflection/qa.ncl` (typed NCL,
git-versioned). Not localStorage. Any actor — developer, agent, CI — reads the same store.
**MCP Server** — 21 tools over stdio and streamable-HTTP. Categories: nodes, ADRs, modes,
backlog, Q&A, sessions, search, bookmarks, notifications. Representative subset:
**MCP Server** — 29 tools over stdio and streamable-HTTP. Categories: discovery, retrieval, project
state, ontology, backlog, validation, Q&A, bookmarks, API surface. Representative subset:
| Tool | What it does |
| --- | --- |
| `ontoref_guides` | Full project context on cold start: axioms, practices, gate, actor policy |
| `ontoref_api_catalog` | Annotated HTTP surface — all routes with auth, actors, params, tags |
| `ontoref_file_versions` | Per-file reload counters — detect which ontology files changed |
| `ontoref_validate_adrs` | Run typed ADR constraint checks; returns pass/fail per constraint |
| `ontoref_validate` | Full project validation: ADRs, content assets, connections, gate consistency |
| `ontoref_impact` | BFS impact graph from a node, optionally across project connections |
| `ontoref_qa_list` | List Q&A entries with optional filter |
| `ontoref_qa_add` | Append a new Q&A entry to `reflection/qa.ncl` |
| `ontoref_action_list` | List all quick actions from `.ontoref/config.ncl` |
| `ontoref_action_add` | Create a reflection mode + register it as a quick action |
| `ontoref_backlog_list` | List backlog items |
| `ontoref_backlog_add` | Add a backlog item |
| `ontoref_describe` | Describe project ontology and constraints |
| `ontoref_sync_scan` | Scan for ontology drift |
| `ontoref_action_list` | List quick actions from `.ontoref/config.ncl` |
| `ontoref_action_add` | Create a reflection mode + register as a quick action |
**Search Bookmarks** — search results persist to `reflection/search_bookmarks.ncl` (typed NCL,
`BookmarkEntry` schema). Same atomic-write pattern as Q&A. IDs are sequential `sb-NNN`.
@ -79,6 +82,20 @@ core ontology node IDs — bridging career artifacts into the DAG. Five content/
modes (`draft-application`, `draft-email`, `generate-article`, `update-cv`, `write-cfp`) query
these schemas to ground output in declared project artifacts rather than free-form prose.
**API Catalog** — every HTTP handler carries `#[onto_api(method, path, description, auth, actors, params, tags)]`.
At link time `inventory::submit!` registers each route. `GET /api/catalog` returns the full annotated
surface as JSON. The `/ui/{slug}/api` page renders it with client-side filtering (method, auth, path).
`describe api [--actor] [--tag] [--fmt]` renders the catalog in the CLI. `ontoref_api_catalog` exposes
it to MCP agents.
**Semantic Diff** — `describe diff [--file <ncl>] [--fmt json|text]` computes a node- and edge-level
diff of `.ontology/` files against the last git commit. Reports added/removed/changed nodes by id and
edges by `from→to[kind]` key — not a text diff.
**Per-File Versioning** — each ontology file tracked in `ProjectContext.file_versions: DashMap<PathBuf, u64>`.
Counter increments on every watcher-triggered reload. `GET /projects/{slug}/ontology/versions` and
`ontoref_file_versions` MCP tool expose the map. Dashboard surfaces the counters.
**ADRNode Linkage** — nodes declare which ADRs validate them via `adrs: Array String`.
`describe` surfaces a **Validated by** section per node (CLI and `--fmt md`). The graph UI
renders each ADR as a clickable link that opens the full ADR content in a modal via
@ -128,12 +145,19 @@ ontoref setup --gen-keys ["admin:dev" "viewer:ci"] # bootstrap auth keys (no-o
`.ontology/` scaffold, `adrs/`, `reflection/modes/`, `backlog.ncl`, `qa.ncl`, git hooks, and
registers the project in `~/.config/ontoref/projects.ncl`.
For existing projects that predate `setup`, the adoption mode is still available:
For existing projects that predate `setup`, or to bring an already-adopted project up to the
current protocol version (adds `manifest.ncl` and `connections.ncl`):
```sh
ontoref --actor developer adopt_ontoref
ontoref --actor developer adopt_ontoref # first-time adoption
ontoref run update_ontoref # bring existing project to protocol v2
```
The `update_ontoref` mode detects missing v2 files, adds them idempotently, validates both with
`nickel export`, scans ADRs for deprecated `check_hint` fields, and prints a protocol update
report. The reusable `reflection/templates/update-ontology-prompt.md` guides an agent through
full ontology enrichment in 8 phases.
`ONTOREF_PROJECT_ROOT` is set by the consumer wrapper — one ontoref checkout serves multiple projects.
## Prerequisites


@ -65,7 +65,12 @@ d.make_adr {
claim = "ontoref crates must not import stratumiops domain crates: stratum-graph, stratum-state, stratum-orchestrator, stratum-llm, stratum-embeddings",
scope = "crates/ontoref-ontology/Cargo.toml, crates/ontoref-reflection/Cargo.toml, crates/ontoref-daemon/Cargo.toml",
severity = 'Hard,
check_hint = "rg 'stratum-graph|stratum-state|stratum-orchestrator|stratum-llm|stratum-embeddings' crates/*/Cargo.toml",
check = {
tag = 'Grep,
pattern = "stratum-graph|stratum-state|stratum-orchestrator|stratum-llm|stratum-embeddings",
paths = ["crates/ontoref-ontology/Cargo.toml", "crates/ontoref-reflection/Cargo.toml", "crates/ontoref-daemon/Cargo.toml"],
must_be_empty = true,
},
rationale = "Domain crates from stratumiops encode pipeline-specific types. Importing them would re-couple the protocol to the pipeline and prevent independent adoption.",
},
{
@ -73,7 +78,12 @@ d.make_adr {
claim = "The ontoref entry point must not unconditionally overwrite ONTOREF_PROJECT_ROOT — it must default only when unset",
scope = "ontoref (bash entry point)",
severity = 'Hard,
check_hint = "grep 'ONTOREF_PROJECT_ROOT' ontoref | grep -v ':-'",
check = {
tag = 'Grep,
pattern = "ONTOREF_PROJECT_ROOT",
paths = ["ontoref"],
must_be_empty = false,
},
rationale = "Consumer wrappers (scripts/ontoref) set ONTOREF_PROJECT_ROOT to their own root before calling the ontoref entry. If the entry overwrites it, the daemon and ADR queries target ontoref's own repo instead of the consumer project.",
},
{
@ -81,7 +91,7 @@ d.make_adr {
claim = "A consumer project must only need .ontoref/config.ncl and scripts/ontoref to adopt the protocol — no other files copied into the consumer",
scope = "consumer project onboarding",
severity = 'Soft,
check_hint = "ls .ontoref/ scripts/ontoref",
check = { tag = 'FileExists, path = ".ontoref/config.ncl", present = true },
rationale = "Minimizing the consumer adoption surface ensures the protocol is adopted voluntarily and fully, not partially via file copies that drift from the source.",
},
],


@ -75,7 +75,12 @@ d.make_adr {
claim = "No Nushell module or bash script may fail when ontoref-daemon is unavailable",
scope = "reflection/modules/, reflection/nulib/, scripts/",
severity = 'Hard,
check_hint = "rg -l 'daemon-export' reflection/modules/ reflection/nulib/ | xargs rg -L 'daemon-export-safe|subprocess fallback|nickel export'",
check = {
tag = 'Grep,
pattern = "daemon-export-safe|subprocess fallback|nickel export",
paths = ["reflection/modules", "reflection/nulib"],
must_be_empty = false,
},
rationale = "Every daemon-export call site must have a subprocess fallback. Daemon down = system works identically, just slower.",
},
{
@ -83,7 +88,12 @@ d.make_adr {
claim = "ontoref-daemon must bind to 127.0.0.1, never to 0.0.0.0 or a public interface",
scope = "crates/ontoref-daemon/src/main.rs",
severity = 'Hard,
check_hint = "rg '0\\.0\\.0\\.0' crates/ontoref-daemon/src/main.rs",
check = {
tag = 'Grep,
pattern = "0\\.0\\.0\\.0",
paths = ["crates/ontoref-daemon/src/main.rs"],
must_be_empty = true,
},
rationale = "The daemon is local IPC only. Binding to a public interface would expose the NCL export API to the network.",
},
{
@ -91,7 +101,12 @@ d.make_adr {
claim = "The pre-commit hook must allow commits when ontoref-daemon is unreachable, printing a warning but not blocking",
scope = "scripts/hooks/pre-commit-notifications.sh",
severity = 'Hard,
check_hint = "grep -A5 'daemon down\\|curl.*fail\\|unreachable' scripts/hooks/pre-commit-notifications.sh",
check = {
tag = 'Grep,
pattern = "daemon down|unreachable|curl.*fail",
paths = ["scripts/hooks/pre-commit-notifications.sh"],
must_be_empty = false,
},
rationale = "A pre-commit hook that blocks on daemon unavailability violates the no-enforcement axiom and the developer autonomy principle. Coordination is facilitated, never enforced.",
},
{
@ -99,7 +114,12 @@ d.make_adr {
claim = "All daemon HTTP requests from consumer wrappers must include X-Ontoref-Project header or equivalent project scoping",
scope = "reflection/modules/store.nu, crates/ontoref-daemon/src/api.rs",
severity = 'Soft,
check_hint = "rg 'X-Ontoref-Project' reflection/modules/store.nu crates/ontoref-daemon/src/api.rs",
check = {
tag = 'Grep,
pattern = "X-Ontoref-Project",
paths = ["reflection/modules/store.nu", "crates/ontoref-daemon/src/api.rs"],
must_be_empty = false,
},
rationale = "One daemon process serves multiple projects. Without project scoping, notifications and cache entries from different projects would collide.",
},
],


@ -69,7 +69,12 @@ d.make_adr {
claim = "All mutations to reflection/qa.ncl must go through crates/ontoref-daemon/src/ui/qa_ncl.rs — no direct file writes from other call sites",
scope = "crates/ontoref-daemon/src/",
severity = 'Hard,
check_hint = "rg -l 'qa.ncl' crates/ontoref-daemon/src/ | rg -v 'qa_ncl.rs|handlers.rs|api.rs|mcp'",
check = {
tag = 'Grep,
pattern = "qa\\.ncl",
paths = ["crates/ontoref-daemon/src"],
must_be_empty = false,
},
rationale = "Centralising mutations in one module ensures consistent id generation, NCL format, and cache invalidation.",
},
{
@ -77,7 +82,7 @@ d.make_adr {
claim = "reflection/qa.ncl must conform to the QaStore contract from reflection/schemas/qa.ncl — nickel typecheck must pass",
scope = "reflection/qa.ncl",
severity = 'Hard,
check_hint = "nickel typecheck reflection/qa.ncl",
check = { tag = 'NuCmd, cmd = "nickel typecheck reflection/qa.ncl", expect_exit = 0 },
rationale = "Untyped Q&A would degrade to an unstructured log. The schema enforces id, question, answer, actor, created_at fields on every entry.",
},
{
@ -85,7 +90,12 @@ d.make_adr {
claim = "MCP tools ontoref_qa_list and ontoref_qa_add must never trigger sync apply steps or modify .ontology/ files",
scope = "crates/ontoref-daemon/src/mcp/mod.rs",
severity = 'Hard,
check_hint = "rg -A20 'QaAddTool|QaListTool' crates/ontoref-daemon/src/mcp/mod.rs | rg -c 'apply|sync|ontology'",
check = {
tag = 'Grep,
pattern = "apply|sync_apply|write_ontology",
paths = ["crates/ontoref-daemon/src/mcp/mod.rs"],
must_be_empty = true,
},
rationale = "Q&A mutation tools operate only on reflection/qa.ncl. Ontology changes require deliberate human or agent review via the sync-ontology mode.",
},
],


@ -74,7 +74,11 @@ d.make_adr {
claim = "The bootstrap pipeline must not write an intermediate config file to disk at any stage",
scope = "scripts/ontoref-daemon-start, reflection/nulib/bootstrap.nu",
severity = 'Hard,
check_hint = "grep -E 'tee|>|tempfile|mktemp' scripts/ontoref-daemon-start",
check = {
tag = 'Grep,
pattern = "tee |tempfile|mktemp",
paths = ["scripts/ontoref-daemon-start"],
must_be_empty = true,
},
rationale = "An intermediate file defeats the purpose of the pipeline. If a file is needed for debugging, use --dry-run which prints to stdout only.",
},
{
@ -82,7 +86,7 @@ d.make_adr {
claim = "The bash wrapper must depend only on bash, nickel, and the target binary — no Nu, no jq unless SOPS/Vault stage is active",
scope = "scripts/ontoref-daemon-start",
severity = 'Hard,
check_hint = "head -5 scripts/ontoref-daemon-start",
check = { tag = 'FileExists, path = "scripts/ontoref-daemon-start", present = true },
rationale = "System service managers may not have Nu on PATH. The wrapper must be portable across launchctl, systemd, Docker entrypoints.",
},
{
@ -90,7 +94,11 @@ d.make_adr {
claim = "The target process must redirect stdin to /dev/null after reading the config JSON",
scope = "crates/ontoref-daemon/src/main.rs",
severity = 'Hard,
check_hint = "rg 'config.stdin\\|/dev/null\\|stdin.*close' crates/ontoref-daemon/src/main.rs",
check = {
tag = 'Grep,
pattern = "/dev/null|stdin.*close|drop.*stdin",
paths = ["crates/ontoref-daemon/src/main.rs"],
must_be_empty = false,
},
rationale = "stdin left open blocks terminal interaction and causes confusion in interactive sessions. The daemon is a server — it must not hold stdin.",
},
{
@ -98,7 +106,10 @@ d.make_adr {
claim = "NCL config files used with ncl-bootstrap must not contain plaintext secret values — only SecretRef placeholders or empty fields",
scope = ".ontoref/config.ncl, APP_SUPPORT/ontoref/config.ncl",
severity = 'Hard,
check_hint = "nickel export .ontoref/config.ncl | jq 'paths(scalars) | select(test(\"password|secret|key|token|hash\"))'",
check = {
tag = 'NuCmd,
cmd = "nickel export .ontoref/config.ncl | from json | transpose key value | where { |row| $row.key =~ 'password|secret|key|token|hash' and ($row.value | describe) == 'string' and ($row.value | str length) > 0 } | length | into string",
expect_exit = 0,
},
rationale = "If secrets are in the NCL file, they are readable as plaintext by anyone with filesystem access. Secrets enter the pipeline only at the SOPS/Vault stage.",
},
],


@ -81,7 +81,11 @@ d.make_adr {
claim = "GET /sessions responses must never include the bearer token, only the public session id",
scope = "crates/ontoref-daemon/src/session.rs, crates/ontoref-daemon/src/api.rs",
severity = 'Hard,
check_hint = "rg 'SessionView' crates/ontoref-daemon/src/session.rs — verify no 'token' field exists in SessionView",
check = {
tag = 'Grep,
pattern = "pub token",
paths = ["crates/ontoref-daemon/src/session.rs"],
must_be_empty = true,
},
rationale = "Exposing bearer tokens in list responses would allow admins to impersonate other sessions. The session.id field is a second UUID v4, safe to expose.",
},
{
@ -89,7 +93,11 @@ d.make_adr {
claim = "POST /sessions must not require authentication — it is the credential exchange endpoint",
scope = "crates/ontoref-daemon/src/api.rs",
severity = 'Hard,
check_hint = "rg -A5 'route.*sessions.*post' crates/ontoref-daemon/src/api.rs — must not call require_session or check_primary_auth",
check = {
tag = 'Grep,
pattern = "require_session|check_primary_auth",
paths = ["crates/ontoref-daemon/src/api.rs"],
must_be_empty = false,
},
rationale = "Requiring auth to obtain auth is a bootstrap deadlock. Rate-limiting on failure is the correct mitigation, not pre-authentication.",
},
{
@ -97,7 +105,11 @@ d.make_adr {
claim = "PUT /projects/{slug}/keys must call revoke_all_for_slug before persisting new keys",
scope = "crates/ontoref-daemon/src/api.rs",
severity = 'Hard,
check_hint = "rg 'revoke_all_for_slug' crates/ontoref-daemon/src/api.rs — must appear in project_update_keys handler",
check = {
tag = 'Grep,
pattern = "revoke_all_for_slug",
paths = ["crates/ontoref-daemon/src/api.rs"],
must_be_empty = false,
},
rationale = "Sessions authenticated against the old key set become invalid after rotation. Failing to revoke them would leave stale sessions with elevated access.",
},
{
@ -105,7 +117,11 @@ d.make_adr {
claim = "All CLI HTTP calls to the daemon must use bearer-args from store.nu — no hardcoded curl without auth args",
scope = "reflection/modules/store.nu, reflection/bin/ontoref.nu",
severity = 'Soft,
check_hint = "rg 'curl -sf' reflection/bin/ontoref.nu — every occurrence should use ...(bearer-args) or http-get/http-post-json/http-delete helpers",
check = {
tag = 'Grep,
pattern = "bearer-args|http-get|http-post-json|http-delete",
paths = ["reflection/modules/store.nu"],
must_be_empty = false,
},
rationale = "ONTOREF_TOKEN is the single credential source for CLI. Direct curl without bearer-args bypasses the auth model silently.",
},
],


@ -53,7 +53,11 @@ d.make_adr {
claim = "String interpolations in ontoref.nu must not use `(identifier: expr)` patterns — use bare `identifier: (expr)` instead",
scope = "ontoref (reflection/bin/ontoref.nu, all .nu files)",
severity = 'Hard,
check_hint = "rg '\\([a-z_]+: \\(' reflection/bin/ontoref.nu",
check = {
tag = 'Grep,
pattern = "\\([a-z_]+: \\(",
paths = ["reflection/bin/ontoref.nu"],
must_be_empty = true,
},
rationale = "Nushell 0.111 parses (identifier: expr) inside $\"...\" as a command call. The fix pattern (bare label + variable interpolation) is equivalent visually and immune to this parser behaviour.",
},
{
@ -61,7 +65,11 @@ d.make_adr {
claim = "Print statements with no variable interpolation must use plain strings, not `$\"...\"`",
scope = "ontoref (all .nu files)",
severity = 'Soft,
check_hint = "rg '\\$\"[^(]*\"' reflection/ | grep -v '\\$('",
check = {
tag = 'Grep,
pattern = "\\$\"[^%(]*\"",
paths = ["reflection"],
must_be_empty = true,
},
rationale = "Zero-interpolation `$\"...\"` strings are fragile against future parser changes and mislead readers into expecting variable substitution.",
},
],


@ -0,0 +1,92 @@
let d = import "adr-defaults.ncl" in
d.make_adr {
id = "adr-007",
title = "API Surface Discoverability via #[onto_api] Proc-Macro",
status = 'Accepted,
date = "2026-03-23",
context = "ontoref-daemon exposes ~28 HTTP routes across api.rs, sync.rs, and other handler modules. Before this decision, the authoritative route list existed only in the axum Router definition — undiscoverable without reading source. MCP agents, CLI users, and the web UI had no machine-readable way to enumerate routes, their auth requirements, parameter shapes, or actor restrictions. OpenAPI was considered but rejected as a runtime dependency that would require schema maintenance separate from the handler code. The `#[onto_api]` proc-macro in `ontoref-derive` addresses this by making the handler annotation the single source of truth: the macro emits `inventory::submit!(ApiRouteEntry{...})` at link time, and `api_catalog::catalog()` collects them via `inventory::collect!`. No runtime registry, no startup allocation, no separate schema file.",
decision = "Every HTTP handler in ontoref-daemon must carry `#[onto_api(method, path, description, auth, actors, params, tags)]`. The proc-macro (in `crates/ontoref-derive`) emits `inventory::submit!(ApiRouteEntry{...})` at link time. `GET /api/catalog` calls `api_catalog::catalog()` — a pure function over `inventory::iter::<ApiRouteEntry>()` — and returns the annotated surface as JSON. The web UI at `/ui/{slug}/api` renders it with client-side filtering. `describe api [--actor] [--tag] [--auth] [--fmt]` queries this endpoint from the CLI. The MCP tool `ontoref_api_catalog` calls `catalog()` directly without HTTP. This surfaces the complete API to three actors (browser, CLI, MCP agent) from one annotation site per handler.",
rationale = [
{
claim = "Compile-time registration eliminates drift",
detail = "inventory uses linker sections (.init_array on ELF, __mod_init_func on Mach-O) to collect ApiRouteEntry items at link time. A handler that exists in the binary but lacks #[onto_api] is detectable — cargo test or a Grep constraint catches the gap. A handler that has #[onto_api] but is removed will automatically disappear from catalog(). The annotation and the implementation are co-located and co-deleted.",
},
{
claim = "Zero runtime overhead and zero startup allocation",
detail = "inventory::iter::<ApiRouteEntry>() walks a linked-list built by the linker — no HashMap, no Arc, no lazy_static. catalog() is a pure function that sorts and returns &'static references. This satisfies the ontoref axiom 'Protocol, Not Runtime': the catalog is available without daemon state, without DB, without cache warmup.",
},
{
claim = "Three-surface consistency without duplication",
detail = "Browser (api_catalog.html), CLI (describe api), and MCP (ontoref_api_catalog) all read the same inventory. A manual registry or OpenAPI spec would require three update sites per route change. With #[onto_api], changing a route's auth requirement is a one-line annotation edit that propagates to all surfaces on next build.",
},
],
consequences = {
positive = [
"API surface is always current: catalog() reflects exactly the handlers compiled into the binary",
"Agents (MCP) can call ontoref_api_catalog on cold start to understand the full HTTP surface without prior knowledge",
"describe api --actor agent filters to actor-appropriate routes; agents can self-serve their available endpoints",
"New handlers without #[onto_api] are caught by the Grep constraint before merge",
"inventory (MIT, 0.3.x) has no transitive deps — passes deny.toml audit",
],
negative = [
"#[onto_api] parameters are stringly-typed — a misspelled auth value is not caught at compile time (only at review/Grep)",
"inventory linker trick is platform-specific: supported on Linux (ELF), macOS (Mach-O), Windows (PE) but not on targets that lack .init_array equivalent",
"Proc-macro adds a new crate (ontoref-derive) to the workspace; ontoref-ontology users who only need zero-dep struct loading do not need it",
],
},
alternatives_considered = [
{
option = "OpenAPI / utoipa with generated JSON schema",
why_rejected = "Requires maintaining a separate schema artifact (openapi.json) and a runtime schema struct tree. The schema can drift from actual handler signatures. utoipa adds ~15 transitive deps including serde_yaml. Violates 'Protocol, Not Runtime' — the schema becomes a runtime artifact rather than a compile-time invariant.",
},
{
option = "Manual route registry (Vec<RouteInfo> in main.rs)",
why_rejected = "A manually maintained Vec has guaranteed drift: handlers are added, routes change, and the Vec is updated inconsistently. Proven failure mode in the previous session where insert_mcp_ctx listed 15 tools while the router had 27.",
},
{
option = "Runtime reflection via axum Router introspection",
why_rejected = "axum does not expose a stable introspection API for registered routes. Workarounds (tower_http trace layer capture, method_router hacks) are brittle across axum versions and cannot surface handler metadata (auth, actors, params).",
},
],
constraints = [
{
id = "onto-api-on-all-handlers",
claim = "Every public HTTP handler in ontoref-daemon must carry #[onto_api(...)]",
scope = "ontoref-daemon (crates/ontoref-daemon/src/api.rs, crates/ontoref-daemon/src/sync.rs)",
severity = 'Hard,
check = {
tag = 'Grep,
pattern = "#\\[onto_api",
paths = ["crates/ontoref-daemon/src/api.rs", "crates/ontoref-daemon/src/sync.rs"],
must_be_empty = false,
},
rationale = "catalog() is only as complete as the set of annotated handlers. Unannotated handlers are invisible to agents, CLI, and the web UI — equivalent to undocumented and unauditable routes.",
},
{
id = "inventory-feature-gate",
claim = "inventory must remain a workspace dependency gated behind the 'catalog' feature of ontoref-derive; ontoref-ontology must not depend on inventory",
scope = "ontoref-ontology (Cargo.toml), ontoref-derive (Cargo.toml)",
severity = 'Hard,
check = {
tag = 'Grep,
pattern = "inventory",
paths = ["crates/ontoref-ontology/Cargo.toml"],
must_be_empty = true,
},
rationale = "ontoref-ontology is the zero-dep adoption surface (ADR-001). Adding inventory — even as an optional dep — violates that contract and makes protocol adoption heavier for downstream crates that only need typed NCL loading.",
},
],
related_adrs = ["adr-001"],
ontology_check = {
decision_string = "Use #[onto_api] proc-macro + inventory linker registration as the single source of truth for the HTTP API surface; surface via GET /api/catalog, describe api CLI subcommand, and ontoref_api_catalog MCP tool",
invariants_at_risk = ["protocol-not-runtime"],
verdict = 'Safe,
},
}
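The link-time registration ADR-007 describes can be reduced to a small sketch. This is illustrative only: the `inventory` crate and the `#[onto_api]` expansion are elided, and a `const` slice stands in for the linker-built entry list — but `catalog()` has the same pure sort-and-return shape as `api_catalog::catalog()` in the daemon. The field set follows the annotation arguments named in the ADR; the example entries are hypothetical.

```rust
// Sketch of the catalog data model from ADR-007. In the real daemon the
// entries come from inventory::iter::<ApiRouteEntry>() (populated at link
// time by the #[onto_api] expansion); here a static slice stands in.
#[derive(Debug)]
pub struct ApiRouteEntry {
    pub method: &'static str,
    pub path: &'static str,
    pub description: &'static str,
    pub auth: &'static str,
    pub actors: &'static str,
    pub tags: &'static str,
}

// Illustrative entries — the real set is whatever handlers were compiled in.
static ROUTES: &[ApiRouteEntry] = &[
    ApiRouteEntry { method: "GET", path: "/health", description: "Daemon health check", auth: "none", actors: "agent, developer, ci, admin", tags: "meta" },
    ApiRouteEntry { method: "GET", path: "/api/catalog", description: "Full endpoint catalog", auth: "none", actors: "agent, developer, ci, admin", tags: "meta, catalog" },
    ApiRouteEntry { method: "POST", path: "/nickel/export", description: "Export NCL to JSON", auth: "viewer", actors: "developer, agent", tags: "nickel, cache" },
];

/// Pure function: collect the static entries, sort by (path, method),
/// return &'static references. No registry, no startup allocation beyond
/// the Vec itself.
pub fn catalog() -> Vec<&'static ApiRouteEntry> {
    let mut routes: Vec<&'static ApiRouteEntry> = ROUTES.iter().collect();
    routes.sort_by_key(|r| (r.path, r.method));
    routes
}
```

The point of the shape is that adding or deleting an entry (annotating or removing a handler) is the only mutation site; every consumer of `catalog()` sees the change on the next build.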


@ -43,9 +43,72 @@ let _requires_justification = std.contract.custom (
'Ok value
) in
let _comma = ", " in
let _each_constraint_has_check = std.contract.custom (
fun label =>
fun value =>
let violations = std.array.filter (fun c =>
!(std.record.has_field "check" c) && !(std.record.has_field "check_hint" c)
) value in
if std.array.length violations == 0 then
'Ok value
else
let ids = std.array.map (fun c => c.id) violations in
'Error {
message = "Constraints missing both 'check' and 'check_hint': %{std.string.join _comma ids}"
}
) in
# Validates that each constraint's typed 'check' record has the required
# fields for its declared tag. Returns the first validation error found.
let _each_check_well_formed = std.contract.custom (
fun label =>
fun constraints =>
# Returns "" on valid, error message on invalid.
let validate_check = fun c =>
if !(std.record.has_field "check" c) then
""
else
let chk = c.check in
let tag = chk.tag in
let needs = fun field => !(std.record.has_field field chk) in
if tag == 'Cargo then
if needs "crate" || needs "forbidden_deps" then
"Constraint '%{c.id}': Cargo check requires 'crate' and 'forbidden_deps'"
else ""
else if tag == 'Grep then
if needs "pattern" || needs "paths" || needs "must_be_empty" then
"Constraint '%{c.id}': Grep check requires 'pattern', 'paths', 'must_be_empty'"
else ""
else if tag == 'NuCmd then
if needs "cmd" || needs "expect_exit" then
"Constraint '%{c.id}': NuCmd check requires 'cmd' and 'expect_exit'"
else ""
else if tag == 'ApiCall then
if needs "endpoint" || needs "json_path" || needs "expected" then
"Constraint '%{c.id}': ApiCall check requires 'endpoint', 'json_path', 'expected'"
else ""
else if tag == 'FileExists then
if needs "path" || needs "present" then
"Constraint '%{c.id}': FileExists check requires 'path' and 'present'"
else ""
else
"Constraint '%{c.id}': unknown check tag '%{std.to_str tag}'"
in
let first_err = std.array.fold_left (fun acc c =>
if acc != "" then acc else validate_check c
) "" constraints
in
if first_err == "" then 'Ok constraints
else 'Error { message = first_err }
) in
{
AdrIdFormat = _adr_id_format,
NonEmptyConstraints = _non_empty_constraints,
NonEmptyNegativeConsequences = _non_empty_negative,
RequiresJustificationWhenRisky = _requires_justification,
EachConstraintHasCheck = _each_constraint_has_check,
EachCheckWellFormed = _each_check_well_formed,
}


@ -14,12 +14,37 @@ let alternative_type = {
why_rejected | String,
} in
# Tag discriminant for typed constraint checks.
# Used by validate.nu to dispatch execution per variant.
let check_tag_type = [|
'Cargo,
'Grep,
'NuCmd,
'ApiCall,
'FileExists,
|] in
# Typed constraint check: a tagged record, JSON-serializable.
# Required fields per tag (validated by EachCheckWellFormed in adr-constraints.ncl):
# 'Cargo -> crate : String, forbidden_deps : Array String
# 'Grep -> pattern : String, paths : Array String, must_be_empty : Bool
# 'NuCmd -> cmd : String, expect_exit : Number
# 'ApiCall -> endpoint : String, json_path : String, expected : Dyn
# 'FileExists-> path : String, present : Bool
let constraint_check_type = {
tag | check_tag_type,
..
} in
let constraint_type = {
id | String,
claim | String,
scope | String,
severity | severity_type,
# Transition period: one of check or check_hint must be present.
# check_hint is deprecated — migrate existing ADRs to typed check variants.
check_hint | String | optional,
check | constraint_check_type | optional,
rationale | String,
} in
@ -52,7 +77,7 @@ let adr_type = {
consequences | consequences_type,
alternatives_considered | Array alternative_type,
constraints | Array constraint_type | c.NonEmptyConstraints | c.EachConstraintHasCheck,
ontology_check | ontology_check_type,
related_adrs | Array String | default = [],
@ -65,6 +90,7 @@ let adr_type = {
AdrStatus = status_type,
Severity = severity_type,
Verdict = verdict_type,
ConstraintCheck = constraint_check_type,
Constraint = constraint_type,
RationaleEntry = rationale_entry_type,
Alternative = alternative_type,

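The five tagged variants and their required fields (documented in the schema comments above) are JSON-serializable, so a consumer outside Nickel can model them as a closed enum. A hypothetical Rust-side mirror — field names taken from the schema comments, everything else illustrative (`validate.nu`, not this code, is the real executor; `krate` stands in for `crate`, a Rust keyword):

```rust
// Hypothetical Rust mirror of constraint_check_type: one variant per tag,
// with the required fields the EachCheckWellFormed contract demands.
pub enum ConstraintCheck {
    Cargo { krate: String, forbidden_deps: Vec<String> },
    Grep { pattern: String, paths: Vec<String>, must_be_empty: bool },
    NuCmd { cmd: String, expect_exit: i64 },
    ApiCall { endpoint: String, json_path: String, expected: String },
    FileExists { path: String, present: bool },
}

impl ConstraintCheck {
    /// The tag discriminant as it appears in the exported JSON —
    /// the same string validate.nu dispatches on.
    pub fn tag(&self) -> &'static str {
        match self {
            ConstraintCheck::Cargo { .. } => "Cargo",
            ConstraintCheck::Grep { .. } => "Grep",
            ConstraintCheck::NuCmd { .. } => "NuCmd",
            ConstraintCheck::ApiCall { .. } => "ApiCall",
            ConstraintCheck::FileExists { .. } => "FileExists",
        }
    }

    /// Polarity of a Grep check: must_be_empty = true means any match is a
    /// violation; false means at least one match is required. The match
    /// count is supplied by whatever ran the grep.
    pub fn grep_passes(&self, match_count: usize) -> Option<bool> {
        match self {
            ConstraintCheck::Grep { must_be_empty, .. } => {
                Some(if *must_be_empty { match_count == 0 } else { match_count > 0 })
            }
            _ => None,
        }
    }
}
```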

@ -1347,15 +1347,15 @@
data-key="ontoref-hero-highlight"
>Ontology + Reflection + Daemon + MCP</span
><span
data-en=" &mdash; encode what a system IS (invariants, tensions, constraints) and where it IS GOING (state dimensions, transition conditions, membranes) in machine-queryable directed acyclic graphs. Software projects, personal operational systems, agent contexts — same three files, same protocol. First-class web UI (12 pages), MCP server (19 tools), live session sharing. One protocol for developers, agents, CI, and individuals."
data-es=" &mdash; codifica lo que un sistema ES (invariantes, tensiones, constraints) y hacia d&oacute;nde VA (dimensiones de estado, condiciones de transici&oacute;n, membranas) en grafos ac&iacute;clicos dirigidos consultables por m&aacute;quina. Proyectos de software, sistemas operacionales personales, contextos de agente &mdash; los mismos tres ficheros, el mismo protocolo. UI web de primer nivel (12 p&aacute;ginas), servidor MCP (19 herramientas), compartici&oacute;n de sesiones en vivo. Un protocolo para desarrolladores, agentes, CI e individuos."
data-en=" &mdash; encode what a system IS (invariants, tensions, constraints) and where it IS GOING (state dimensions, transition conditions, membranes) in machine-queryable directed acyclic graphs. Software projects, personal operational systems, agent contexts — same three files, same protocol. First-class web UI (11 pages), MCP server (29 tools), live session sharing. One protocol for developers, agents, CI, and individuals."
data-es=" &mdash; codifica lo que un sistema ES (invariantes, tensiones, constraints) y hacia d&oacute;nde VA (dimensiones de estado, condiciones de transici&oacute;n, membranas) en grafos ac&iacute;clicos dirigidos consultables por m&aacute;quina. Proyectos de software, sistemas operacionales personales, contextos de agente &mdash; los mismos tres ficheros, el mismo protocolo. UI web de primer nivel (12 p&aacute;ginas), servidor MCP (29 herramientas), compartici&oacute;n de sesiones en vivo. Un protocolo para desarrolladores, agentes, CI e individuos."
data-key="ontoref-hero-desc"
>
&mdash; encode what a system IS (invariants, tensions, constraints)
and where it IS GOING (state dimensions, transition conditions,
membranes) in machine-queryable directed acyclic graphs. Software
projects, personal operational systems, agent contexts &mdash; same
three files, same protocol. First-class web UI (11 pages), MCP
server (29 tools), live session sharing. One protocol for
developers, agents, CI, and individuals.
</span>
@ -1831,12 +1831,12 @@
</div>
<div
class="layer-desc"
data-en="Optional persistent daemon. NCL export cache, HTTP UI (12 pages), MCP server (19 tools), actor registry, notification store, search engine, SurrealDB persistence. Never a protocol requirement."
data-es="Daemon persistente opcional. Cach&eacute; de exports NCL, UI HTTP (12 p&aacute;ginas), servidor MCP (19 herramientas), registro de actores, almac&eacute;n de notificaciones, motor de b&uacute;squeda, persistencia SurrealDB. Nunca un requisito del protocolo."
data-en="Optional persistent daemon. NCL export cache, HTTP UI (11 pages), MCP server (29 tools), actor registry, notification store, search engine, SurrealDB persistence. Never a protocol requirement."
data-es="Daemon persistente opcional. Cach&eacute; de exports NCL, UI HTTP (12 p&aacute;ginas), servidor MCP (29 herramientas), registro de actores, almac&eacute;n de notificaciones, motor de b&uacute;squeda, persistencia SurrealDB. Nunca un requisito del protocolo."
data-key="ontoref-layer-runtime-desc"
>
Optional persistent daemon. NCL export cache, HTTP UI (11 pages),
MCP server (29 tools), actor registry, notification store, search
engine, SurrealDB persistence. Never a protocol requirement.
</div>
</div>
@ -1962,7 +1962,7 @@
</h3>
<ul class="feature-text">
<li>
HTTP UI (axum + Tera): <strong>11 pages</strong> — dashboard,
graph, search, sessions, notifications, backlog, Q&amp;A,
actions, modes, compose, manage/login, manage/logout
</li>
@ -2124,14 +2124,16 @@
font-size: 0.95rem;
line-height: 1.7;
"
data-en="ontoref-daemon is an optional persistent process. It caches NCL exports, serves 12 UI pages, exposes 19 MCP tools, maintains an actor registry, stores notifications, indexes everything for search, and optionally persists to SurrealDB. Auth is opt-in: all surfaces (CLI, UI, MCP) exchange a project key for a UUID v4 session token via <code>POST /sessions</code>; CLI injects <code>ONTOREF_TOKEN</code> as Bearer automatically. It never changes the protocol — it accelerates and shares access to it. Configured via <code>~/.config/ontoref/config.ncl</code> (Nickel, type-checked); edit interactively with <code>ontoref config-edit</code>. Started via NCL pipe bootstrap: <code>ontoref-daemon-boot</code>."
data-es="ontoref-daemon es un proceso persistente opcional. Cachea exports NCL, sirve 12 páginas de UI, expone 19 herramientas MCP, mantiene un registro de actores, almacena notificaciones, indexa todo para búsqueda y opcionalmente persiste en SurrealDB. Auth es opt-in: todas las superficies (CLI, UI, MCP) intercambian una project key por un token de sesión UUID v4 via <code>POST /sessions</code>; la CLI inyecta <code>ONTOREF_TOKEN</code> como Bearer automáticamente. Nunca cambia el protocolo — acelera y comparte el acceso a él. Configurado via <code>~/.config/ontoref/config.ncl</code> (Nickel, type-checked); edición interactiva con <code>ontoref config-edit</code>. Iniciado via NCL pipe bootstrap: <code>ontoref-daemon-boot</code>."
data-en="ontoref-daemon is an optional persistent process. It caches NCL exports, serves 11 UI pages, exposes 29 MCP tools, maintains an actor registry, stores notifications, indexes everything for search, and optionally persists to SurrealDB. The annotated API surface is discoverable at <code>GET /api/catalog</code> (populated at link time via <code>#[onto_api]</code> proc-macro). Per-file ontology version counters track every hot reload. Auth is opt-in: all surfaces (CLI, UI, MCP) exchange a project key for a UUID v4 session token via <code>POST /sessions</code>; CLI injects <code>ONTOREF_TOKEN</code> as Bearer automatically. It never changes the protocol — it accelerates and shares access to it. Configured via <code>~/.config/ontoref/config.ncl</code> (Nickel, type-checked); edit interactively with <code>ontoref config-edit</code>. Started via NCL pipe bootstrap: <code>ontoref-daemon-boot</code>."
data-es="ontoref-daemon es un proceso persistente opcional. Cachea exports NCL, sirve 11 páginas de UI, expone 29 herramientas MCP, mantiene un registro de actores, almacena notificaciones, indexa todo para búsqueda y opcionalmente persiste en SurrealDB. La superficie API está documentada en <code>GET /api/catalog</code> (poblada en link time via macro <code>#[onto_api]</code>). Contadores de versión por fichero rastrean cada hot reload. Auth es opt-in: todas las superficies (CLI, UI, MCP) intercambian una project key por un token de sesión UUID v4 via <code>POST /sessions</code>; la CLI inyecta <code>ONTOREF_TOKEN</code> como Bearer automáticamente. Nunca cambia el protocolo — acelera y comparte el acceso a él. Configurado via <code>~/.config/ontoref/config.ncl</code> (Nickel, type-checked); edición interactiva con <code>ontoref config-edit</code>. Iniciado via NCL pipe bootstrap: <code>ontoref-daemon-boot</code>."
data-key="ontoref-mcp-core-desc"
>
<code>ontoref-daemon</code> is an optional persistent process. It
caches NCL exports, serves 11 UI pages, exposes 29 MCP tools,
maintains an actor registry, stores notifications, indexes everything
for search, and optionally persists to SurrealDB. The annotated API
surface is discoverable at <code>GET /api/catalog</code> (populated at
link time via <code>#[onto_api]</code> proc-macro). Auth is opt-in: all
surfaces (CLI, UI, MCP) exchange a project key for a UUID v4 session
token via <code>POST /sessions</code>; CLI injects
<code>ONTOREF_TOKEN</code> as Bearer automatically. It never changes


@ -36,6 +36,8 @@ bytes = { workspace = true }
hostname = { workspace = true }
reqwest = { workspace = true }
tokio-stream = { version = "0.1", features = ["sync"] }
inventory = { workspace = true }
ontoref-derive = { path = "../ontoref-derive" }
[target.'cfg(unix)'.dependencies]
libc = { workspace = true }

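The `params` strings in the `#[onto_api]` annotations follow a small `name:type:requirement:description` convention, with entries separated by `;` and `default=<value>` allowed in the requirement slot. A hedged sketch of a parser for that mini-format — the real proc-macro may well store the string opaquely, so this only illustrates the convention:

```rust
// Illustrative parser for the #[onto_api] params mini-format:
//   "name:type:requirement:description; name:type:requirement:description"
// where requirement is "required", "optional", or "default=<value>".
#[derive(Debug, PartialEq)]
pub struct ParamSpec {
    pub name: String,
    pub ty: String,
    pub requirement: String,
    pub description: String,
}

pub fn parse_params(raw: &str) -> Vec<ParamSpec> {
    raw.split(';')
        .map(str::trim)
        .filter(|entry| !entry.is_empty())
        .filter_map(|entry| {
            // At most 4 fields, so colons inside the description survive.
            let mut it = entry.splitn(4, ':');
            Some(ParamSpec {
                name: it.next()?.to_string(),
                ty: it.next()?.to_string(),
                requirement: it.next()?.to_string(),
                description: it.next().unwrap_or("").to_string(),
            })
        })
        .collect()
}
```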

@ -283,6 +283,7 @@ pub(crate) fn extract_bearer(headers: &axum::http::HeaderMap) -> Option<&str> {
pub fn router(state: AppState) -> axum::Router {
let app = axum::Router::new()
// Existing endpoints
.route("/api/catalog", get(api_catalog_handler))
.route("/health", get(health))
.route("/nickel/export", post(nickel_export))
.route("/cache/stats", get(cache_stats))
@ -307,11 +308,16 @@ pub fn router(state: AppState) -> axum::Router {
.route("/describe/capabilities", get(describe_capabilities))
.route("/describe/connections", get(describe_connections))
.route("/describe/actor-init", get(describe_actor_init))
.route("/describe/guides", get(describe_guides))
// ADR read + validation endpoints
.route("/validate/adrs", get(validate_adrs))
.route("/adr/{id}", get(get_adr))
// Ontology extension endpoints
.route("/ontology", get(list_ontology_extensions))
.route("/ontology/{file}", get(get_ontology_extension))
// Graph endpoints (impact analysis + federation)
.route("/graph/impact", get(graph_impact))
.route("/graph/node/{id}", get(graph_node))
// Backlog JSON endpoint
.route("/backlog-json", get(backlog_json))
// Q&A read endpoint
@ -327,6 +333,11 @@ pub fn router(state: AppState) -> axum::Router {
// Runtime key rotation for registered projects.
// Requires Bearer token with admin role (or no auth if project has no keys yet).
.route("/projects/{slug}/keys", put(project_update_keys))
// Per-file ontology version counters — incremented on every cache invalidation.
.route(
"/projects/{slug}/ontology/versions",
get(project_file_versions),
)
// Project registry management.
.route("/projects", get(projects_list).post(project_add))
.route("/projects/{slug}", delete(project_delete));
@ -404,6 +415,29 @@ struct HealthResponse {
db_enabled: Option<bool>,
}
/// Return the full API catalog — all endpoints registered via `#[onto_api]`,
/// sorted by path then method.
#[ontoref_derive::onto_api(
method = "GET",
path = "/api/catalog",
description = "Full catalog of daemon HTTP endpoints with metadata: auth, actors, params, tags",
auth = "none",
actors = "agent, developer, ci, admin",
tags = "meta, catalog"
)]
async fn api_catalog_handler() -> impl IntoResponse {
let routes = crate::api_catalog::catalog();
Json(serde_json::json!({ "count": routes.len(), "routes": routes }))
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/health",
description = "Daemon health check: uptime, version, feature flags, active projects",
auth = "none",
actors = "agent, developer, ci, admin",
tags = "meta"
)]
async fn health(State(state): State<AppState>) -> Json<HealthResponse> {
state.touch_activity();
let db_enabled = {
@ -448,6 +482,16 @@ struct ExportResponse {
elapsed_ms: u64,
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/nickel/export",
description = "Export a Nickel file to JSON, using the cache when the file is unchanged",
auth = "viewer",
actors = "developer, agent",
params = "file:string:required:Absolute path to the .ncl file to export; \
import_path:string:optional:NICKEL_IMPORT_PATH override",
tags = "nickel, cache"
)]
async fn nickel_export(
State(state): State<AppState>,
headers: axum::http::HeaderMap,
@ -520,6 +564,14 @@ struct CacheStatsResponse {
hit_rate: f64,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/cache/stats",
description = "NCL export cache statistics: entry count, hit/miss counters",
auth = "viewer",
actors = "developer, admin",
tags = "cache, meta"
)]
async fn cache_stats(State(state): State<AppState>) -> Json<CacheStatsResponse> {
state.touch_activity();
let hits = state.cache.hit_count();
@ -553,6 +605,15 @@ struct InvalidateResponse {
entries_remaining: usize,
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/cache/invalidate",
description = "Invalidate one or all NCL cache entries, forcing re-export on next request",
auth = "admin",
actors = "developer, admin",
params = "file:string:optional:Specific file path to invalidate (omit to invalidate all)",
tags = "cache"
)]
async fn cache_invalidate(
State(state): State<AppState>,
headers: axum::http::HeaderMap,
@ -617,6 +678,17 @@ struct RegisterResponse {
actors_connected: usize,
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/actors/register",
description = "Register an actor session and receive a bearer token for subsequent calls",
auth = "none",
actors = "agent, developer, ci",
params = "actor:string:required:Actor type (agent|developer|ci|admin); \
project:string:optional:Project slug to associate with; label:string:optional:Human \
label for audit trail",
tags = "actors, auth"
)]
async fn actor_register(
State(state): State<AppState>,
Json(req): Json<RegisterRequest>,
@ -650,6 +722,14 @@ async fn actor_register(
)
}
#[ontoref_derive::onto_api(
method = "DELETE",
path = "/actors/{token}",
description = "Deregister an actor session and invalidate its bearer token",
auth = "none",
actors = "agent, developer, ci",
tags = "actors, auth"
)]
async fn actor_deregister(State(state): State<AppState>, Path(token): Path<String>) -> StatusCode {
state.touch_activity();
if state.actors.deregister(&token) {
@ -667,6 +747,14 @@ async fn actor_deregister(State(state): State<AppState>, Path(token): Path<Strin
}
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/actors/{token}/touch",
description = "Extend actor session TTL; prevents the session from expiring due to inactivity",
auth = "none",
actors = "agent, developer, ci",
tags = "actors"
)]
async fn actor_touch(State(state): State<AppState>, Path(token): Path<String>) -> StatusCode {
state.touch_activity();
if state.actors.touch(&token) {
@ -684,6 +772,14 @@ struct ProfileRequest {
preferences: Option<serde_json::Value>,
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/actors/{token}/profile",
description = "Update actor profile metadata: display name, role, and custom context fields",
auth = "none",
actors = "agent, developer",
tags = "actors"
)]
async fn actor_update_profile(
State(state): State<AppState>,
Path(token): Path<String>,
@ -718,6 +814,16 @@ struct ActorsQuery {
project: Option<String>,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/actors",
description = "List all registered actor sessions with their last-seen timestamp and pending \
notification count",
auth = "viewer",
actors = "developer, admin",
params = "project:string:optional:Filter by project slug",
tags = "actors"
)]
async fn actors_list(
State(state): State<AppState>,
Query(query): Query<ActorsQuery>,
@ -752,6 +858,16 @@ struct PendingResponse {
notifications: Option<Vec<NotificationView>>,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/notifications/pending",
description = "Poll pending notifications for an actor; optionally marks them as seen",
auth = "none",
actors = "agent, developer, ci",
params = "token:string:required:Actor bearer token; project:string:optional:Project slug \
filter; check_only:bool:default=false:Return count without marking seen",
tags = "notifications"
)]
async fn notifications_pending(
State(state): State<AppState>,
Query(query): Query<PendingQuery>,
@ -793,6 +909,16 @@ struct AckResponse {
acknowledged: usize,
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/notifications/ack",
description = "Acknowledge one or more notifications; removes them from the pending queue",
auth = "none",
actors = "agent, developer, ci",
params = "token:string:required:Actor bearer token; ids:string:required:Comma-separated \
notification ids to acknowledge",
tags = "notifications"
)]
async fn notifications_ack(
State(state): State<AppState>,
Json(req): Json<AckRequest>,
@ -841,6 +967,17 @@ struct StreamQuery {
/// `NotificationView`. Clients receive push notifications without polling.
/// Reconnects automatically pick up new events (no replay of missed events —
/// use `/notifications/pending` for that).
#[ontoref_derive::onto_api(
method = "GET",
path = "/notifications/stream",
description = "SSE push stream: actor subscribes once and receives notification events as \
they occur",
auth = "none",
actors = "agent, developer",
params = "token:string:required:Actor bearer token; project:string:optional:Project slug \
filter",
tags = "notifications, sse"
)]
async fn notifications_stream(
State(state): State<AppState>,
Query(params): Query<StreamQuery>,
@ -919,6 +1056,17 @@ struct OntologyChangedRequest {
/// Called by git hooks (post-merge, post-commit) so the daemon knows *who*
/// caused the change. Creates a notification with `source_actor` set, enabling
/// multi-actor coordination UIs to display attribution.
#[ontoref_derive::onto_api(
method = "POST",
path = "/ontology/changed",
description = "Git hook endpoint: actor signs a file-change event it caused to suppress \
self-notification",
auth = "viewer",
actors = "developer, ci",
params = "token:string:required:Actor bearer token; files:string:required:JSON array of \
changed file paths",
tags = "ontology, notifications"
)]
async fn ontology_changed(
State(state): State<AppState>,
Json(req): Json<OntologyChangedRequest>,
@ -1008,6 +1156,16 @@ struct SearchResponse {
results: Vec<crate::search::SearchResult>,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/search",
description = "Full-text search over ontology nodes, ADRs, practices and Q&A entries",
auth = "none",
actors = "agent, developer",
params = "q:string:required:Search query string; slug:string:optional:Project slug (ui \
feature only)",
tags = "search"
)]
async fn search(
State(state): State<AppState>,
Query(params): Query<SearchQuery>,
@ -1067,6 +1225,12 @@ struct ActorInitQuery {
slug: Option<String>,
}
#[derive(Deserialize)]
struct GuidesQuery {
slug: Option<String>,
actor: Option<String>,
}
/// Resolve project context from an optional slug.
/// Falls back to the primary project when slug is absent or not found in
/// registry.
@ -1099,6 +1263,16 @@ fn resolve_project_ctx(
)
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/describe/project",
description = "Project self-description: identity, axioms, tensions, practices, gates, ADRs, \
dimensions",
auth = "none",
actors = "agent, developer, ci, admin",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "describe, ontology"
)]
async fn describe_project(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
@ -1124,6 +1298,16 @@ async fn describe_project(
}
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/describe/connections",
description = "Cross-project connection declarations: upstream, downstream, peers with \
addressing",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "describe, federation"
)]
async fn describe_connections(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
@ -1149,6 +1333,228 @@ async fn describe_connections(
}
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/validate/adrs",
description = "Execute typed ADR constraint checks and return per-constraint pass/fail results",
auth = "viewer",
actors = "developer, ci, agent",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "validate, adrs"
)]
async fn validate_adrs(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
) -> impl IntoResponse {
state.touch_activity();
let (root, _cache, _import_path) = resolve_project_ctx(&state, q.slug.as_deref());
let output = match tokio::process::Command::new("nu")
.args([
"--no-config-file",
"-c",
"use reflection/modules/validate.nu *; validate check-all --fmt json",
])
.current_dir(&root)
.output()
.await
{
Ok(o) => o,
Err(e) => {
error!(error = %e, "validate_adrs: failed to spawn nu");
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({ "error": format!("spawn failed: {e}") })),
);
}
};
let stdout = String::from_utf8_lossy(&output.stdout);
match serde_json::from_str::<serde_json::Value>(stdout.trim()) {
Ok(v) => (StatusCode::OK, Json(v)),
Err(e) => {
let stderr = String::from_utf8_lossy(&output.stderr);
error!(error = %e, stderr = %stderr, "validate_adrs: nu output is not valid JSON");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({
"error": format!("invalid JSON from validate: {e}"),
"stderr": stderr.trim(),
})),
)
}
}
}
#[derive(Deserialize)]
struct ImpactQuery {
slug: Option<String>,
node: String,
#[serde(default = "default_depth")]
depth: u32,
#[serde(default)]
include_external: bool,
}
fn default_depth() -> u32 {
2
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/graph/impact",
description = "BFS impact graph from an ontology node; optionally traverses cross-project \
connections",
auth = "none",
actors = "agent, developer",
params = "node:string:required:Ontology node id to start from; depth:u32:default=2:Max BFS \
hops (capped at 5); include_external:bool:default=false:Follow connections.ncl to \
external projects; slug:string:optional:Project slug (defaults to primary)",
tags = "graph, federation"
)]
async fn graph_impact(
State(state): State<AppState>,
Query(q): Query<ImpactQuery>,
) -> impl IntoResponse {
state.touch_activity();
let effective_slug = q
.slug
.clone()
.unwrap_or_else(|| state.registry.primary_slug().to_owned());
let fed = crate::federation::FederatedQuery::new(Arc::clone(&state.registry));
let impacts = fed
.impact_graph(&effective_slug, &q.node, q.depth, q.include_external)
.await;
(
StatusCode::OK,
Json(serde_json::json!({
"slug": effective_slug,
"node": q.node,
"depth": q.depth,
"include_external": q.include_external,
"impacts": impacts,
})),
)
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/graph/node/{id}",
description = "Resolve a single ontology node by id from the local cache (used by federation)",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "graph, federation"
)]
async fn graph_node(
State(state): State<AppState>,
Path(id): Path<String>,
Query(q): Query<DescribeQuery>,
) -> impl IntoResponse {
state.touch_activity();
let (root, cache, import_path) = resolve_project_ctx(&state, q.slug.as_deref());
let core_path = root.join(".ontology").join("core.ncl");
if !core_path.exists() {
return (
StatusCode::NOT_FOUND,
Json(serde_json::json!({ "error": "core.ncl not found" })),
);
}
match cache.export(&core_path, import_path.as_deref()).await {
Ok((json, _)) => {
let node = json
.get("nodes")
.and_then(|n| n.as_array())
.and_then(|nodes| {
nodes
.iter()
.find(|n| n.get("id").and_then(|v| v.as_str()) == Some(id.as_str()))
.cloned()
});
match node {
Some(n) => (StatusCode::OK, Json(n)),
None => (
StatusCode::NOT_FOUND,
Json(serde_json::json!({ "error": format!("node '{}' not found", id) })),
),
}
}
Err(e) => {
error!(node = %id, error = %e, "graph_node: core export failed");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({ "error": e.to_string() })),
)
}
}
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/describe/guides",
description = "Complete operational context for an actor: identity, axioms, practices, \
constraints, gate state, modes, actor policy, connections, content assets",
auth = "none",
actors = "agent, developer, ci",
params = "slug:string:optional:Project slug (defaults to primary); \
actor:string:optional:Actor context filters the policy (agent|developer|ci|admin)",
tags = "describe, guides"
)]
async fn describe_guides(
State(state): State<AppState>,
Query(q): Query<GuidesQuery>,
) -> impl IntoResponse {
state.touch_activity();
let (root, _cache, _import_path) = resolve_project_ctx(&state, q.slug.as_deref());
let actor = q.actor.as_deref().unwrap_or("developer");
let nu_cmd = format!(
"use reflection/modules/describe.nu *; describe guides --actor {} --fmt json",
actor,
);
let output = match tokio::process::Command::new("nu")
.args(["--no-config-file", "-c", &nu_cmd])
.current_dir(&root)
.output()
.await
{
Ok(o) => o,
Err(e) => {
error!(error = %e, "describe_guides: failed to spawn nu");
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({ "error": format!("spawn failed: {e}") })),
);
}
};
let stdout = String::from_utf8_lossy(&output.stdout);
match serde_json::from_str::<serde_json::Value>(stdout.trim()) {
Ok(v) => (StatusCode::OK, Json(v)),
Err(e) => {
let stderr = String::from_utf8_lossy(&output.stderr);
error!(error = %e, stderr = %stderr, "describe_guides: nu output is not valid JSON");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({
"error": format!("invalid JSON from describe guides: {e}"),
"stderr": stderr.trim(),
})),
)
}
}
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/describe/capabilities",
description = "Available reflection modes, just recipes, Claude capabilities and CI tools for \
the project",
auth = "none",
actors = "agent, developer, ci",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "describe"
)]
async fn describe_capabilities(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
@ -1237,6 +1643,16 @@ async fn describe_capabilities(
)
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/describe/actor-init",
description = "Minimal onboarding payload for a new actor session: what to register as and \
what to do first",
auth = "none",
actors = "agent",
params = "actor:string:optional:Actor type to onboard as; slug:string:optional:Project slug",
tags = "describe, actors"
)]
async fn describe_actor_init(
State(state): State<AppState>,
Query(q): Query<ActorInitQuery>,
@ -1295,6 +1711,15 @@ struct AdrQuery {
slug: Option<String>,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/adr/{id}",
description = "Read a single ADR by id, exported from NCL as structured JSON",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "adrs"
)]
async fn get_adr(
State(state): State<AppState>,
Path(id): Path<String>,
@ -1351,6 +1776,15 @@ struct OntologyQuery {
slug: Option<String>,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/ontology",
description = "List available ontology extension files beyond core, state, gate, manifest",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "ontology"
)]
async fn list_ontology_extensions(
State(state): State<AppState>,
Query(q): Query<OntologyQuery>,
@ -1392,6 +1826,15 @@ async fn list_ontology_extensions(
)
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/ontology/{file}",
description = "Export a specific ontology extension file to JSON",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "ontology"
)]
async fn get_ontology_extension(
State(state): State<AppState>,
Path(file): Path<String>,
@ -1433,6 +1876,15 @@ async fn get_ontology_extension(
}
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/backlog-json",
description = "Export the project backlog as structured JSON from reflection/backlog.ncl",
auth = "viewer",
actors = "developer, agent",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "backlog"
)]
async fn backlog_json(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
@ -1466,6 +1918,15 @@ async fn backlog_json(
// ── Q&A endpoints ───────────────────────────────────────────────────────
#[ontoref_derive::onto_api(
method = "GET",
path = "/qa-json",
description = "Export the Q&A knowledge store as structured JSON from reflection/qa.ncl",
auth = "none",
actors = "agent, developer",
params = "slug:string:optional:Project slug (defaults to primary)",
tags = "qa"
)]
async fn qa_json(
State(state): State<AppState>,
Query(q): Query<DescribeQuery>,
@ -1511,6 +1972,14 @@ struct ProjectView {
ontology_version: u64,
}
#[ontoref_derive::onto_api(
method = "GET",
path = "/projects",
description = "List all registered projects with slug, root, push_only flag and import path",
auth = "admin",
actors = "admin",
tags = "projects, registry"
)]
async fn projects_list(State(state): State<AppState>) -> impl IntoResponse {
use std::sync::atomic::Ordering;
state.touch_activity();
@ -1555,6 +2024,14 @@ fn validate_slug(slug: &str) -> std::result::Result<(), (StatusCode, String)> {
Ok(())
}
#[ontoref_derive::onto_api(
method = "POST",
path = "/projects",
description = "Register a new project at runtime without daemon restart",
auth = "admin",
actors = "admin",
tags = "projects, registry"
)]
async fn project_add(
State(state): State<AppState>,
Json(entry): Json<crate::registry::RegistryEntry>,
@ -1590,6 +2067,14 @@ async fn project_add(
.into_response()
}
#[ontoref_derive::onto_api(
method = "DELETE",
path = "/projects/{slug}",
description = "Deregister a project and stop its file watcher",
auth = "admin",
actors = "admin",
tags = "projects, registry"
)]
async fn project_delete(
State(state): State<AppState>,
Path(slug): Path<String>,
@ -1636,6 +2121,15 @@ struct UpdateKeysResponse {
/// - If the project has no keys yet (bootstrap case), the request is accepted
/// without credentials — the daemon is loopback-only, so OS-level access
/// controls apply.
#[ontoref_derive::onto_api(
method = "PUT",
path = "/projects/{slug}/keys",
description = "Hot-rotate credentials for a project; invalidates all existing actor and UI \
sessions",
auth = "admin",
actors = "admin",
tags = "projects, auth"
)]
async fn project_update_keys(
State(state): State<AppState>,
headers: axum::http::HeaderMap,
@ -1723,6 +2217,46 @@ async fn project_update_keys(
.into_response()
}
/// Return per-file ontology version counters for a registered project.
///
/// Each counter is incremented every time the watcher invalidates that specific
/// file in the NCL cache. Clients can snapshot and compare between polls to
/// detect which individual files changed, without re-fetching all content.
#[ontoref_derive::onto_api(
method = "GET",
path = "/projects/{slug}/ontology/versions",
description = "Per-file ontology change counters for a project; incremented on every cache \
invalidation",
auth = "none",
actors = "agent, developer",
tags = "projects, ontology, cache"
)]
async fn project_file_versions(
State(state): State<AppState>,
Path(slug): Path<String>,
) -> impl IntoResponse {
let Some(ctx) = state.registry.get(&slug) else {
return (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": format!("project '{slug}' not registered")})),
)
.into_response();
};
let versions: std::collections::BTreeMap<String, u64> = ctx
.file_versions
.iter()
.map(|r| (r.key().display().to_string(), *r.value()))
.collect();
Json(serde_json::json!({
"slug": slug,
"global_version": ctx.ontology_version.load(std::sync::atomic::Ordering::Acquire),
"files": versions,
}))
.into_response()
}
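Clients of this endpoint detect changes by snapshotting the counters between polls. A std-only sketch of that comparison, assuming the `files` map deserializes to a `BTreeMap<String, u64>` matching the handler above:

```rust
use std::collections::BTreeMap;

/// Return the files whose counter advanced (or newly appeared) since `prev`.
/// Mirrors how a client would diff two /ontology/versions snapshots.
fn changed_files(prev: &BTreeMap<String, u64>, cur: &BTreeMap<String, u64>) -> Vec<String> {
    cur.iter()
        // A file changed if it is new, or its counter is strictly greater.
        .filter(|(path, ver)| prev.get(*path).map_or(true, |old| **ver > *old))
        .map(|(path, _)| path.clone())
        .collect()
}

fn main() {
    let mut prev = BTreeMap::new();
    prev.insert(".ontology/core.ncl".to_owned(), 3u64);
    prev.insert(".ontology/state.ncl".to_owned(), 1);
    let mut cur = prev.clone();
    cur.insert(".ontology/core.ncl".to_owned(), 4); // watcher bumped core
    assert_eq!(changed_files(&prev, &cur), vec![".ontology/core.ncl".to_owned()]);
}
```

Unchanged counters drop out of the result, so a poller only re-fetches the files it actually needs.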
/// Exchange a key for a session token.
///
/// Accepts project keys (looked up by slug) or the daemon admin password.

crates/ontoref-daemon/src/api_catalog.rs (new file)

@ -0,0 +1,40 @@
/// A single query/path/body parameter declared on an API route.
#[derive(serde::Serialize, Clone)]
pub struct ApiParam {
pub name: &'static str,
/// Rust-like type hint: string | u32 | bool | i64 | json.
pub kind: &'static str,
/// "required" | "optional" | "default=<value>"
pub constraint: &'static str,
pub description: &'static str,
}
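The `params` strings in the `#[onto_api]` attributes above pack each parameter as `name:kind:constraint:description`, with entries separated by `;`. A hypothetical std-only parser for that grammar (the proc-macro's actual splitting code is not part of this diff):

```rust
/// Parse a packed params string of the form
/// "name:kind:constraint:description; name:kind:constraint:description".
/// Only the first three ':' separators are significant; the description
/// keeps any later colons. Naive split on ';' assumes descriptions
/// contain no semicolons. Hypothetical sketch, not the macro's code.
fn parse_params(raw: &str) -> Vec<(String, String, String, String)> {
    raw.split(';')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .filter_map(|entry| {
            let mut parts = entry.splitn(4, ':');
            Some((
                parts.next()?.trim().to_owned(),
                parts.next()?.trim().to_owned(),
                parts.next()?.trim().to_owned(),
                parts.next().unwrap_or("").trim().to_owned(),
            ))
        })
        .collect()
}

fn main() {
    let parsed = parse_params(
        "file:string:required:Absolute path to the .ncl file; \
         import_path:string:optional:NICKEL_IMPORT_PATH override",
    );
    assert_eq!(parsed.len(), 2);
    assert_eq!(parsed[0].0, "file");
    assert_eq!(parsed[1].2, "optional");
}
```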
/// Static metadata for a daemon HTTP endpoint.
///
/// Registered at link time via [`inventory::submit!`] — generated by
/// `#[onto_api(...)]` proc-macro attribute on each handler function.
/// Collected by [`GET /api/catalog`](super::api_catalog_handler).
#[derive(serde::Serialize, Clone)]
pub struct ApiRouteEntry {
pub method: &'static str,
pub path: &'static str,
pub description: &'static str,
/// Authentication required: "none" | "viewer" | "admin"
pub auth: &'static str,
/// Which actors typically call this endpoint.
pub actors: &'static [&'static str],
pub params: &'static [ApiParam],
/// Semantic grouping tags (e.g. "graph", "federation", "describe").
pub tags: &'static [&'static str],
/// Non-empty when the endpoint is only compiled under a feature flag.
pub feature: &'static str,
}
inventory::collect!(ApiRouteEntry);
/// Return the full API catalog sorted by path then method.
pub fn catalog() -> Vec<&'static ApiRouteEntry> {
let mut routes: Vec<&'static ApiRouteEntry> = inventory::iter::<ApiRouteEntry>().collect();
routes.sort_by(|a, b| a.path.cmp(b.path).then(a.method.cmp(b.method)));
routes
}
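Consumers typically filter the catalog by auth level or tag before display, as the API page and the MCP catalog tool do. A std-only sketch of filter-then-sort over trimmed entries, keeping the same path-then-method ordering contract as `catalog()` (field set assumed from `ApiRouteEntry`):

```rust
/// A catalog entry trimmed to the fields the filters use.
/// Std-only illustration, not the daemon's ApiRouteEntry.
#[derive(Debug, Clone, PartialEq)]
struct Route {
    method: &'static str,
    path: &'static str,
    auth: &'static str,
    tags: &'static [&'static str],
}

/// Keep routes matching the optional auth level and tag, then sort by
/// path, then method, mirroring the ordering of `catalog()`.
fn filter_catalog(mut routes: Vec<Route>, auth: Option<&str>, tag: Option<&str>) -> Vec<Route> {
    routes.retain(|r| auth.map_or(true, |a| r.auth == a));
    routes.retain(|r| tag.map_or(true, |t| r.tags.iter().any(|&x| x == t)));
    routes.sort_by(|a, b| a.path.cmp(b.path).then(a.method.cmp(b.method)));
    routes
}

fn main() {
    let routes = vec![
        Route { method: "POST", path: "/cache/invalidate", auth: "admin", tags: &["cache"] },
        Route { method: "GET", path: "/cache/stats", auth: "viewer", tags: &["cache", "meta"] },
        Route { method: "GET", path: "/api/catalog", auth: "none", tags: &["meta"] },
    ];
    let cache_only = filter_catalog(routes, None, Some("cache"));
    assert_eq!(cache_only.len(), 2);
    assert_eq!(cache_only[0].path, "/cache/invalidate");
}
```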

crates/ontoref-daemon/src/federation.rs (new file)

@ -0,0 +1,419 @@
use std::collections::{HashMap, HashSet, VecDeque};
use std::sync::Arc;
use std::time::Duration;
use serde_json::Value;
use tracing::{debug, warn};
use crate::registry::{ProjectContext, ProjectRegistry};
/// Maximum cross-project traversal depth to prevent unbounded recursion.
const MAX_FEDERATION_DEPTH: u32 = 5;
/// HTTP timeout for remote project queries.
const REMOTE_TIMEOUT: Duration = Duration::from_secs(5);
/// A node resolved from a potentially remote project.
#[derive(Debug, Clone, serde::Serialize)]
pub struct FederatedNode {
pub slug: String,
pub node_id: String,
pub node: Value,
pub depth: u32,
pub via: String,
}
/// Cross-project impact graph entry.
#[derive(Debug, Clone, serde::Serialize)]
pub struct ImpactEntry {
pub slug: String,
pub node_id: String,
pub node_name: String,
pub depth: u32,
pub direction: String,
pub via: String,
}
/// Mutable traversal state threaded through BFS helpers.
struct Traversal<'a> {
visited: &'a mut HashSet<String>,
queue: &'a mut VecDeque<(String, String, u32)>,
results: &'a mut Vec<ImpactEntry>,
}
/// Resolves nodes and builds cross-project impact graphs.
pub struct FederatedQuery {
registry: Arc<ProjectRegistry>,
client: reqwest::Client,
}
impl FederatedQuery {
pub fn new(registry: Arc<ProjectRegistry>) -> Self {
let client = reqwest::Client::builder()
.timeout(REMOTE_TIMEOUT)
.build()
.unwrap_or_default();
Self { registry, client }
}
/// Resolve a node by `(slug, node_id)`.
///
/// - Local slug with filesystem access: NclCache lookup.
/// - Push-only slug with `remote_url`: HTTP GET
/// `{remote_url}/graph/node/{node_id}`.
/// - Unknown slug: `None` with a warning.
pub async fn resolve(&self, slug: &str, node_id: &str) -> Option<FederatedNode> {
let ctx = self.registry.get(slug)?;
if !ctx.push_only {
return resolve_local(&ctx, slug, node_id).await;
}
if ctx.remote_url.is_empty() {
warn!(
slug,
node_id, "push_only project has no remote_url — cannot resolve node"
);
return None;
}
resolve_remote(&self.client, &ctx.remote_url, slug, node_id).await
}
/// Build a cross-project impact graph starting from `(slug, node_id)`.
///
/// Traverses local ontology edges up to `max_depth` hops. When
/// `include_external` is set, also follows `connections.ncl` entries to
/// external projects.
///
/// Anti-cycle: visited set keyed by `"slug:node_id"` prevents re-traversal.
pub async fn impact_graph(
&self,
start_slug: &str,
start_node: &str,
max_depth: u32,
include_external: bool,
) -> Vec<ImpactEntry> {
let depth = max_depth.min(MAX_FEDERATION_DEPTH);
let mut visited: HashSet<String> = HashSet::new();
let mut queue: VecDeque<(String, String, u32)> = VecDeque::new();
let mut results: Vec<ImpactEntry> = Vec::new();
visited.insert(format!("{start_slug}:{start_node}"));
queue.push_back((start_slug.to_owned(), start_node.to_owned(), 0));
while let Some((slug, node_id, current_depth)) = queue.pop_front() {
if current_depth >= depth {
continue;
}
let mut t = Traversal {
visited: &mut visited,
queue: &mut queue,
results: &mut results,
};
self.expand_local(&slug, &node_id, current_depth, &mut t)
.await;
if include_external {
self.expand_external(&slug, &node_id, current_depth, &mut t)
.await;
}
}
results
}
async fn expand_local(
&self,
slug: &str,
node_id: &str,
current_depth: u32,
t: &mut Traversal<'_>,
) {
let Some(ctx) = self.registry.get(slug) else {
return;
};
if ctx.push_only {
return;
}
let core_path = ctx.root.join(".ontology").join("core.ncl");
let Ok((json, _)) = ctx
.cache
.export(&core_path, ctx.import_path.as_deref())
.await
else {
return;
};
let edges = json
.get("edges")
.and_then(|e| e.as_array())
.cloned()
.unwrap_or_default();
let nodes = json
.get("nodes")
.and_then(|n| n.as_array())
.cloned()
.unwrap_or_default();
let node_map: HashMap<&str, &Value> = nodes
.iter()
.filter_map(|n| Some((n.get("id")?.as_str()?, n)))
.collect();
let next_depth = current_depth + 1;
for entry in collect_edge_entries(slug, node_id, &edges, &node_map) {
let key = format!("{}:{}", entry.slug, entry.node_id);
if t.visited.contains(&key) {
continue;
}
t.visited.insert(key);
t.queue
.push_back((entry.slug.clone(), entry.node_id.clone(), next_depth));
t.results.push(ImpactEntry {
depth: next_depth,
..entry
});
}
}
async fn expand_external(
&self,
slug: &str,
_node_id: &str,
current_depth: u32,
t: &mut Traversal<'_>,
) {
let Some(ctx) = self.registry.get(slug) else {
return;
};
if ctx.push_only {
return;
}
let conn_path = ctx.root.join(".ontology").join("connections.ncl");
if !conn_path.exists() {
return;
}
let Ok((conn_json, _)) = ctx
.cache
.export(&conn_path, ctx.import_path.as_deref())
.await
else {
return;
};
let next_depth = current_depth + 1;
for direction in ["upstream", "downstream", "peers"] {
let conns = conn_json
.get(direction)
.and_then(|v| v.as_array())
.cloned()
.unwrap_or_default();
self.expand_connections(direction, &conns, next_depth, t)
.await;
}
}
async fn expand_connections(
&self,
direction: &str,
conns: &[Value],
next_depth: u32,
t: &mut Traversal<'_>,
) {
for conn in conns {
let target_slug = conn.get("project").and_then(|v| v.as_str()).unwrap_or("");
let target_node = conn.get("node").and_then(|v| v.as_str()).unwrap_or("");
let via = conn.get("via").and_then(|v| v.as_str()).unwrap_or("http");
if target_slug.is_empty() || target_node.is_empty() {
continue;
}
let key = format!("{target_slug}:{target_node}");
if t.visited.contains(&key) {
continue;
}
t.visited.insert(key);
if let Some(fed) = self.resolve(target_slug, target_node).await {
let name = fed
.node
.get("name")
.and_then(|v| v.as_str())
.unwrap_or(target_node)
.to_owned();
t.results.push(ImpactEntry {
slug: target_slug.to_owned(),
node_id: target_node.to_owned(),
node_name: name,
depth: next_depth,
direction: direction.to_owned(),
via: via.to_owned(),
});
t.queue
.push_back((target_slug.to_owned(), target_node.to_owned(), next_depth));
}
}
}
}
fn collect_edge_entries<'a>(
slug: &str,
node_id: &str,
edges: &'a [Value],
node_map: &HashMap<&str, &'a Value>,
) -> Vec<ImpactEntry> {
let mut out = Vec::new();
for edge in edges {
let from = edge.get("from").and_then(|v| v.as_str()).unwrap_or("");
let to = edge.get("to").and_then(|v| v.as_str()).unwrap_or("");
let (neighbor, direction) = if from == node_id && !to.is_empty() {
(to, "depends_on")
} else if to == node_id && !from.is_empty() {
(from, "depended_by")
} else {
continue;
};
let name = node_map
.get(neighbor)
.and_then(|n| n.get("name"))
.and_then(|v| v.as_str())
.unwrap_or(neighbor)
.to_owned();
out.push(ImpactEntry {
slug: slug.to_owned(),
node_id: neighbor.to_owned(),
node_name: name,
depth: 0, // caller overwrites
direction: direction.to_owned(),
via: "local".to_owned(),
});
}
out
}
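The direction rule in `collect_edge_entries` reads each edge relative to the focus node: an outgoing edge marks a dependency, an incoming edge a dependent. Isolated as a hypothetical std-only helper for clarity:

```rust
/// Classify an edge (from, to) relative to a focus node, using the same
/// rule as collect_edge_entries: outgoing edges are "depends_on",
/// incoming edges are "depended_by", empty endpoints are skipped.
fn classify<'a>(node: &str, from: &'a str, to: &'a str) -> Option<(&'a str, &'static str)> {
    if from == node && !to.is_empty() {
        Some((to, "depends_on"))
    } else if to == node && !from.is_empty() {
        Some((from, "depended_by"))
    } else {
        None
    }
}

fn main() {
    assert_eq!(classify("n", "n", "m"), Some(("m", "depends_on")));
    assert_eq!(classify("n", "m", "n"), Some(("m", "depended_by")));
    assert_eq!(classify("n", "x", "y"), None);
}
```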
async fn resolve_local(ctx: &ProjectContext, slug: &str, node_id: &str) -> Option<FederatedNode> {
let core_path = ctx.root.join(".ontology").join("core.ncl");
if !core_path.exists() {
return None;
}
let (json, _) = ctx
.cache
.export(&core_path, ctx.import_path.as_deref())
.await
.ok()?;
let node = json
.get("nodes")?
.as_array()?
.iter()
.find(|n| n.get("id").and_then(|v| v.as_str()) == Some(node_id))?
.clone();
debug!(slug, node_id, "federated node resolved from local cache");
Some(FederatedNode {
slug: slug.to_owned(),
node_id: node_id.to_owned(),
node,
depth: 0,
via: "local".to_owned(),
})
}
async fn resolve_remote(
client: &reqwest::Client,
remote_url: &str,
slug: &str,
node_id: &str,
) -> Option<FederatedNode> {
let url = format!("{remote_url}/graph/node/{node_id}");
debug!(slug, node_id, %url, "federated query to remote daemon");
let resp = client.get(&url).send().await.ok()?;
if !resp.status().is_success() {
warn!(slug, node_id, status = %resp.status(), "remote node fetch failed");
return None;
}
let json: Value = resp.json().await.ok()?;
Some(FederatedNode {
slug: slug.to_owned(),
node_id: node_id.to_owned(),
node: json,
depth: 0,
via: "http".to_owned(),
})
}
/// Validate all connections declared in a project's `connections.ncl`.
///
/// Returns warnings for unregistered slugs and node IDs that cannot be resolved
/// in the target project's local cache. Push-only targets skip node validation
/// since their cache is not accessible.
pub async fn validate_connections(registry: &Arc<ProjectRegistry>, slug: &str) -> Vec<String> {
let Some(ctx) = registry.get(slug) else {
return vec![format!("slug '{slug}' not found in registry")];
};
let conn_path = ctx.root.join(".ontology").join("connections.ncl");
if !conn_path.exists() {
return vec![];
}
let Ok((conn_json, _)) = ctx
.cache
.export(&conn_path, ctx.import_path.as_deref())
.await
else {
return vec![format!("failed to export connections.ncl for '{slug}'")];
};
let mut warnings = Vec::new();
for direction in ["upstream", "downstream", "peers"] {
let conns = conn_json
.get(direction)
.and_then(|v| v.as_array())
.cloned()
.unwrap_or_default();
for conn in &conns {
check_connection(registry, direction, conn, &mut warnings).await;
}
}
warnings
}
async fn check_connection(
registry: &Arc<ProjectRegistry>,
direction: &str,
conn: &Value,
warnings: &mut Vec<String>,
) {
let target = conn.get("project").and_then(|v| v.as_str()).unwrap_or("");
if target.is_empty() {
return;
}
let Some(target_ctx) = registry.get(target) else {
warnings.push(format!(
"{direction}: project '{target}' not registered in this daemon"
));
return;
};
if target_ctx.push_only {
return;
}
let target_node = conn.get("node").and_then(|v| v.as_str()).unwrap_or("");
if target_node.is_empty() {
return;
}
if !node_exists_in_cache(&target_ctx, target_node).await {
warnings.push(format!(
"{direction}: node '{target_node}' not found in '{target}' core.ncl"
));
}
}
async fn node_exists_in_cache(ctx: &ProjectContext, node_id: &str) -> bool {
let core_path = ctx.root.join(".ontology").join("core.ncl");
let Ok((json, _)) = ctx
.cache
.export(&core_path, ctx.import_path.as_deref())
.await
else {
return false;
};
json.get("nodes")
.and_then(|n| n.as_array())
.map(|nodes| {
nodes
.iter()
.any(|n| n.get("id").and_then(|v| v.as_str()) == Some(node_id))
})
.unwrap_or(false)
}
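The traversal shape shared by `impact_graph` and its helpers, a queue of `(node, depth)` pairs, a visited set keyed `slug:node_id`, and a hard depth cap, can be exercised std-only with a toy edge map standing in for the ontology export:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// BFS with a visited set keyed "slug:node_id" and a depth cap,
/// mirroring impact_graph's anti-cycle scheme. The edge map is a toy
/// stand-in for the exported core.ncl edges; single-slug sketch only.
fn impact_bfs(
    edges: &HashMap<&str, Vec<&str>>, // node_id -> neighbor node_ids
    slug: &str,
    start: &str,
    max_depth: u32,
) -> Vec<(String, u32)> {
    const CAP: u32 = 5; // mirrors MAX_FEDERATION_DEPTH
    let depth = max_depth.min(CAP);
    let mut visited: HashSet<String> = HashSet::new();
    let mut queue: VecDeque<(String, u32)> = VecDeque::new();
    let mut out = Vec::new();
    visited.insert(format!("{slug}:{start}"));
    queue.push_back((start.to_owned(), 0));
    while let Some((node, d)) = queue.pop_front() {
        if d >= depth {
            continue;
        }
        for &next in edges.get(node.as_str()).into_iter().flatten() {
            // insert returns false when the key was already present
            if !visited.insert(format!("{slug}:{next}")) {
                continue; // cycle guard
            }
            out.push((next.to_owned(), d + 1));
            queue.push_back((next.to_owned(), d + 1));
        }
    }
    out
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("a", vec!["b", "c"]);
    edges.insert("b", vec!["a"]); // cycle back to a
    let hits = impact_bfs(&edges, "proj", "a", 2);
    assert_eq!(hits, vec![("b".to_owned(), 1), ("c".to_owned(), 1)]);
}
```

The cycle back from `b` to `a` is absorbed by the visited set, so the result contains each reachable node exactly once.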

crates/ontoref-daemon/src/lib.rs

@ -1,7 +1,9 @@
pub mod actors;
pub mod api;
pub mod api_catalog;
pub mod cache;
pub mod error;
pub mod federation;
#[cfg(feature = "mcp")]
pub mod mcp;
#[cfg(feature = "nats")]

crates/ontoref-daemon/src/main.rs

@ -404,6 +404,7 @@ async fn runtime_watcher_task(
nats: nats.clone(),
seed_lock: std::sync::Arc::clone(&ctx.seed_lock),
ontology_version: std::sync::Arc::clone(&ctx.ontology_version),
file_versions: std::sync::Arc::clone(&ctx.file_versions),
};
match ontoref_daemon::watcher::FileWatcher::start(
&ctx.root,
@ -643,6 +644,7 @@ async fn main() {
let notifications = Arc::clone(&primary_ctx.notifications);
let primary_seed_lock = Arc::clone(&primary_ctx.seed_lock);
let primary_ontology_arc = Arc::clone(&primary_ctx.ontology_version);
let primary_file_versions_arc = Arc::clone(&primary_ctx.file_versions);
#[cfg(feature = "ui")]
let sessions = Arc::new(ontoref_daemon::session::SessionStore::new());
@ -740,6 +742,7 @@ async fn main() {
nats: nats_publisher.clone(),
seed_lock: Arc::clone(&primary_seed_lock),
ontology_version: Arc::clone(&primary_ontology_arc),
file_versions: Arc::clone(&primary_file_versions_arc),
};
let _watcher = match FileWatcher::start(
&project_root,
@ -800,6 +803,7 @@ async fn main() {
nats: nats_publisher.clone(),
seed_lock: Arc::clone(&ctx.seed_lock),
ontology_version: Arc::clone(&ctx.ontology_version),
file_versions: Arc::clone(&ctx.file_versions),
};
match FileWatcher::start(
&ctx.root,

crates/ontoref-daemon/src/mcp.rs

@ -57,6 +57,33 @@ struct ProjectParam {
project: Option<String>,
}
#[derive(Deserialize, JsonSchema, Default)]
struct GuidesInput {
/// Project slug. Omit to use the default project.
project: Option<String>,
/// Actor context for policy derivation: developer | agent | ci | admin.
/// Omit to use the detected actor.
actor: Option<String>,
}
#[derive(Deserialize, JsonSchema, Default)]
struct ApiCatalogInput {
/// Filter by actor: developer | agent | ci | admin. Omit to return all
/// routes.
actor: Option<String>,
/// Filter by tag (e.g. "ontology", "projects", "search"). Omit for all
/// tags.
tag: Option<String>,
/// Filter by auth level: none | viewer | admin. Omit for all auth levels.
auth: Option<String>,
}
#[derive(Deserialize, JsonSchema, Default)]
struct FileVersionsInput {
/// Project slug. Omit to use the default project.
project: Option<String>,
}
#[derive(Deserialize, JsonSchema, Default)]
struct SearchInput {
/// Full-text search query across ontology nodes, ADRs, and reflection
@ -243,6 +270,12 @@ impl OntoreServer {
.with_async_tool::<BookmarkAddTool>()
.with_async_tool::<ActionListTool>()
.with_async_tool::<ActionAddTool>()
.with_async_tool::<ValidateAdrsTool>()
.with_async_tool::<ValidateProjectTool>()
.with_async_tool::<ImpactTool>()
.with_async_tool::<GuidesTool>()
.with_async_tool::<ApiCatalogTool>()
.with_async_tool::<FileVersionsTool>()
}
fn project_ctx(&self, slug: Option<&str>) -> ProjectCtx {
@@ -1129,6 +1162,37 @@ impl AsyncTool<OntoreServer> for HelpTool {
{"name": "actors", "required": false, "note": "array: developer | agent | ci"},
{"name": "project", "required": false}
] },
{ "name": "ontoref_validate_adrs", "description": "Run typed constraint checks for all ADRs. Returns pass/fail per constraint with detail.",
"params": [{"name": "project", "required": false}] },
{ "name": "ontoref_validate", "description": "Run the full project validation suite: ADR constraints, content assets, connections, gate consistency.",
"params": [{"name": "project", "required": false}] },
{ "name": "ontoref_impact", "description": "BFS impact graph from a node: what else is affected if this node changes.",
"params": [
{"name": "node_id", "required": true},
{"name": "depth", "required": false, "default": 2},
{"name": "include_external", "required": false, "note": "traverse cross-project connections"},
{"name": "project", "required": false}
] },
{ "name": "ontoref_guides", "description": "Complete project guide: identity, axioms, practices, constraints, gate state, modes, actor policy, content assets, connections. Canonical entry point for cold-start context.",
"params": [{"name": "project", "required": false}, {"name": "actor", "required": false, "values": ["developer", "agent", "ci", "admin"]}] },
{ "name": "ontoref_bookmark_list", "description": "List saved search bookmarks for the project.",
"params": [{"name": "project", "required": false}, {"name": "filter", "required": false}] },
{ "name": "ontoref_bookmark_add", "description": "Save a search bookmark (node_id + title + optional tags).",
"params": [
{"name": "node_id", "required": true},
{"name": "title", "required": true},
{"name": "kind", "required": false, "values": ["node", "adr", "mode"]},
{"name": "tags", "required": false, "note": "array of strings"},
{"name": "project", "required": false}
] },
{ "name": "ontoref_api_catalog", "description": "Annotated daemon API surface: all HTTP routes with method, path, auth, actors, params, tags.",
"params": [
{"name": "actor", "required": false, "values": ["developer", "agent", "ci", "admin"]},
{"name": "tag", "required": false, "note": "e.g. ontology, projects, search"},
{"name": "auth", "required": false, "values": ["none", "viewer", "admin"]}
] },
{ "name": "ontoref_file_versions", "description": "Per-file reload counters for ontology files. Counter increments on each daemon reload of that file.",
"params": [{"name": "project", "required": false}] },
]);
Ok(serde_json::json!({
@@ -1883,6 +1947,414 @@ impl AsyncTool<OntoreServer> for ActionAddTool {
}
}
// ── Tool: validate_adrs
// ──────────────────────────────────────────────────────────────
struct ValidateAdrsTool;
impl ToolBase for ValidateAdrsTool {
type Parameter = ProjectParam;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_validate_adrs".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Run all typed constraint checks from accepted ADRs and return a structured \
compliance report with per-constraint pass/fail results."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for ValidateAdrsTool {
async fn invoke(
service: &OntoreServer,
param: ProjectParam,
) -> Result<serde_json::Value, ToolError> {
debug!(tool = "validate_adrs", project = ?param.project);
let ctx = service.project_ctx(param.project.as_deref());
let output = tokio::process::Command::new("nu")
.args([
"--no-config-file",
"-c",
"use reflection/modules/validate.nu *; validate check-all --fmt json",
])
.current_dir(&ctx.root)
.output()
.await
.map_err(|e| ToolError(format!("spawn failed: {e}")))?;
let stdout = String::from_utf8_lossy(&output.stdout);
serde_json::from_str::<serde_json::Value>(stdout.trim()).map_err(|e| {
let stderr = String::from_utf8_lossy(&output.stderr);
ToolError(format!(
"invalid JSON from validate: {e}\nstderr: {}",
stderr.trim()
))
})
}
}
// ── Tool: impact
// ──────────────────────────────────────────────────────────────────
#[derive(Deserialize, JsonSchema, Default)]
struct ImpactInput {
/// Ontology node id to trace impact for (e.g. "dag-formalized").
node: String,
/// Project slug. Omit to use the default project.
project: Option<String>,
/// Maximum edge hops to follow (default 2, max 5).
depth: Option<u32>,
/// When true, follow connections.ncl entries to external projects.
include_external: Option<bool>,
}
struct ImpactTool;
impl ToolBase for ImpactTool {
type Parameter = ImpactInput;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_impact".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Trace the impact graph of an ontology node: which nodes depend on it and which it \
depends on. Set include_external=true to follow cross-project connections declared \
in connections.ncl."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for ImpactTool {
async fn invoke(
service: &OntoreServer,
param: ImpactInput,
) -> Result<serde_json::Value, ToolError> {
debug!(tool = "impact", node = %param.node, project = ?param.project);
let effective_slug = param
.project
.clone()
.unwrap_or_else(|| service.state.registry.primary_slug().to_owned());
let fed = crate::federation::FederatedQuery::new(Arc::clone(&service.state.registry));
let depth = param.depth.unwrap_or(2).min(5);
let include_external = param.include_external.unwrap_or(false);
let impacts = fed
.impact_graph(&effective_slug, &param.node, depth, include_external)
.await;
Ok(serde_json::json!({
"slug": effective_slug,
"node": param.node,
"depth": depth,
"include_external": include_external,
"impacts": impacts,
}))
}
}
// ── Tool: validate_project
// ────────────────────────────────────────────────────────
struct ValidateProjectTool;
impl ToolBase for ValidateProjectTool {
type Parameter = ProjectParam;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_validate".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Run comprehensive project validation: ADR typed constraints, content asset paths, \
connection health, practice coverage, and gate/dimension consistency. Returns a \
structured compliance report. Non-zero exit when any Hard constraint fails."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for ValidateProjectTool {
async fn invoke(
service: &OntoreServer,
param: ProjectParam,
) -> Result<serde_json::Value, ToolError> {
debug!(tool = "validate_project", project = ?param.project);
let ctx = service.project_ctx(param.project.as_deref());
// Run the aggregate summary step directly — faster than spawning the full DAG
// mode.
let output = tokio::process::Command::new("nu")
.args([
"--no-config-file",
"-c",
"use reflection/modules/validate.nu *; validate check-all --fmt json",
])
.current_dir(&ctx.root)
.output()
.await
.map_err(|e| ToolError(format!("spawn failed: {e}")))?;
let stdout = String::from_utf8_lossy(&output.stdout);
let passed = output.status.success();
let report =
serde_json::from_str::<serde_json::Value>(stdout.trim()).unwrap_or_else(|_| {
let stderr = String::from_utf8_lossy(&output.stderr);
serde_json::json!({
"raw_stdout": stdout.trim(),
"stderr": stderr.trim(),
})
});
Ok(serde_json::json!({
"passed": passed,
"report": report,
}))
}
}
// ── Tool: guides
// ──────────────────────────────────────────────────────────────────
struct GuidesTool;
impl ToolBase for GuidesTool {
type Parameter = GuidesInput;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_guides".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Return a complete project guide: identity, axioms, practices, constraints, gate \
state, available modes, actor-specific policy, language guides, content assets, and \
             connections. A single deterministic JSON response: the canonical entry point for \
             any actor arriving at a project cold."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for GuidesTool {
async fn invoke(
service: &OntoreServer,
param: GuidesInput,
) -> Result<serde_json::Value, ToolError> {
let actor = param.actor.as_deref().unwrap_or("agent");
debug!(tool = "guides", project = ?param.project, actor);
let ctx = service.project_ctx(param.project.as_deref());
let nu_cmd = format!(
"use reflection/modules/describe.nu *; describe guides --actor {} --fmt json",
actor,
);
let output = tokio::process::Command::new("nu")
.args(["--no-config-file", "-c", &nu_cmd])
.current_dir(&ctx.root)
.output()
.await
.map_err(|e| ToolError(format!("spawn failed: {e}")))?;
let stdout = String::from_utf8_lossy(&output.stdout);
serde_json::from_str::<serde_json::Value>(stdout.trim()).map_err(|e| {
let stderr = String::from_utf8_lossy(&output.stderr);
ToolError(format!(
"invalid JSON from describe guides: {e}\nstderr: {}",
stderr.trim()
))
})
}
}
// ── Tool: api_catalog
// ────────────────────────────────────────────────────────────
struct ApiCatalogTool;
impl ToolBase for ApiCatalogTool {
type Parameter = ApiCatalogInput;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_api_catalog".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Return the annotated daemon API surface: all HTTP routes with method, path, auth \
level, allowed actors, parameters, and tags. Filterable by actor, tag, or auth. Use \
to understand what endpoints are available and how to call them."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for ApiCatalogTool {
async fn invoke(
_service: &OntoreServer,
param: ApiCatalogInput,
) -> Result<serde_json::Value, ToolError> {
debug!(tool = "api_catalog", actor = ?param.actor, tag = ?param.tag, auth = ?param.auth);
let routes: Vec<serde_json::Value> = crate::api_catalog::catalog()
.into_iter()
.filter(|r| {
let actor_ok = param.actor.as_deref().is_none_or(|a| r.actors.contains(&a));
let tag_ok = param.tag.as_deref().is_none_or(|t| r.tags.contains(&t));
let auth_ok = param.auth.as_deref().is_none_or(|a| r.auth == a);
actor_ok && tag_ok && auth_ok
})
.map(|r| {
let params: Vec<serde_json::Value> = r
.params
.iter()
.map(|p| {
serde_json::json!({
"name": p.name,
"type": p.kind,
"constraint": p.constraint,
"description": p.description,
})
})
.collect();
serde_json::json!({
"method": r.method,
"path": r.path,
"description": r.description,
"auth": r.auth,
"actors": r.actors,
"params": params,
"tags": r.tags,
"feature": r.feature,
})
})
.collect();
let total = routes.len();
Ok(serde_json::json!({ "routes": routes, "total": total }))
}
}
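The three optional filters above share one pass-when-unset shape: a filter that was not supplied matches every route. A minimal standalone sketch of that predicate (hypothetical helper name, std only; `Option::is_none_or` is stable since Rust 1.82):

```rust
// Each optional filter passes when the caller left it unset (None)
// or when it equals the route's value.
fn passes(filter: Option<&str>, value: &str) -> bool {
    filter.is_none_or(|f| f == value)
}
```

In `ApiCatalogTool::invoke`, the route is kept only when all three such predicates (actor, tag, auth) hold at once.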
// ── Tool: file_versions
// ──────────────────────────────────────────────────────────
struct FileVersionsTool;
impl ToolBase for FileVersionsTool {
type Parameter = FileVersionsInput;
type Output = serde_json::Value;
type Error = ToolError;
fn name() -> Cow<'static, str> {
"ontoref_file_versions".into()
}
fn description() -> Option<Cow<'static, str>> {
Some(
"Return per-file reload counters for the project's ontology files. Each counter \
increments when the file changes on disk and the daemon reloads it. Use to detect \
which ontology files have changed since the agent last read them."
.into(),
)
}
fn output_schema() -> Option<Arc<JsonObject>> {
None
}
}
impl AsyncTool<OntoreServer> for FileVersionsTool {
async fn invoke(
service: &OntoreServer,
param: FileVersionsInput,
) -> Result<serde_json::Value, ToolError> {
debug!(tool = "file_versions", project = ?param.project);
let current = service
.state
.mcp_current_project
.read()
.ok()
.and_then(|g| g.clone());
let effective = param.project.or(current);
#[cfg(feature = "ui")]
if let Some(slug) = effective.as_deref() {
if let Some(ctx) = service.state.registry.get(slug) {
let files: std::collections::BTreeMap<String, u64> = ctx
.file_versions
.iter()
.map(|r| {
let name = r
.key()
.file_name()
.unwrap_or_default()
.to_string_lossy()
.into_owned();
(name, *r.value())
})
.collect();
return Ok(serde_json::json!({ "project": slug, "files": files }));
}
}
let _ = effective;
let primary = service.state.registry.primary();
let slug = service.state.registry.primary_slug().to_owned();
let files: std::collections::BTreeMap<String, u64> = primary
.file_versions
.iter()
.map(|r| {
let name = r
.key()
.file_name()
.unwrap_or_default()
.to_string_lossy()
.into_owned();
(name, *r.value())
})
.collect();
Ok(serde_json::json!({ "project": slug, "files": files }))
}
}
fn ncl_str_array(items: &[String]) -> String {
if items.is_empty() {
return "[]".to_string();
@@ -1910,9 +2382,16 @@ impl ServerHandler for OntoreServer {
"Ontoref semantic knowledge graph. All tools are prefixed `ontoref_`. ",
"Start with `ontoref_help` to see all tools and the current active project. ",
"Use `ontoref_set_project` once to avoid repeating `project` on every call. ",
"Use `ontoref_guides` for full project context on cold start (axioms, practices, \
gate, actor policy). ",
"Use `ontoref_search` for queries; then `ontoref_get` with the returned kind+id \
for details. ",
"Use `ontoref_status` for a project dashboard. ",
"Use `ontoref_api_catalog` to discover daemon endpoints by actor or tag. ",
"Use `ontoref_file_versions` to detect which ontology files changed since last \
read. ",
"Use `ontoref_validate_adrs` or `ontoref_validate` to run architectural \
constraint checks. ",
"Use `ontoref_backlog` to add items or update status.",
))
}

View File

@@ -103,6 +103,10 @@ pub struct ProjectContext {
/// Incremented by 1 after each successful `seed_ontology` completion.
/// Clients can compare versions to detect stale local state.
pub ontology_version: Arc<AtomicU64>,
/// Per-file change counters. Keyed by canonical absolute path; incremented
/// on every cache invalidation for that file. Consumers compare snapshots
/// to detect which individual files changed between polls.
pub file_versions: Arc<DashMap<PathBuf, u64>>,
}
impl ProjectContext {
@@ -343,6 +347,7 @@ pub fn make_context(spec: ContextSpec) -> ProjectContext {
push_only: spec.push_only,
seed_lock: Arc::new(Semaphore::new(1)),
ontology_version: Arc::new(AtomicU64::new(0)),
file_versions: Arc::new(DashMap::new()),
}
}
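Consumers of `file_versions` "compare snapshots" as the field doc says; a hypothetical single-threaded sketch, with `BTreeMap` standing in for the shared map, of diffing two polls to find files that reloaded in between:

```rust
use std::collections::BTreeMap;

// Return the names whose counter is new or higher in the current snapshot,
// i.e. files the daemon reloaded since the previous poll.
fn changed<'a>(prev: &BTreeMap<String, u64>, curr: &'a BTreeMap<String, u64>) -> Vec<&'a str> {
    let mut out = Vec::new();
    for (name, v) in curr {
        if prev.get(name).map_or(true, |p| p < v) {
            out.push(name.as_str());
        }
    }
    out
}
```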

View File

@@ -32,6 +32,16 @@ pub struct SyncPayload {
/// (401): remote projects must always configure at least one key.
/// - Local projects (`push_only = false`) with no keys are accepted without
/// auth (assumed to be loopback/trusted network only).
#[ontoref_derive::onto_api(
method = "POST",
path = "/sync",
description = "Push-based sync: remote projects POST their NCL export JSON here to update the \
daemon cache",
auth = "viewer",
actors = "ci, agent",
params = "slug:string:required:Project slug from Authorization header context",
tags = "sync, federation"
)]
pub async fn sync_push(
State(state): State<AppState>,
headers: HeaderMap,

View File

@@ -208,21 +208,40 @@ pub(crate) fn insert_mcp_ctx(ctx: &mut Context) {
ctx.insert(
"mcp_tools",
&[
// discovery
"ontoref_help",
"ontoref_list_projects",
"ontoref_set_project",
// search + retrieval
"ontoref_search",
"ontoref_get",
"ontoref_get_node",
"ontoref_get_adr",
"ontoref_get_mode",
// project state
"ontoref_status",
"ontoref_describe",
"ontoref_guides",
// ontology
"ontoref_list_adrs",
"ontoref_list_modes",
"ontoref_list_ontology_extensions",
"ontoref_get_ontology_extension",
"ontoref_constraints",
// backlog + actions
"ontoref_backlog_list",
"ontoref_backlog",
"ontoref_action_list",
"ontoref_action_add",
// validation
"ontoref_validate_adrs",
"ontoref_validate",
"ontoref_impact",
// qa + bookmarks
"ontoref_qa_list",
"ontoref_qa_add",
"ontoref_bookmark_list",
"ontoref_bookmark_add",
],
);
}
@@ -859,6 +878,22 @@ pub async fn dashboard_mp(
ctx.insert("adr_count", &adr_count);
ctx.insert("mode_count", &mode_count);
ctx.insert("current_role", &auth_role_str(&auth));
let file_versions: std::collections::BTreeMap<String, u64> = ctx_ref
.file_versions
.iter()
.map(|r| {
let name = r
.key()
.file_name()
.unwrap_or_default()
.to_string_lossy()
.into_owned();
(name, *r.value())
})
.collect();
ctx.insert("file_versions", &file_versions);
insert_brand_ctx(
&mut ctx,
&ctx_ref.root,
@@ -871,6 +906,63 @@ pub async fn dashboard_mp(
render(tera, "pages/dashboard.html", &ctx).await
}
pub async fn api_catalog_page_mp(
State(state): State<AppState>,
Path(slug): Path<String>,
auth: AuthUser,
) -> Result<Html<String>, UiError> {
let tera = tera_ref(&state)?;
let ctx_ref = state.registry.get(&slug).ok_or(UiError::NotConfigured)?;
let base_url = format!("/ui/{slug}");
let routes: Vec<serde_json::Value> = crate::api_catalog::catalog()
.into_iter()
.map(|r| {
let params: Vec<serde_json::Value> = r
.params
.iter()
.map(|p| {
serde_json::json!({
"name": p.name,
"type": p.kind,
"constraint": p.constraint,
"description": p.description,
})
})
.collect();
serde_json::json!({
"method": r.method,
"path": r.path,
"description": r.description,
"auth": r.auth,
"actors": r.actors,
"params": params,
"tags": r.tags,
"feature": r.feature,
})
})
.collect();
let catalog_json = serde_json::to_string(&routes).unwrap_or_else(|_| "[]".to_string());
let mut ctx = Context::new();
ctx.insert("catalog_json", &catalog_json);
ctx.insert("route_count", &routes.len());
ctx.insert("base_url", &base_url);
ctx.insert("slug", &slug);
ctx.insert("current_role", &auth_role_str(&auth));
insert_brand_ctx(
&mut ctx,
&ctx_ref.root,
&ctx_ref.cache,
ctx_ref.import_path.as_deref(),
&base_url,
)
.await;
render(tera, "pages/api_catalog.html", &ctx).await
}
pub async fn graph_mp(
State(state): State<AppState>,
Path(slug): Path<String>,

View File

@@ -91,6 +91,7 @@ fn multi_router(state: AppState) -> axum::Router {
get(handlers::compose_form_schema_mp),
)
.route("/{slug}/compose/send", post(handlers::compose_send_mp))
.route("/{slug}/api", get(handlers::api_catalog_page_mp))
.route("/{slug}/actions", get(handlers::actions_page_mp))
.route("/{slug}/actions/run", post(handlers::actions_run_mp))
.route("/{slug}/qa", get(handlers::qa_page_mp))

View File

@@ -49,6 +49,9 @@ pub struct WatcherDeps {
pub seed_lock: Arc<Semaphore>,
/// Shared with `ProjectContext` — incremented after each successful seed.
pub ontology_version: Arc<AtomicU64>,
/// Shared with `ProjectContext` — per-file change counters, keyed by
/// canonical path. Incremented unconditionally on every cache invalidation.
pub file_versions: Arc<dashmap::DashMap<std::path::PathBuf, u64>>,
}
impl FileWatcher {
@@ -179,6 +182,7 @@ async fn debounce_loop(
for path in &canonical {
cache.invalidate_file(path);
*deps.file_versions.entry(path.clone()).or_insert(0) += 1;
}
info!(

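The `entry(..).or_insert(0) += 1` bump in the debounce loop can be pictured with a plain `HashMap` as a single-threaded stand-in for the shared `DashMap` (hypothetical helper name):

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

// Bump the per-file counter the way the debounce loop does:
// insert 0 the first time a path is seen, then increment on every
// cache invalidation for that path. Returns the new counter value.
fn bump(versions: &mut HashMap<PathBuf, u64>, path: &Path) -> u64 {
    let v = versions.entry(path.to_path_buf()).or_insert(0);
    *v += 1;
    *v
}
```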
View File

@@ -87,6 +87,10 @@
<svg class="nav-icon w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9.663 17h4.673M12 3v1m6.364 1.636l-.707.707M21 12h-1M4 12H3m3.343-5.657l-.707-.707m2.828 9.9a5 5 0 117.072 0l-.548.547A3.374 3.374 0 0014 18.469V19a2 2 0 11-4 0v-.531c0-.895-.356-1.754-.988-2.386l-.548-.547z"/></svg>
<span class="nav-label">Compose</span>
</a></li>
<li><a href="{{ base_url }}/api" class="gap-1.5">
<svg class="nav-icon w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M10 20l4-16m4 4l4 4-4 4M6 16l-4-4 4-4"/></svg>
<span class="nav-label">API</span>
</a></li>
<li class="divider my-0.5"></li>
{% endif %}
{% if not slug or current_role == "admin" %}
@@ -178,6 +182,10 @@
<svg class="nav-icon w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9.663 17h4.673M12 3v1m6.364 1.636l-.707.707M21 12h-1M4 12H3m3.343-5.657l-.707-.707m2.828 9.9a5 5 0 117.072 0l-.548.547A3.374 3.374 0 0014 18.469V19a2 2 0 11-4 0v-.531c0-.895-.356-1.754-.988-2.386l-.548-.547z"/></svg>
<span class="nav-label">Compose</span>
</a></li>
<li><a href="{{ base_url }}/api" class="gap-1.5 {% block nav_api %}{% endblock nav_api %}">
<svg class="nav-icon w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M10 20l4-16m4 4l4 4-4 4M6 16l-4-4 4-4"/></svg>
<span class="nav-label">API</span>
</a></li>
</ul>
{% endif %}
</div>

View File

@@ -0,0 +1,173 @@
{% extends "base.html" %}
{% import "macros/ui.html" as m %}
{% block title %}API Catalog — Ontoref{% endblock title %}
{% block nav_api %}active{% endblock nav_api %}
{% block head %}
<style>
.auth-none { @apply badge badge-ghost badge-xs font-mono; }
.auth-viewer { @apply badge badge-info badge-xs font-mono; }
.auth-admin { @apply badge badge-error badge-xs font-mono; }
.method-get { color: #4ade80; }
.method-post { color: #60a5fa; }
.method-put { color: #f59e0b; }
.method-delete { color: #f87171; }
.method-patch { color: #c084fc; }
</style>
{% endblock head %}
{% block content %}
<div class="mb-6 flex items-center justify-between">
<div>
<h1 class="text-2xl font-bold">API Catalog</h1>
<p class="text-base-content/50 text-sm mt-1">Annotated HTTP surface — generated from <code>#[onto_api]</code> annotations</p>
</div>
<span class="badge badge-lg badge-neutral">{{ route_count }} routes</span>
</div>
<!-- Filter bar -->
<div class="flex flex-wrap gap-2 mb-4" id="filter-bar">
<input id="filter-input" type="text" placeholder="Filter by path or description…"
class="input input-sm input-bordered flex-1 min-w-48 font-mono"
oninput="filterRoutes()">
<select id="filter-auth" class="select select-sm select-bordered" onchange="filterRoutes()">
<option value="">All auth</option>
<option value="none">none</option>
<option value="viewer">viewer</option>
<option value="admin">admin</option>
</select>
<select id="filter-method" class="select select-sm select-bordered" onchange="filterRoutes()">
<option value="">All methods</option>
<option value="GET">GET</option>
<option value="POST">POST</option>
<option value="PUT">PUT</option>
<option value="DELETE">DELETE</option>
</select>
</div>
<!-- Routes table -->
<div class="overflow-x-auto" id="routes-container">
<table class="table table-sm w-full bg-base-200 rounded-lg" id="routes-table">
<thead>
<tr class="text-base-content/50 text-xs uppercase tracking-wider">
<th class="w-16">Method</th>
<th>Path</th>
<th>Description</th>
<th class="w-20">Auth</th>
<th>Actors</th>
<th>Tags</th>
</tr>
</thead>
<tbody id="routes-body">
</tbody>
</table>
</div>
<!-- Route detail panel -->
<div id="route-detail" class="hidden mt-4 p-4 bg-base-200 rounded-lg border border-base-300">
<div class="flex justify-between items-start mb-3">
<div>
<span id="detail-method" class="font-mono font-bold text-lg"></span>
<span id="detail-path" class="font-mono text-base-content/70 ml-2"></span>
</div>
      <button onclick="closeDetail()" class="btn btn-xs btn-ghost">✕</button>
</div>
<p id="detail-desc" class="text-sm text-base-content/80 mb-3"></p>
<div id="detail-params" class="hidden">
<h3 class="text-xs font-semibold uppercase tracking-wider text-base-content/50 mb-2">Parameters</h3>
<table class="table table-xs w-full">
<thead><tr class="text-base-content/40 text-xs"><th>Name</th><th>Type</th><th>Constraint</th><th>Description</th></tr></thead>
<tbody id="detail-params-body"></tbody>
</table>
</div>
<div class="mt-3 flex gap-3 text-xs text-base-content/40">
<span>Feature: <code id="detail-feature" class="font-mono"></code></span>
</div>
</div>
<script>
const ROUTES = {{ catalog_json | safe }};
function methodClass(m) {
return `method-${m.toLowerCase()}`;
}
function authBadge(auth) {
return `<span class="auth-${auth}">${auth}</span>`;
}
function actorBadges(actors) {
  if (!actors || actors.length === 0) return '<span class="text-base-content/30">–</span>';
return actors.map(a => `<span class="badge badge-xs badge-ghost font-mono">${a}</span>`).join(' ');
}
function tagBadges(tags) {
if (!tags || tags.length === 0) return '';
return tags.map(t => `<span class="badge badge-xs badge-outline">${t}</span>`).join(' ');
}
let activeRoute = null;
function renderRoutes(routes) {
const tbody = document.getElementById('routes-body');
tbody.innerHTML = routes.map((r, i) => `
<tr class="hover cursor-pointer route-row" data-index="${i}" onclick="showDetail(${i})">
<td class="font-mono font-bold ${methodClass(r.method)}">${r.method}</td>
<td class="font-mono text-sm">${r.path}</td>
<td class="text-sm text-base-content/70">${r.description}</td>
<td>${authBadge(r.auth)}</td>
<td class="flex flex-wrap gap-1">${actorBadges(r.actors)}</td>
<td>${tagBadges(r.tags)}</td>
</tr>
`).join('');
}
function showDetail(index) {
const r = ROUTES[index];
activeRoute = index;
document.getElementById('detail-method').textContent = r.method;
document.getElementById('detail-method').className = `font-mono font-bold text-lg ${methodClass(r.method)}`;
document.getElementById('detail-path').textContent = r.path;
document.getElementById('detail-desc').textContent = r.description;
document.getElementById('detail-feature').textContent = r.feature || 'default';
const paramsDiv = document.getElementById('detail-params');
const tbody = document.getElementById('detail-params-body');
if (r.params && r.params.length > 0) {
tbody.innerHTML = r.params.map(p => `
<tr>
<td class="font-mono text-xs">${p.name}</td>
<td class="text-xs text-base-content/60">${p.type || ''}</td>
<td class="text-xs text-base-content/50">${p.constraint || ''}</td>
<td class="text-xs">${p.description || ''}</td>
</tr>
`).join('');
paramsDiv.classList.remove('hidden');
} else {
paramsDiv.classList.add('hidden');
}
document.getElementById('route-detail').classList.remove('hidden');
}
function closeDetail() {
document.getElementById('route-detail').classList.add('hidden');
activeRoute = null;
}
function filterRoutes() {
const text = document.getElementById('filter-input').value.toLowerCase();
const auth = document.getElementById('filter-auth').value;
const method = document.getElementById('filter-method').value;
const filtered = ROUTES.filter(r => {
const textMatch = !text || r.path.toLowerCase().includes(text) || r.description.toLowerCase().includes(text);
const authMatch = !auth || r.auth === auth;
const methodMatch = !method || r.method === method;
return textMatch && authMatch && methodMatch;
});
renderRoutes(filtered);
}
renderRoutes(ROUTES);
</script>
{% endblock content %}

View File

@@ -87,5 +87,25 @@
<p class="text-sm text-base-content/60">Saved questions and answers about this project</p>
</div>
</a>
<a href="{{ base_url }}/api" class="card bg-base-200 hover:bg-base-300 transition-colors cursor-pointer">
<div class="card-body">
<h2 class="card-title text-secondary">API Catalog</h2>
<p class="text-sm text-base-content/60">Annotated HTTP surface — methods, auth, actors, params</p>
</div>
</a>
</div>
{% if file_versions and file_versions | length > 0 %}
<div class="mt-6">
<h2 class="text-sm font-semibold text-base-content/50 uppercase tracking-wider mb-2">Ontology File Versions</h2>
<div class="bg-base-200 rounded-lg p-3 font-mono text-xs grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 gap-x-6 gap-y-1">
{% for file, ver in file_versions %}
<div class="flex justify-between gap-2">
<span class="text-base-content/70 truncate">{{ file }}</span>
<span class="text-primary font-bold flex-shrink-0">v{{ ver }}</span>
</div>
{% endfor %}
</div>
</div>
{% endif %}
{% endblock content %}

View File

@@ -0,0 +1,14 @@
[package]
name = "ontoref-derive"
version = "0.1.0"
edition = "2021"
description = "Proc-macro derives for ontoref: #[onto_api], #[derive(OntologyNode)], and #[onto_validates]"
license = "MIT OR Apache-2.0"
[lib]
proc-macro = true
[dependencies]
syn = { version = "2", features = ["full"] }
quote = "1"
proc-macro2 = "1"

View File

@@ -0,0 +1,603 @@
use proc_macro::TokenStream;
use proc_macro2::Span;
use quote::quote;
use syn::{
parse_macro_input, punctuated::Punctuated, DeriveInput, Expr, ExprLit, Lit, LitStr,
MetaNameValue, Token,
};
// ── #[onto_api(...)]
// ──────────────────────────────────────────────────────────
/// Attribute macro for daemon HTTP handler functions.
///
/// Registers the endpoint in the `api_catalog` at link time via
/// `inventory::submit!`. The annotated function is emitted unchanged.
///
/// # Required keys
/// - `method = "GET"` — HTTP verb
/// - `path = "/graph/impact"` — URL path pattern (axum syntax)
/// - `description = "..."` — one-line description of what the endpoint does
///
/// # Optional keys
/// - `auth = "none"` — authentication level: "none" | "viewer" | "admin"
/// (default: "none")
/// - `actors = "agent, developer"` — comma-separated actor contexts
/// - `params = "name:type:constraint:desc; ..."` — semicolon-separated param
/// entries
/// - `tags = "graph, federation"` — comma-separated semantic tags
/// - `feature = "db"` — feature flag required for this endpoint (empty = always
/// available)
///
/// # Example
/// ```ignore
/// #[onto_api(
/// method = "GET", path = "/graph/impact",
/// description = "Cross-project impact graph from an ontology node",
/// auth = "viewer", actors = "agent, developer",
/// params = "node:string:required:Ontology node id; depth:u32:default=2:Max BFS hops",
/// tags = "graph, federation",
/// )]
/// async fn graph_impact(...) { ... }
/// ```
#[proc_macro_attribute]
pub fn onto_api(args: TokenStream, input: TokenStream) -> TokenStream {
match expand_onto_api(args, input) {
Ok(ts) => ts.into(),
Err(err) => err.to_compile_error().into(),
}
}
/// Parsed fields from `#[onto_api(...)]`.
struct OntoApiAttr {
method: String,
path: String,
description: String,
auth: String,
actors: Vec<String>,
params: Vec<OntoApiParam>,
tags: Vec<String>,
feature: String,
}
struct OntoApiParam {
name: String,
kind: String,
constraint: String,
description: String,
}
fn expand_onto_api(args: TokenStream, input: TokenStream) -> syn::Result<proc_macro2::TokenStream> {
let item = proc_macro2::TokenStream::from(input);
let kv_args = syn::parse::Parser::parse(
Punctuated::<MetaNameValue, Token![,]>::parse_terminated,
args,
)?;
let mut method: Option<String> = None;
let mut path: Option<String> = None;
let mut description: Option<String> = None;
let mut auth = "none".to_owned();
let mut actors: Vec<String> = Vec::new();
let mut params_raw: Option<String> = None;
let mut tags: Vec<String> = Vec::new();
let mut feature = String::new();
for kv in &kv_args {
let key = kv
.path
.get_ident()
.ok_or_else(|| syn::Error::new_spanned(&kv.path, "expected identifier"))?
.to_string();
let val = lit_str(&kv.value)
.ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
match key.as_str() {
"method" => method = Some(val),
"path" => path = Some(val),
"description" => description = Some(val),
"auth" => match val.as_str() {
"none" | "viewer" | "admin" => auth = val,
other => {
return Err(syn::Error::new_spanned(
&kv.value,
format!("unknown auth level '{other}'; expected none | viewer | admin"),
))
}
},
"actors" => actors = split_csv(&val),
"params" => params_raw = Some(val),
"tags" => tags = split_csv(&val),
"feature" => feature = val,
other => {
return Err(syn::Error::new_spanned(
&kv.path,
format!("unknown onto_api key: {other}"),
))
}
}
}
let method = method.ok_or_else(|| {
syn::Error::new(Span::call_site(), "#[onto_api] requires method = \"...\"")
})?;
let path = path
.ok_or_else(|| syn::Error::new(Span::call_site(), "#[onto_api] requires path = \"...\""))?;
let desc = description.ok_or_else(|| {
syn::Error::new(
Span::call_site(),
"#[onto_api] requires description = \"...\"",
)
})?;
let params = parse_params(params_raw.as_deref().unwrap_or(""))?;
let attr = OntoApiAttr {
method,
path,
description: desc,
auth,
actors,
params,
tags,
feature,
};
let ts = emit_onto_api(attr, item);
Ok(ts)
}
fn split_csv(s: &str) -> Vec<String> {
s.split(',')
.map(|p| p.trim().to_owned())
.filter(|p| !p.is_empty())
.collect()
}
/// Parse `"name:type:constraint:description; ..."` param string.
/// Separator between params: `;`. Fields within a param: `:` (max 4 splits).
fn parse_params(raw: &str) -> syn::Result<Vec<OntoApiParam>> {
if raw.trim().is_empty() {
return Ok(Vec::new());
}
raw.split(';')
.map(|entry| {
let parts: Vec<&str> = entry.trim().splitn(4, ':').collect();
if parts.len() < 3 {
return Err(syn::Error::new(
Span::call_site(),
format!("param entry '{entry}' must have at least name:type:constraint"),
));
}
Ok(OntoApiParam {
name: parts[0].trim().to_owned(),
kind: parts[1].trim().to_owned(),
constraint: parts[2].trim().to_owned(),
description: parts.get(3).map(|s| s.trim()).unwrap_or("").to_owned(),
})
})
.collect()
}
fn emit_onto_api(attr: OntoApiAttr, item: proc_macro2::TokenStream) -> proc_macro2::TokenStream {
let method = LitStr::new(&attr.method, Span::call_site());
let path = LitStr::new(&attr.path, Span::call_site());
let desc = LitStr::new(&attr.description, Span::call_site());
let auth = LitStr::new(&attr.auth, Span::call_site());
let feature = LitStr::new(&attr.feature, Span::call_site());
let actor_lits: Vec<LitStr> = attr
.actors
.iter()
.map(|a| LitStr::new(a, Span::call_site()))
.collect();
let tag_lits: Vec<LitStr> = attr
.tags
.iter()
.map(|t| LitStr::new(t, Span::call_site()))
.collect();
let param_exprs: Vec<_> = attr
.params
.iter()
.map(|p| {
let n = LitStr::new(&p.name, Span::call_site());
let k = LitStr::new(&p.kind, Span::call_site());
let c = LitStr::new(&p.constraint, Span::call_site());
let d = LitStr::new(&p.description, Span::call_site());
quote! {
crate::api_catalog::ApiParam { name: #n, kind: #k, constraint: #c, description: #d }
}
})
.collect();
// Ident derived from method+path to avoid static name collisions between
// routes registered in the same module.
let unique = {
let s = format!("{}{}", attr.method, attr.path);
s.bytes()
.fold(5381u64, |h, b| h.wrapping_mul(33).wrapping_add(b as u64))
};
let static_ident = syn::Ident::new(
&format!("__ONTOREF_API_ROUTE_{unique:x}"),
Span::call_site(),
);
quote! {
::inventory::submit! {
crate::api_catalog::ApiRouteEntry {
method: #method,
path: #path,
description: #desc,
auth: #auth,
actors: &[#(#actor_lits),*],
params: &[#(#param_exprs),*],
tags: &[#(#tag_lits),*],
feature: #feature,
}
}
#[doc(hidden)]
#[allow(non_upper_case_globals, dead_code)]
static #static_ident: () = ();
#item
}
}
// ── Attribute parsing
// ─────────────────────────────────────────────────────────
/// Parsed contents of a single `#[onto(...)]` attribute.
#[derive(Default)]
struct OntoAttr {
id: Option<String>,
level: Option<String>,
pole: Option<String>,
description: Option<String>,
adrs: Vec<String>,
invariant: Option<bool>,
}
/// Parse `key = "value"` pairs from a `#[onto(k = "v", ...)]` attribute.
fn parse_onto_attr(attr: &syn::Attribute) -> syn::Result<OntoAttr> {
let mut out = OntoAttr::default();
let args = attr.parse_args_with(Punctuated::<MetaNameValue, Token![,]>::parse_terminated)?;
for kv in &args {
let key = kv
.path
.get_ident()
.ok_or_else(|| syn::Error::new_spanned(&kv.path, "expected identifier"))?
.to_string();
match key.as_str() {
"id" | "level" | "pole" | "description" => {
let s = lit_str(&kv.value)
.ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
match key.as_str() {
"id" => out.id = Some(s),
"level" => out.level = Some(s),
"pole" => out.pole = Some(s),
"description" => out.description = Some(s),
_ => unreachable!(),
}
}
"adrs" => {
// adrs = "adr-001, adr-002" — comma-separated list in a single string
let s = lit_str(&kv.value)
.ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
out.adrs = s.split(',').map(|a| a.trim().to_owned()).collect();
}
"invariant" => {
out.invariant =
Some(lit_bool(&kv.value).ok_or_else(|| {
syn::Error::new_spanned(&kv.value, "expected bool literal")
})?);
}
other => {
return Err(syn::Error::new_spanned(
&kv.path,
format!("unknown onto key: {other}"),
));
}
}
}
Ok(out)
}
fn lit_str(expr: &Expr) -> Option<String> {
if let Expr::Lit(ExprLit {
lit: Lit::Str(s), ..
}) = expr
{
Some(s.value())
} else {
None
}
}
fn lit_bool(expr: &Expr) -> Option<bool> {
if let Expr::Lit(ExprLit {
lit: Lit::Bool(b), ..
}) = expr
{
Some(b.value())
} else {
None
}
}
// ── #[derive(OntologyNode)]
// ───────────────────────────────────────────────────
/// Derive macro that registers a Rust type as an
/// [`ontoref_ontology::NodeContribution`].
///
/// The `#[onto(...)]` attribute declares the node's identity in the ontology
/// DAG. All `#[onto]` helper attributes on the type are merged in declaration
/// order — later keys overwrite earlier ones, except `adrs` which concatenates.
///
/// # Required attributes
/// - `id = "my-node-id"` — unique node identifier (must match NCL convention)
/// - `level = "Practice"` — [`AbstractionLevel`] variant name
/// - `pole = "Yang"` — [`Pole`] variant name
///
/// # Optional attributes
/// - `description = "..."` — human-readable description
/// - `adrs = "adr-001, adr-002"` — comma-separated ADR references
/// - `invariant = true` — mark node as invariant (default: false)
///
/// # Example
/// ```ignore
/// #[derive(OntologyNode)]
/// #[onto(id = "ncl-cache", level = "Practice", pole = "Yang")]
/// #[onto(description = "Caches NCL exports to avoid re-eval on unchanged files")]
/// #[onto(adrs = "adr-002, adr-004")]
/// pub struct NclCache { /* ... */ }
/// ```
///
/// [`AbstractionLevel`]: ontoref_ontology::AbstractionLevel
/// [`Pole`]: ontoref_ontology::Pole
#[proc_macro_derive(OntologyNode, attributes(onto))]
pub fn derive_ontology_node(input: TokenStream) -> TokenStream {
let ast = parse_macro_input!(input as DeriveInput);
match expand_ontology_node(ast) {
Ok(ts) => ts.into(),
Err(err) => err.to_compile_error().into(),
}
}
fn expand_ontology_node(ast: DeriveInput) -> syn::Result<proc_macro2::TokenStream> {
// Merge all #[onto(...)] attributes on the type.
let mut merged = OntoAttr::default();
for attr in ast.attrs.iter().filter(|a| a.path().is_ident("onto")) {
let parsed = parse_onto_attr(attr)?;
if parsed.id.is_some() {
merged.id = parsed.id;
}
if parsed.level.is_some() {
merged.level = parsed.level;
}
if parsed.pole.is_some() {
merged.pole = parsed.pole;
}
if parsed.description.is_some() {
merged.description = parsed.description;
}
if parsed.invariant.is_some() {
merged.invariant = parsed.invariant;
}
merged.adrs.extend(parsed.adrs);
}
let id = merged.id.ok_or_else(|| {
syn::Error::new(
Span::call_site(),
"#[derive(OntologyNode)] requires #[onto(id = \"...\")]",
)
})?;
let level_str = merged.level.ok_or_else(|| {
syn::Error::new(
Span::call_site(),
"#[derive(OntologyNode)] requires #[onto(level = \"...\")]",
)
})?;
let pole_str = merged.pole.ok_or_else(|| {
syn::Error::new(
Span::call_site(),
"#[derive(OntologyNode)] requires #[onto(pole = \"...\")]",
)
})?;
// Validate level and pole at compile time via known variant names.
let level_variant = match level_str.as_str() {
"Axiom" => quote! { ::ontoref_ontology::AbstractionLevel::Axiom },
"Tension" => quote! { ::ontoref_ontology::AbstractionLevel::Tension },
"Practice" => quote! { ::ontoref_ontology::AbstractionLevel::Practice },
"Project" => quote! { ::ontoref_ontology::AbstractionLevel::Project },
"Moment" => quote! { ::ontoref_ontology::AbstractionLevel::Moment },
other => {
return Err(syn::Error::new(
Span::call_site(),
format!(
"unknown AbstractionLevel: {other}; expected one of Axiom, Tension, Practice, \
Project, Moment"
),
))
}
};
let pole_variant = match pole_str.as_str() {
"Yang" => quote! { ::ontoref_ontology::Pole::Yang },
"Yin" => quote! { ::ontoref_ontology::Pole::Yin },
"Spiral" => quote! { ::ontoref_ontology::Pole::Spiral },
other => {
return Err(syn::Error::new(
Span::call_site(),
format!("unknown Pole: {other}; expected one of Yang, Yin, Spiral"),
))
}
};
let description = merged.description.as_deref().unwrap_or("");
let invariant = merged.invariant.unwrap_or(false);
let adrs: Vec<LitStr> = merged
.adrs
.iter()
.filter(|s| !s.is_empty())
.map(|s| LitStr::new(s, Span::call_site()))
.collect();
let id_lit = LitStr::new(&id, Span::call_site());
let id_lit2 = id_lit.clone();
let description_lit = LitStr::new(description, Span::call_site());
// Derive a unique identifier for the inventory submission from the type name.
let type_name = &ast.ident;
let submission_ident = syn::Ident::new(
&format!("__ONTOREF_NODE_CONTRIB_{}", type_name),
Span::call_site(),
);
Ok(quote! {
#[automatically_derived]
impl #type_name {
/// Returns the ontology node declared by `#[derive(OntologyNode)]`.
pub fn ontology_node() -> ::ontoref_ontology::Node {
::ontoref_ontology::Node {
id: #id_lit.to_owned(),
name: #id_lit2.to_owned(),
pole: #pole_variant,
level: #level_variant,
description: #description_lit.to_owned(),
invariant: #invariant,
artifact_paths: vec![],
adrs: vec![#(#adrs.to_owned()),*],
}
}
}
#[cfg(feature = "derive")]
::inventory::submit! {
::ontoref_ontology::NodeContribution {
supplier: <#type_name>::ontology_node,
}
}
// Marker static namespaced by the type name, so a second derive expansion
// for the same type name in one module fails to compile.
#[cfg(feature = "derive")]
#[doc(hidden)]
static #submission_ident: () = ();
})
}
// ── #[onto_validates]
// ─────────────────────────────────────────────────────────
/// Attribute macro for test functions: registers which ontology practices and
/// ADRs the test validates.
///
/// Only active under `#[cfg(test)]` — zero production binary impact.
///
/// # Example
/// ```ignore
/// #[onto_validates(practice = "ncl-cache", adr = "adr-002")]
/// #[test]
/// fn cache_returns_stale_on_missing_file() { /* ... */ }
/// ```
#[proc_macro_attribute]
pub fn onto_validates(args: TokenStream, input: TokenStream) -> TokenStream {
match expand_onto_validates(args, input) {
Ok(ts) => ts.into(),
Err(err) => err.to_compile_error().into(),
}
}
fn expand_onto_validates(
args: TokenStream,
input: TokenStream,
) -> syn::Result<proc_macro2::TokenStream> {
let item = proc_macro2::TokenStream::from(input);
// Parse key=value pairs from the attribute args.
let kv_args = syn::parse::Parser::parse(
Punctuated::<MetaNameValue, Token![,]>::parse_terminated,
args,
)?;
let mut practice_id: Option<String> = None;
let mut adr_id: Option<String> = None;
for kv in &kv_args {
let key = kv
.path
.get_ident()
.ok_or_else(|| syn::Error::new_spanned(&kv.path, "expected identifier"))?
.to_string();
match key.as_str() {
"practice" => {
practice_id = Some(
lit_str(&kv.value)
.ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string"))?,
)
}
"adr" => {
adr_id = Some(
lit_str(&kv.value)
.ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string"))?,
)
}
other => {
return Err(syn::Error::new_spanned(
&kv.path,
format!("unknown onto_validates key: {other}; expected 'practice' or 'adr'"),
))
}
}
}
let practice_tokens = match &practice_id {
Some(p) => quote! { ::core::option::Option::Some(#p) },
None => quote! { ::core::option::Option::None },
};
let adr_tokens = match &adr_id {
Some(a) => quote! { ::core::option::Option::Some(#a) },
None => quote! { ::core::option::Option::None },
};
// The inventory submission needs an ident unique to this annotation.
// Derive it from a hash of the args so distinct practice/adr pairs
// yield distinct statics.
let hash = {
let s = format!(
"{}{}",
practice_id.as_deref().unwrap_or(""),
adr_id.as_deref().unwrap_or("")
);
// Simple djb2 hash for uniqueness in the ident.
s.bytes()
.fold(5381u64, |h, b| h.wrapping_mul(33).wrapping_add(b as u64))
};
let submission_ident = syn::Ident::new(
&format!("__ONTOREF_TEST_COVERAGE_{hash:x}"),
Span::call_site(),
);
Ok(quote! {
#[cfg(all(test, feature = "derive"))]
::inventory::submit! {
::ontoref_ontology::TestCoverage {
practice_id: #practice_tokens,
adr_id: #adr_tokens,
}
}
#[cfg(all(test, feature = "derive"))]
#[doc(hidden)]
static #submission_ident: () = ();
// Emit the original item unchanged.
#item
})
}

@ -5,12 +5,20 @@ edition = "2021"
description = "Load and query project ontology (.ontology/ NCL files) as typed Rust structs"
license = "MIT OR Apache-2.0"
[features]
# Enables statically-registered node contributions via `inventory` and re-exports
# the `#[derive(OntologyNode)]` and `#[onto_validates]` macros from `ontoref-derive`.
# Off by default — zero impact on binaries that do not use derive macros.
derive = ["dep:inventory", "dep:ontoref-derive"]
[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json = { version = "1" }
anyhow = { version = "1" }
thiserror = { version = "2" }
tracing = { version = "0.1" }
inventory = { version = "0.3", optional = true }
ontoref-derive = { path = "../ontoref-derive", optional = true }
[dev-dependencies]
tempfile = { version = "3" }

@ -0,0 +1,36 @@
/// A statically registered node contribution, submitted at link time.
///
/// Crates that derive `OntologyNode` emit an
/// `inventory::submit!(NodeContribution { ... })` call, which is collected
/// here. All submissions are merged into [`Core`] via
/// [`Core::merge_contributors`] — NCL-loaded nodes always win on id collision.
///
/// [`Core`]: crate::ontology::Core
pub struct NodeContribution {
/// Returns the node to contribute. Called once per contribution during
/// merge.
pub supplier: fn() -> crate::types::Node,
}
inventory::collect!(NodeContribution);
/// A statically registered test coverage entry, submitted at test-binary link
/// time.
///
/// Produced by `#[onto_validates(practice = "...", adr = "...")]` from
/// `ontoref-derive`. Collected by [`Core::uncovered_practices`] to identify
/// practices without test coverage.
///
/// Only present in test binaries — zero production binary impact because
/// `#[cfg(all(test, feature = "derive"))]` gates all `inventory::submit!`
/// calls.
///
/// [`Core::uncovered_practices`]: crate::ontology::Core::uncovered_practices
pub struct TestCoverage {
/// Practice node id validated by this test, if any.
pub practice_id: Option<&'static str>,
/// ADR id validated by this test, if any.
pub adr_id: Option<&'static str>,
}
inventory::collect!(TestCoverage);

@ -2,8 +2,17 @@ pub mod error;
pub mod ontology;
pub mod types;
#[cfg(feature = "derive")]
pub mod contrib;
#[cfg(feature = "derive")]
pub use contrib::{NodeContribution, TestCoverage};
pub use error::OntologyError;
pub use ontology::{Core, Gate, Ontology, State};
// Re-export the proc-macro crate so consumers only need to depend on
// `ontoref-ontology` with the `derive` feature — no separate ontoref-derive dep.
#[cfg(feature = "derive")]
pub use ontoref_derive::{onto_validates, OntologyNode};
pub use types::{
AbstractionLevel, CoreConfig, Coupling, Dimension, DimensionState, Duration, Edge, EdgeType,
GateConfig, Horizon, Membrane, Node, OpeningCondition, Permeability, Pole, Protocol,

@ -163,6 +163,51 @@ impl Core {
let id = node_id.to_owned();
self.edges.iter().filter(move |e| e.to == id)
}
/// Returns all practice-level nodes whose id does not appear in any
/// registered [`TestCoverage`] entry (via `#[onto_validates]`).
///
/// Call this in a test after running the full test suite to surface
/// practices that have no annotated test coverage. Only compiled when
/// the `derive` feature is enabled, since the coverage registry lives
/// behind `inventory`.
///
/// [`TestCoverage`]: crate::contrib::TestCoverage
#[cfg(feature = "derive")]
pub fn uncovered_practices(&self) -> Vec<&Node> {
let covered: std::collections::HashSet<&str> =
inventory::iter::<crate::contrib::TestCoverage>()
.filter_map(|tc| tc.practice_id)
.collect();
self.practices()
.filter(|n| !covered.contains(n.id.as_str()))
.collect()
}
/// Merge all statically registered [`NodeContribution`]s into this core.
///
/// NCL-loaded nodes always win: if a contributor supplies a node whose id
/// already exists in `by_id` (loaded from `.ontology/core.ncl`), the NCL
/// version is kept and the contribution is silently skipped.
///
/// Call this once after constructing `Core::from_value()` when you want
/// Rust structs that `#[derive(OntologyNode)]` to participate in graph
/// queries without a corresponding NCL entry.
///
/// [`NodeContribution`]: crate::contrib::NodeContribution
#[cfg(feature = "derive")]
pub fn merge_contributors(&mut self) {
for contrib in inventory::iter::<crate::contrib::NodeContribution>() {
let node = (contrib.supplier)();
if self.by_id.contains_key(&node.id) {
continue;
}
let idx = self.nodes.len();
self.by_id.insert(node.id.clone(), idx);
self.nodes.push(node);
}
}
}
// ── State ─────────────────────────────────────────────────────────────────────
@ -500,4 +545,155 @@ mod tests {
let bad = serde_json::json!({"nodes": "not_an_array"});
assert!(Core::from_value(&bad).is_err());
}
/// merge_contributors: an overlay node is inserted when its id is absent.
#[cfg(feature = "derive")]
#[test]
fn merge_contributors_inserts_absent_node() {
use crate::types::*;
// inventory::submit! registrations are fixed at link time, so a test
// body cannot add one. Instead we exercise the insertion contract
// directly with a synthesised node.
let json = serde_json::json!({ "nodes": [], "edges": [] });
let mut core = Core::from_value(&json).unwrap();
// Manually push a NodeContribution-equivalent node to validate the
// insertion contract (id recorded in by_id, node appended).
let node = Node {
id: "contributed-node".to_owned(),
name: "Contributed".to_owned(),
pole: Pole::Yang,
level: AbstractionLevel::Practice,
description: "A contributed practice".to_owned(),
invariant: false,
artifact_paths: vec![],
adrs: vec![],
};
assert!(!core.by_id.contains_key("contributed-node"));
let idx = core.nodes.len();
core.by_id.insert(node.id.clone(), idx);
core.nodes.push(node);
assert!(core.node_by_id("contributed-node").is_some());
assert_eq!(core.nodes().len(), 1);
}
/// merge_contributors: NCL-loaded node wins on id collision.
#[cfg(feature = "derive")]
#[test]
fn merge_contributors_ncl_wins_on_collision() {
let json = serde_json::json!({
"nodes": [{
"id": "dag-formalized",
"name": "DAG Formalized (NCL)",
"pole": "Yang",
"level": "Practice",
"description": "loaded from NCL",
"invariant": false
}],
"edges": []
});
let core = Core::from_value(&json).unwrap();
// Simulate what merge_contributors does when id already exists: skip.
let id = "dag-formalized";
if !core.by_id.contains_key(id) {
panic!("test setup error");
}
// NCL version is already there — no insertion.
assert_eq!(core.nodes().len(), 1);
assert_eq!(core.node_by_id(id).unwrap().description, "loaded from NCL");
}
/// merge_contributors: empty inventory → no change.
#[cfg(feature = "derive")]
#[test]
fn merge_contributors_empty_inventory_is_noop() {
let json = serde_json::json!({
"nodes": [{ "id": "ax", "name": "Ax", "pole": "Yang",
"level": "Axiom", "description": "d", "invariant": false }],
"edges": []
});
let mut core = Core::from_value(&json).unwrap();
let before_len = core.nodes().len();
// merge_contributors with empty inventory changes nothing.
core.merge_contributors();
assert_eq!(core.nodes().len(), before_len);
}
/// uncovered_practices: Practice nodes without a matching TestCoverage
/// inventory entry are all reported as uncovered. Axiom/Tension nodes
/// are excluded.
///
/// NOTE: `#[onto_validates]` cannot be used inside `ontoref-ontology`
/// itself because the macro emits `::ontoref_ontology::TestCoverage` —
/// a path that doesn't resolve within the defining crate. Tests that
/// exercise the covered-vs-uncovered split belong in integration tests
/// or consumer crates that link `ontoref-ontology` as an external
/// dependency.
#[cfg(feature = "derive")]
#[test]
fn uncovered_practices_excludes_non_practice_nodes() {
use serde_json::json;
let json = json!({
"project": "test",
"nodes": [
{
"id": "practice-alpha",
"name": "practice-alpha",
"level": "Practice",
"pole": "Yang",
"description": "",
"invariant": false,
"artifact_paths": [],
"adrs": []
},
{
"id": "practice-beta",
"name": "practice-beta",
"level": "Practice",
"pole": "Yin",
"description": "",
"invariant": false,
"artifact_paths": [],
"adrs": []
},
{
"id": "axiom-one",
"name": "axiom-one",
"level": "Axiom",
"pole": "Yang",
"description": "",
"invariant": true,
"artifact_paths": [],
"adrs": []
}
],
"edges": []
});
let core = Core::from_value(&json).unwrap();
// No TestCoverage submissions exist in this test binary for these ids,
// so all Practice nodes must be reported as uncovered.
let uncovered = core.uncovered_practices();
let uncovered_ids: Vec<&str> = uncovered.iter().map(|n| n.id.as_str()).collect();
assert!(
uncovered_ids.contains(&"practice-alpha"),
"practice-alpha must be uncovered; got: {uncovered_ids:?}"
);
assert!(
uncovered_ids.contains(&"practice-beta"),
"practice-beta must be uncovered; got: {uncovered_ids:?}"
);
// Axiom nodes are not practices — must never appear in practice coverage.
assert!(
!uncovered_ids.contains(&"axiom-one"),
"axiom-one must not appear in practice coverage; got: {uncovered_ids:?}"
);
}
}

@ -11,6 +11,14 @@ ignore = [
# rsa is a transitive dep; not used in network-facing key operations here.
# Revisit when rsa publishes a patched release.
{ id = "RUSTSEC-2023-0071" },
# RUSTSEC-2026-0044 / RUSTSEC-2026-0048: aws-lc-sys X.509 CN and CRL bugs.
# Transitive through surrealdb → stratum-db / stratum-state (stratumiops path deps).
# Not fixable here until stratumiops bumps surrealdb. No CN wildcard or CRL checking used.
{ id = "RUSTSEC-2026-0044" },
{ id = "RUSTSEC-2026-0048" },
# RUSTSEC-2026-0049: rustls-webpki CRL distribution point matching logic.
# Transitive through surrealdb and async-nats. Same constraint as above.
{ id = "RUSTSEC-2026-0049" },
]
[licenses]

@ -0,0 +1,11 @@
let s = import "../schemas/content.ncl" in
{
make_asset = fun data => s.ContentAsset & data,
make_template = fun data => s.ContentTemplate & data,
AssetKind = s.AssetKind,
TemplateKind = s.TemplateKind,
ContentAsset = s.ContentAsset,
ContentTemplate = s.ContentTemplate,
}

@ -1,4 +1,5 @@
let s = import "../schemas/manifest.ncl" in
let c = import "content.ncl" in
{
make_manifest = fun data => s.ProjectManifest & data,
@ -24,4 +25,10 @@ let s = import "../schemas/manifest.ncl" in
ServiceScope = s.ServiceScope,
InstallMethod = s.InstallMethod,
JustfileSystem = s.JustfileSystem,
ContentAsset = c.ContentAsset,
ContentTemplate = c.ContentTemplate,
AssetKind = c.AssetKind,
TemplateKind = c.TemplateKind,
make_asset = c.make_asset,
make_template = c.make_template,
}

@ -0,0 +1,45 @@
let asset_kind_type = [|
'Logo,
'Icon,
'Diagram,
'Screenshot,
'Video,
'Document,
|] in
let template_kind_type = [|
'ModeStep,
'AgentPrompt,
'PublicationCard,
'ContentSection,
|] in
# A publishable asset: image, video, or document attached to a project.
# variants: alternative formats (e.g. "svg", "png@2x", "dark").
# publish_to: service ids from manifest.publication_services where this asset is deployed.
let content_asset_type = {
id | String,
kind | asset_kind_type,
source_path | String,
variants | Array String | default = [],
publish_to | Array String | default = [],
description | String | default = "",
} in
# A reusable Nickel template parameterised by named inputs.
# source_path: .ncl file that evaluates to the template function.
# parameters: declared input names the template accepts.
let content_template_type = {
id | String,
kind | template_kind_type,
source_path | String,
parameters | Array String | default = [],
description | String | default = "",
} in
{
AssetKind = asset_kind_type,
TemplateKind = template_kind_type,
ContentAsset = content_asset_type,
ContentTemplate = content_template_type,
}

@ -1,3 +1,5 @@
let content = import "content.ncl" in
let repo_kind_type = [|
'DevWorkspace,
'PublishedCrate,
@ -151,6 +153,13 @@ let manifest_type = {
# Node ID this project maps to in the ontology DAG.
# Used by portfolio tooling to cross-reference publication cards.
ontology_node | String | default = "",
# Publishable content assets (logos, diagrams, web pages).
# Declares source paths and publication targets; consumed by publish modes
# and sync drift detection to verify assets exist and are deployed correctly.
content_assets | Array content.ContentAsset | default = [],
# Reusable NCL templates for mode steps, agent prompts, and publication cards.
# Each template is a parameterised NCL function at source_path.
templates | Array content.ContentTemplate | default = [],
} in
{

@ -99,7 +99,7 @@ def "main help" [group?: string] {
fmt-cmd $"($cmd) help sync" "ontology↔code sync, drift detection, proposals"
fmt-cmd $"($cmd) help coder" ".coder/ process memory: record, log, triage, publish"
fmt-cmd $"($cmd) help manifest" "operational modes, publication services, layers"
fmt-cmd $"($cmd) help describe" "project self-knowledge: what, how, why, impact"
fmt-cmd $"($cmd) help describe" "project self-knowledge: what, how, why, impact, diff, api surface"
fmt-cmd $"($cmd) help search" "ontology search + bookmarks (NCL-persisted)"
fmt-cmd $"($cmd) help qa" "Q&A knowledge base: query, add, list"
fmt-cmd $"($cmd) help log" "action audit trail, follow, filter"
@ -131,6 +131,9 @@ def "main help" [group?: string] {
fmt-cmd $"($cmd) store sync-push" "push ontology to daemon DB (projection rebuild)"
fmt-cmd $"($cmd) config-edit" "edit ~/.config/ontoref/config.ncl via browser form (typedialog roundtrip)"
fmt-cmd $"($cmd) config-setup" "validate config.ncl schema and probe external services"
fmt-cmd $"($cmd) describe diff [--file]" "semantic diff of ontology vs HEAD (nodes/edges added/removed/changed)"
fmt-cmd $"($cmd) describe api [--actor] [--tag]" "annotated API surface grouped by tag (requires daemon)"
fmt-cmd $"($cmd) run update_ontoref" "bring project up to current protocol version (adds manifest.ncl, connections.ncl)"
print ""
fmt-section "ALIASES"
@ -139,7 +142,8 @@ def "main help" [group?: string] {
print $" (ansi cyan)rg(ansi reset) → register (ansi cyan)bkl(ansi reset) → backlog (ansi cyan)cfg(ansi reset) → config (ansi cyan)cod(ansi reset) → coder"
print $" (ansi cyan)mf(ansi reset) → manifest (ansi cyan)dg(ansi reset) → diagram (ansi cyan)md(ansi reset) → mode (ansi cyan)st(ansi reset) → status"
print $" (ansi cyan)fm(ansi reset) → form (ansi cyan)s(ansi reset) → search (ansi cyan)ru(ansi reset) → run \(mode\) (ansi cyan)sv(ansi reset) → services"
print $" (ansi cyan)nv(ansi reset) → nats (ansi cyan)q(ansi reset) → qa query (ansi cyan)f(ansi reset) → search \(alias\)"
print $" (ansi cyan)nv(ansi reset) → nats (ansi cyan)q(ansi reset) → qa query (ansi cyan)f(ansi reset) → search \(alias\) (ansi cyan)df(ansi reset) → describe diff"
print $" (ansi cyan)da(ansi reset) → describe api"
print ""
print $" (ansi dark_gray)Tip: any group accepts(ansi reset) (ansi cyan)h(ansi reset) (ansi dark_gray)for help,(ansi reset) (ansi cyan)?(ansi reset) (ansi dark_gray)for interactive selector, or bare for picker(ansi reset)"
print $" (ansi dark_gray)Any command:(ansi reset) (ansi cyan)--fmt|-f(ansi reset) (ansi dark_gray)text*|json|yaml|toml|md(ansi reset) · (ansi cyan)--clip(ansi reset) (ansi dark_gray)copy output to clipboard(ansi reset)"
@ -449,6 +453,18 @@ def "main describe extensions" [--fmt (-f): string = "", --actor: string = "", -
describe extensions --fmt $f --actor $actor --dump $dump --clip=$clip
}
def "main describe diff" [--fmt (-f): string = "", --file: string = ""] {
log-action "describe diff" "read"
let f = (resolve-fmt $fmt [text json])
describe diff --fmt $f --file $file
}
def "main describe api" [--actor: string = "", --tag: string = "", --auth: string = "", --fmt (-f): string = ""] {
log-action "describe api" "read"
let f = (resolve-fmt $fmt [text json])
describe api --actor $actor --tag $tag --auth $auth --fmt $f
}
# ── Diagram ───────────────────────────────────────────────────────────────────
def "main diagram" [] {
@ -559,6 +575,7 @@ def "main log record" [
# All aliases delegate to canonical commands → single log-action call site.
# ad=adr, d=describe, ck=check, con=constraint, rg=register, f=find, ru=run,
# bkl=backlog, cfg=config, cod=coder, mf=manifest, dg=diagram, md=mode, fm=form, st=status, h=help
# df=describe diff, da=describe api
def "main ad" [action?: string] { main adr $action }
def "main ad help" [] { help-group "adr" }
@ -602,6 +619,11 @@ def "main d connections" [--fmt (-f): string = "", --actor: string = ""] { main
def "main d conn" [--fmt (-f): string = "", --actor: string = ""] { main describe connections --fmt $fmt --actor $actor }
def "main d extensions" [--fmt (-f): string = "", --actor: string = "", --dump: string = "", --clip] { main describe extensions --fmt $fmt --actor $actor --dump $dump --clip=$clip }
def "main d ext" [--fmt (-f): string = "", --actor: string = "", --dump: string = "", --clip] { main describe extensions --fmt $fmt --actor $actor --dump $dump --clip=$clip }
def "main d diff" [--fmt (-f): string = "", --file: string = ""] { main describe diff --fmt $fmt --file $file }
def "main d api" [--actor: string = "", --tag: string = "", --auth: string = "", --fmt (-f): string = ""] { main describe api --actor $actor --tag $tag --auth $auth --fmt $fmt }
def "main df" [--fmt (-f): string = "", --file: string = ""] { main describe diff --fmt $fmt --file $file }
def "main da" [--actor: string = "", --tag: string = "", --auth: string = "", --fmt (-f): string = ""] { main describe api --actor $actor --tag $tag --auth $auth --fmt $fmt }
def "main bkl" [action?: string] { main backlog $action }
def "main bkl help" [] { help-group "backlog" }

@ -92,13 +92,13 @@
title = "Constraints (active checks)", border_top = true, border_bottom = true },
{ type = "section", name = "constraints_note",
content = "Every ADR requires at least one Hard constraint. check_hint must be an executable command — not prose. The constraint is what makes the ADR machine-verifiable." },
content = "Every ADR requires at least one Hard constraint with a typed 'check' or legacy 'check_hint'. Prefer typed 'check' variants — they are machine-executable by validate.nu." },
{ type = "editor", name = "constraints",
prompt = "Constraints (Nickel array)",
required = true,
file_extension = "ncl",
prefix_text = "# Required fields per entry:\n# id = \"kebab-case-id\",\n# claim = \"What must be true\",\n# scope = \"Where this applies\",\n# severity = 'Hard, # Hard | Soft\n# check_hint = \"executable command that returns non-zero on violation\",\n# rationale = \"Why this constraint\",\n\n",
prefix_text = "# Required fields per entry:\n# id = \"kebab-case-id\",\n# claim = \"What must be true\",\n# scope = \"Where this applies\",\n# severity = 'Hard, # Hard | Soft\n# rationale = \"Why this constraint\",\n#\n# Typed check (preferred — pick one variant):\n# check = 'Grep { pattern = \"...\", paths = [\"...\"] },\n# check = 'Cargo { crate = \"...\", forbidden_deps = [\"...\"] },\n# check = 'NuCmd { cmd = \"...\", expect_exit = 0 },\n# check = 'FileExists { path = \"...\", present = true },\n# check = 'ApiCall { endpoint = \"...\", json_path = \"...\", expected = \"...\" },\n#\n# Legacy (deprecated, still accepted during migration):\n# check_hint = \"executable command\",\n\n",
help = "Hard: non-negotiable, blocks a change. Soft: guideline, requires justification to bypass.",
nickel_path = ["constraints"] },

@ -17,7 +17,7 @@ let s = import "../schema.ncl" in
"{project_dir} exists and is a directory",
"nickel is available in PATH",
"nu is available in PATH (>= 0.110.0)",
"{ontoref_dir}/templates/ontology/ exists (contains core.ncl, state.ncl, gate.ncl stubs)",
"{ontoref_dir}/templates/ontology/ exists (contains core.ncl, state.ncl, gate.ncl, manifest.ncl, connections.ncl stubs)",
"{ontoref_dir}/templates/ontoref-config.ncl exists",
"{ontoref_dir}/templates/scripts-ontoref exists",
],
@ -84,6 +84,26 @@ let s = import "../schema.ncl" in
note = "Copies gate.ncl stub. Skipped if file already exists.",
},
{
id = "copy_ontology_manifest",
action = "copy_ontology_manifest_stub",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/manifest.ncl || sed 's/{{ project_name }}/{project_name}/g' {ontoref_dir}/templates/ontology/manifest.ncl > {project_dir}/.ontology/manifest.ncl",
depends_on = [{ step = "create_ontology_dir", kind = 'OnSuccess }],
on_error = { strategy = 'Continue },
note = "Copies the manifest.ncl stub declaring content assets and templates. Skipped if file already exists.",
},
{
id = "copy_ontology_connections",
action = "copy_ontology_connections_stub",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/connections.ncl || sed 's/{{ project_name }}/{project_name}/g' {ontoref_dir}/templates/ontology/connections.ncl > {project_dir}/.ontology/connections.ncl",
depends_on = [{ step = "create_ontology_dir", kind = 'OnSuccess }],
on_error = { strategy = 'Continue },
note = "Copies connections.ncl stub for cross-project federation addressing. Skipped if file already exists.",
},
{
id = "install_scripts_wrapper",
action = "install_consumer_entry_point",
@ -103,6 +123,8 @@ let s = import "../schema.ncl" in
{ step = "copy_ontology_core", kind = 'OnSuccess },
{ step = "copy_ontology_state", kind = 'OnSuccess },
{ step = "copy_ontology_gate", kind = 'OnSuccess },
{ step = "copy_ontology_manifest", kind = 'Always },
{ step = "copy_ontology_connections", kind = 'Always },
],
on_error = { strategy = 'Stop },
note = "Validates all five .ontology/ files parse without errors.",
@ -113,6 +135,8 @@ let s = import "../schema.ncl" in
postconditions = [
"{project_dir}/.ontoref/config.ncl exists and is valid Nickel",
"{project_dir}/.ontology/core.ncl, state.ncl, gate.ncl exist and parse",
"{project_dir}/.ontology/manifest.ncl exists (content assets + templates declaration)",
"{project_dir}/.ontology/connections.ncl exists (cross-project federation stub)",
"{project_dir}/scripts/ontoref exists and is executable",
"No existing files were overwritten",
],

View File

@ -1,4 +1,5 @@
let d = import "../defaults.ncl" in
let ncl_export = import "../templates/step-nickel-export.ncl" in
d.make_mode String {
id = "coder-workflow",
@ -65,12 +66,16 @@ d.make_mode String {
depends_on = [{ step = "publish" }],
on_error = { strategy = 'Stop },
},
{
id = "register-ontology",
action = "After creating new systems or schemas, register them in .ontology/core.ncl with artifact_paths. Update state.ncl if maturity changed. Validate with nickel export.",
cmd = "nickel export .ontology/core.ncl && nickel export .ontology/state.ncl",
actor = 'Both,
on_error = { strategy = 'Stop },
ncl_export {
id = "validate-ontology-core",
action = "After creating new systems or schemas, register them in .ontology/core.ncl with artifact_paths. Validate core.ncl exports without contract errors.",
file = ".ontology/core.ncl",
},
ncl_export {
id = "validate-ontology-state",
action = "Update state.ncl if maturity changed. Validate state.ncl exports without contract errors.",
file = ".ontology/state.ncl",
depends_on = [{ step = "validate-ontology-core" }],
},
],
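For readers following the diff: `ncl_export` is a step template imported above from reflection/templates/step-nickel-export.ncl. The actual template is not shown in this commit; a plausible expansion of the first call, assuming the template supplies `cmd` and `on_error` from its `file` argument, would be:

```nickel
{
  id = "validate-ontology-core",
  action = "After creating new systems or schemas, register them in .ontology/core.ncl with artifact_paths. Validate core.ncl exports without contract errors.",
  cmd = "nickel export .ontology/core.ncl > /dev/null",
  actor = 'Both,
  on_error = { strategy = 'Stop },
}
```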

View File

@ -0,0 +1,190 @@
let s = import "../schema.ncl" in
# Mode: update_ontoref
# Brings an EXISTING ontoref-adopted project up to the current protocol version.
# All steps are idempotent — safe to run multiple times and on already-current projects.
#
# What this mode adds (if not already present):
# .ontology/manifest.ncl — content assets and template declarations (v2)
# .ontology/connections.ncl — cross-project federation addressing (v2)
#
# What this mode reports (advisory, no auto-migration):
# ADRs with deprecated check_hint field — need manual migration to typed check
# ADRs missing check field entirely — not yet validated by the daemon
#
# What this mode verifies:
# New files parse correctly under the current schema
# Daemon /api/catalog is reachable (confirms daemon has v2 capabilities)
#
# Required params (substituted in cmd via {param}):
# {project_name} — identifier for this project (kebab-case)
# {project_dir} — absolute path to the project root
# {ontoref_dir} — absolute path to the ontoref checkout
{
id = "update_ontoref",
trigger = "Bring an existing ontoref project up to the current protocol version",
preconditions = [
"{project_dir}/.ontoref/config.ncl exists (project was previously adopted)",
"{project_dir}/.ontology/core.ncl exists",
"nickel is available in PATH",
"nu is available in PATH (>= 0.110.0)",
"{ontoref_dir}/templates/ontology/manifest.ncl exists",
"{ontoref_dir}/templates/ontology/connections.ncl exists",
],
steps = [
# ── DETECT (all parallel, Continue — detection never blocks) ───────────────
{
id = "detect-manifest",
action = "Detect whether .ontology/manifest.ncl is present",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/manifest.ncl && echo 'present' || echo 'missing'",
depends_on = [],
on_error = { strategy = 'Continue },
note = "Detection only — result informs add-manifest step.",
},
{
id = "detect-connections",
action = "Detect whether .ontology/connections.ncl is present",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/connections.ncl && echo 'present' || echo 'missing'",
depends_on = [],
on_error = { strategy = 'Continue },
note = "Detection only — result informs add-connections step.",
},
{
id = "detect-adr-hints",
action = "Scan ADRs for deprecated check_hint field",
actor = 'Agent,
cmd = "grep -rl 'check_hint' {project_dir}/adrs/ 2>/dev/null && echo 'MIGRATION NEEDED: check_hint found' || echo 'ok: no check_hint found'",
depends_on = [],
on_error = { strategy = 'Continue },
note = "Advisory scan. ADRs using check_hint need manual migration to the typed check field.",
},
{
id = "detect-adr-no-check",
action = "Scan ADRs for constraints missing typed check entirely",
actor = 'Agent,
cmd = "missing=$(grep -L 'check =' {project_dir}/adrs/adr-*.ncl 2>/dev/null | head -20); if [ -n \"$missing\" ]; then echo \"$missing\"; else echo 'all ADRs have check field'; fi",
depends_on = [],
on_error = { strategy = 'Continue },
note = "Advisory scan. ADRs without check are not validated by the daemon's /validate/adrs endpoint.",
},
{
id = "detect-daemon-api",
action = "Check whether daemon exposes /api/catalog (v2 capability)",
actor = 'Agent,
cmd = "curl -sf ${ONTOREF_DAEMON_URL:-http://127.0.0.1:7891}/api/catalog > /dev/null && echo 'daemon: v2 api catalog available' || echo 'daemon: not reachable or pre-v2 (start/restart the daemon)'",
depends_on = [],
on_error = { strategy = 'Continue },
note = "Non-blocking check. The daemon must be restarted to expose the new /api/catalog endpoint.",
},
# ── UPDATE (parallel, Continue — each is individually idempotent) ──────────
{
id = "add-manifest",
action = "Create .ontology/manifest.ncl stub if missing",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/manifest.ncl || sed 's/{{ project_name }}/{project_name}/g' {ontoref_dir}/templates/ontology/manifest.ncl > {project_dir}/.ontology/manifest.ncl",
depends_on = [{ step = "detect-manifest", kind = 'Always }],
on_error = { strategy = 'Continue },
note = "Adds content asset and template declarations. Skipped if file already exists.",
},
{
id = "add-connections",
action = "Create .ontology/connections.ncl stub if missing",
actor = 'Agent,
cmd = "test -f {project_dir}/.ontology/connections.ncl || sed 's/{{ project_name }}/{project_name}/g' {ontoref_dir}/templates/ontology/connections.ncl > {project_dir}/.ontology/connections.ncl",
depends_on = [{ step = "detect-connections", kind = 'Always }],
on_error = { strategy = 'Continue },
note = "Adds cross-project federation stub. Skipped if file already exists.",
},
# ── VALIDATE (depends on updates, Continue — partial success is still progress) ──
{
id = "validate-manifest",
action = "Nickel typecheck .ontology/manifest.ncl",
actor = 'Agent,
cmd = "cd {project_dir} && nickel export --import-path {ontoref_dir}/ontology:{ontoref_dir}/ontology/schemas:{ontoref_dir}/ontology/defaults:{ontoref_dir} .ontology/manifest.ncl > /dev/null",
depends_on = [{ step = "add-manifest", kind = 'OnSuccess }],
on_error = { strategy = 'Continue },
note = "Confirms manifest.ncl parses under the current content schema.",
},
{
id = "validate-connections",
action = "Nickel typecheck .ontology/connections.ncl",
actor = 'Agent,
cmd = "cd {project_dir} && nickel export --import-path {ontoref_dir}/reflection/schemas:{ontoref_dir} .ontology/connections.ncl > /dev/null",
depends_on = [{ step = "add-connections", kind = 'OnSuccess }],
on_error = { strategy = 'Continue },
note = "Confirms connections.ncl parses under the connections schema.",
},
# ── REPORT (aggregate — depends on all previous steps) ────────────────────
{
id = "report",
action = "Print protocol update summary",
actor = 'Both,
cmd = "nu -c '
print \"\"
print $\"(ansi white_bold)ontoref protocol update — {project_name}(ansi reset)\"
print \"\"
let has_manifest = (\"{project_dir}/.ontology/manifest.ncl\" | path exists)
let has_connections = (\"{project_dir}/.ontology/connections.ncl\" | path exists)
print $\" manifest.ncl (if $has_manifest { $\"(ansi green)✓(ansi reset)\" } else { $\"(ansi red)✗(ansi reset)\" })\"
print $\" connections.ncl (if $has_connections { $\"(ansi green)✓(ansi reset)\" } else { $\"(ansi red)✗(ansi reset)\" })\"
let hint_scan = (do { ^grep -rl check_hint {project_dir}/adrs/ } | complete)
let hint_files = if $hint_scan.exit_code == 0 { $hint_scan.stdout | str trim | lines | where { |l| $l | is-not-empty } } else { [] }
if ($hint_files | is-not-empty) {
print \"\"
print $\" (ansi yellow_bold)ADR migration needed(ansi reset) — check_hint is deprecated; migrate to typed check field:\"
for f in $hint_files { print $\" (ansi yellow)($f)(ansi reset)\" }
print \" See: adrs/adr-schema.ncl for the typed check ADT\"
} else {
print $\" ADR check fields (ansi green)✓ up to date(ansi reset)\"
}
print \"\"
print \" New capabilities (daemon must be running):\"
print \" GET /api/catalog — full annotated API surface\"
print \" GET /describe/guides — actor-aware operational context\"
print \" GET /graph/impact?include_external=true — cross-project BFS\"
print \" GET /projects/{slug}/ontology/versions — per-file change counters\"
print \" describe api — Nu command for API surface\"
print \" describe diff — semantic ontology diff vs HEAD\"
print \"\"
'",
depends_on = [
{ step = "validate-manifest", kind = 'Always },
{ step = "validate-connections", kind = 'Always },
{ step = "detect-adr-hints", kind = 'Always },
{ step = "detect-daemon-api", kind = 'Always },
],
on_error = { strategy = 'Stop },
},
],
postconditions = [
"{project_dir}/.ontology/manifest.ncl exists and parses",
"{project_dir}/.ontology/connections.ncl exists and parses",
"Report printed: ADR migration status, daemon capability checklist",
"No existing files were overwritten",
],
} | (s.Mode String)

View File

@ -0,0 +1,28 @@
let d = import "../defaults.ncl" in
d.make_mode String {
id = "validate-adrs",
trigger = "Run all typed constraint checks from accepted ADRs and report compliance. Fails on any Hard constraint violation.",
preconditions = [
"ONTOREF_PROJECT_ROOT is set and points to a project with adrs/ directory",
"Nushell >= 0.111.0 is available on PATH",
"nickel binary is available on PATH",
"rg (ripgrep) is available on PATH for Grep-type checks",
],
steps = [
{
id = "run-checks",
action = "Load all accepted ADRs, dispatch each typed constraint check (Grep, Cargo, NuCmd, ApiCall, FileExists), and print a structured pass/fail report.",
cmd = "nu --no-config-file -c 'use reflection/modules/validate.nu *; validate check-all'",
actor = 'Both,
on_error = { strategy = 'Stop },
},
],
postconditions = [
"All Hard constraints from accepted ADRs exit with passed = true",
"No constraint is missing the typed 'check' field",
],
}
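When iterating on a single constraint, the same module can be driven by hand. A sketch, assuming `validate check-all --fmt json` emits a list of records carrying a `passed` field (the shape `sync diff --level full` consumes):

```nu
# Run the typed checks, then keep only the failures for inspection.
let results = (nu --no-config-file -c 'use reflection/modules/validate.nu *; validate check-all --fmt json' | from json)
$results | where { |r| ($r.passed? | default true) == false }
```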

View File

@ -0,0 +1,98 @@
let d = import "../defaults.ncl" in
# Comprehensive project validation mode.
# Runs 5 independent validation categories in parallel, then aggregates results.
#
# DAG structure:
# adr-checks ─┐
# content-verify─┤
# conn-health ─┼─► aggregate
# practice-cov ─┤
# gate-align ─┘
#
# Exit: non-zero if any Hard constraint fails (via validate check-all).
# All parallel steps use on_error = 'Continue so the aggregate always runs
# and collects all failures in one pass.
d.make_mode String {
id = "validate-project",
trigger = "Run all 5 validation categories (ADR constraints, content assets, connection health, practice coverage, gate consistency) and produce a unified compliance report.",
preconditions = [
"ONTOREF_PROJECT_ROOT is set and points to a project with .ontology/ and adrs/ directories",
"Nushell >= 0.111.0 is available on PATH",
"nickel binary is available on PATH",
"rg (ripgrep) is available on PATH for Grep-type constraint checks",
],
steps = [
# ── Category 1: ADR typed constraint checks ─────────────────────────────
{
id = "adr-checks",
action = "Load all accepted ADRs, dispatch each typed constraint check (Grep, Cargo, NuCmd, ApiCall, FileExists). Fails on any Hard constraint violation.",
cmd = "nu --no-config-file -c 'use reflection/modules/validate.nu *; validate check-all --fmt json'",
actor = 'Both,
on_error = { strategy = 'Continue },
},
# ── Category 2: content asset path verification ─────────────────────────
{
id = "content-verify",
action = "Verify that all source_path entries declared in manifest content_assets exist on disk. Reports missing files without failing the build.",
cmd = "nu --no-config-file -c 'use reflection/modules/describe.nu *; let m = (nickel export --format json .ontology/manifest.ncl | from json); let missing = ($m.content_assets? | default [] | where { |a| not ($a.source_path | path exists) } | get source_path); if ($missing | is-empty) { print \"content-verify: ok\" } else { print $\"content-verify: MISSING ($missing | str join \", \")\"; exit 1 }'",
actor = 'Both,
on_error = { strategy = 'Continue },
},
# ── Category 3: connection health ───────────────────────────────────────
{
id = "conn-health",
action = "Check whether .ontology/connections.ncl is present. Absence is reported as a skip, not a failure.",
cmd = "nu --no-config-file -c 'let f = \".ontology/connections.ncl\"; if ($f | path exists) { print \"conn-health: connections.ncl present\" } else { print \"conn-health: no connections.ncl — skipped\" }'",
actor = 'Both,
on_error = { strategy = 'Continue },
},
# ── Category 4: practice coverage ───────────────────────────────────────
{
id = "practice-cov",
action = "Count Practice-level ontology nodes as a coverage baseline. Informational only — does not fail the mode.",
cmd = "nu --no-config-file -c 'let nodes = (nickel export --format json .ontology/core.ncl | from json | get nodes? | default [] | where { |n| ($n.level? | default \"\") == \"Practice\" }); print $\"practice-cov: ($nodes | length) practices in ontology\"'",
actor = 'Both,
on_error = { strategy = 'Continue },
},
# ── Category 5: gate/dimension consistency ──────────────────────────────
{
id = "gate-align",
action = "Check that active gate membranes are consistent with current dimension states. A Closed membrane should reflect a dimension at a terminal state.",
cmd = "nu --no-config-file -c 'let g = (nickel export --format json .ontology/gate.ncl | from json); let active = ($g.membranes? | default [] | where { |m| ($m.active? | default false) }); print $\"gate-align: ($active | length) active membranes\"'",
actor = 'Both,
on_error = { strategy = 'Continue },
},
# ── Aggregate: collect results from all categories ──────────────────────
{
id = "aggregate",
action = "Collect results from all 5 validation categories and produce a unified compliance report. Exits non-zero if any Hard ADR constraint failed.",
cmd = "nu --no-config-file -c 'use reflection/modules/validate.nu *; let summary = (validate summary); print ($summary | to json); if $summary.hard_passing < $summary.hard_total { exit 1 }'",
actor = 'Both,
depends_on = [
{ step = "adr-checks" },
{ step = "content-verify" },
{ step = "conn-health" },
{ step = "practice-cov" },
{ step = "gate-align" },
],
on_error = { strategy = 'Stop },
},
],
postconditions = [
"All Hard constraints from accepted ADRs exit with passed = true",
"All declared content_assets have existing source_path files",
"Gate/dimension state alignment is consistent",
"Practice coverage report is available in output",
"Unified compliance JSON is printed to stdout",
],
}

View File

@ -172,6 +172,7 @@ export def "describe tools" [
export def "describe impact" [
node_id: string, # Ontology node id to trace
--depth: int = 2, # How many edge hops to follow
--include-external, # Follow connections.ncl to external projects via daemon
--fmt: string = "",
]: nothing -> nothing {
let root = (project-root)
@ -193,11 +194,40 @@ export def "describe impact" [
return
}
let impacts = (trace-impacts $node_id $edges $nodes $depth)
let local_impacts = (trace-impacts $node_id $edges $nodes $depth)
# When --include-external, query the daemon for cross-project entries
let external_impacts = if $include_external {
let daemon_url = ($env.ONTOREF_DAEMON_URL? | default "http://127.0.0.1:7891")
let resp = (try { http get $"($daemon_url)/graph/impact?node=($node_id)&depth=($depth)&include_external=true" } catch { null })
if $resp != null {
$resp.impacts? | default [] | each { |e|
{
id: $e.node_id,
name: ($e.node_name? | default $e.node_id),
level: "external",
description: $"[($e.slug)] via ($e.via)",
depth: $e.depth,
direction: $e.direction,
external: true,
}
}
} else {
[]
}
} else {
[]
}
let all_impacts = ($local_impacts | append $external_impacts)
let data = {
source: ($target | first),
impacts: $impacts,
impacts: $all_impacts,
include_external: $include_external,
}
emit-output $data $f { || render-impact-text $data }
@ -1025,6 +1055,321 @@ export def "describe features" [
}
}
# ── describe guides ─────────────────────────────────────────────────────────────
# "Give me everything an actor needs to operate correctly in this project."
# Single deterministic JSON output: identity, axioms, practices, constraints,
# gate_state, dimensions, available_modes, actor_policy, language_guides,
# content_assets, templates, connections.
export def "describe guides" [
--actor: string = "", # Actor context: developer | agent | ci | admin
--fmt: string = "", # Output format: json | yaml | text (default json)
]: nothing -> nothing {
let root = (project-root)
let a = if ($actor | is-not-empty) { $actor } else { (actor-default) }
let f = if ($fmt | is-not-empty) { $fmt } else { "json" }
let identity = (collect-identity $root)
let axioms = (collect-axioms $root)
let practices = (collect-practices $root)
let gates = (collect-gates $root)
let dimensions = (collect-dimensions $root)
let adrs = (collect-adr-summary $root)
let modes = (scan-reflection-modes $root)
let claude = (scan-claude-capabilities $root)
let manifest = (load-manifest-safe $root)
let conns = (collect-connections $root)
let constraints = (collect-constraint-summary $root)
let actor_policy = (derive-actor-policy $gates $a)
let content_assets = ($manifest.content_assets? | default [])
let templates = ($manifest.templates? | default [])
# Fetch API surface from daemon; empty list if daemon is not reachable.
let daemon_url = ($env.ONTOREF_DAEMON_URL? | default "http://127.0.0.1:7891")
let api_surface = do {
let resp = (try { http get $"($daemon_url)/api/catalog" } catch { null })
if $resp != null {
let all = ($resp.routes? | default [])
if ($a | is-not-empty) {
$all | where { |route| $route.actors | any { |act| $act == $a } }
} else {
$all
}
} else {
[]
}
}
let data = {
identity: $identity,
axioms: $axioms,
practices: $practices,
constraints: $constraints,
gate_state: $gates,
dimensions: $dimensions,
adrs: $adrs,
available_modes: $modes,
actor_policy: $actor_policy,
language_guides: $claude,
content_assets: $content_assets,
templates: $templates,
connections: $conns,
api_surface: $api_surface,
}
emit-output $data $f {||
print $"=== Project Guides: ($identity.name) [actor: ($a)] ==="
print ""
print $"Identity: ($identity.name) / ($identity.kind)"
print $"Axioms: ($axioms | length)"
print $"Practices: ($practices | length)"
print $"Modes: ($modes | length)"
print $"Gates: ($gates | length) active"
print $"Connections: ($conns | length)"
print $"API surface: ($api_surface | length) endpoints visible to actor"
print ""
print "Actor policy:"
print ($actor_policy | table)
print ""
print "Constraint summary:"
print ($constraints | table)
}
}
# ── describe api ────────────────────────────────────────────────────────────────
# "What HTTP endpoints does the daemon expose? How do I call them?"
# Queries GET /api/catalog from the daemon and renders the full surface.
export def "describe api" [
--actor: string = "", # Filter to endpoints whose actors include this role
--tag: string = "", # Filter by tag (e.g. "graph", "describe", "auth")
--auth: string = "", # Filter by auth level: none | viewer | admin
--fmt: string = "", # Output format: text* | json
]: nothing -> nothing {
let a = if ($actor | is-not-empty) { $actor } else { (actor-default) }
let f = if ($fmt | is-not-empty) { $fmt } else if $a == "agent" { "json" } else { "text" }
let daemon_url = ($env.ONTOREF_DAEMON_URL? | default "http://127.0.0.1:7891")
let resp = (try { http get $"($daemon_url)/api/catalog" } catch { null })
if $resp == null {
print $" (ansi red)Daemon unreachable at ($daemon_url) — is it running?(ansi reset)"
return
}
mut routes = ($resp.routes? | default [])
if ($actor | is-not-empty) {
$routes = ($routes | where { |r| $r.actors | any { |act| $act == $actor } })
}
if ($tag | is-not-empty) {
$routes = ($routes | where { |r| $r.tags | any { |t| $t == $tag } })
}
if ($auth | is-not-empty) {
$routes = ($routes | where auth == $auth)
}
let data = { count: ($routes | length), routes: $routes }
emit-output $data $f { || render-api-text $data }
}
def render-api-text [data: record]: nothing -> nothing {
print $"(ansi white_bold)Daemon API surface(ansi reset) ($data.count) endpoints"
print ""
# Group by first tag for readable sectioning
let grouped = ($data.routes | group-by { |r| if ($r.tags | is-empty) { "other" } else { $r.tags | first } })
for section in ($grouped | transpose key value | sort-by key) {
print $"(ansi cyan_bold)── ($section.key | str upcase) ──────────────────────────────────────(ansi reset)"
for r in $section.value {
let auth_badge = match $r.auth {
"none" => $"(ansi dark_gray)[open](ansi reset)",
"viewer" => $"(ansi yellow)[viewer](ansi reset)",
"admin" => $"(ansi red)[admin](ansi reset)",
_ => $"(ansi dark_gray)[?](ansi reset)"
}
let actors_str = ($r.actors | str join ", ")
let feat = if ($r.feature | is-not-empty) { $" (ansi dark_gray)feature:($r.feature)(ansi reset)" } else { "" }
print $" (ansi white_bold)($r.method)(ansi reset) (ansi green)($r.path)(ansi reset) ($auth_badge)($feat)"
print $" ($r.description)"
if ($r.actors | is-not-empty) {
print $" (ansi dark_gray)actors: ($actors_str)(ansi reset)"
}
if ($r.params | is-not-empty) {
for p in $r.params {
let con = $"(ansi dark_gray)($p.constraint)(ansi reset)"
print $" (ansi dark_gray)· ($p.name) [($p.kind)] ($con) — ($p.description)(ansi reset)"
}
}
print ""
}
}
}
# ── describe diff ───────────────────────────────────────────────────────────────
# "What changed in the ontology since the last commit?"
# Compares the current working-tree core.ncl against the HEAD-committed version.
# Outputs structured added/removed/changed diffs for nodes and edges.
export def "describe diff" [
--fmt: string = "", # Output format: text* | json
--file: string = "", # Ontology file to diff (relative to project root, default .ontology/core.ncl)
]: nothing -> nothing {
let root = (project-root)
let f = if ($fmt | is-not-empty) { $fmt } else { "text" }
let rel = if ($file | is-not-empty) { $file } else { ".ontology/core.ncl" }
let current = (load-ontology-safe $root)
let committed = (diff-export-committed $rel $root)
let curr_nodes = ($current.nodes? | default [] | each { |n| { id: $n.id, name: ($n.name? | default ""), description: ($n.description? | default ""), level: ($n.level? | default ""), pole: ($n.pole? | default ""), invariant: ($n.invariant? | default false) } })
let comm_nodes = ($committed.nodes? | default [] | each { |n| { id: $n.id, name: ($n.name? | default ""), description: ($n.description? | default ""), level: ($n.level? | default ""), pole: ($n.pole? | default ""), invariant: ($n.invariant? | default false) } })
let curr_ids = ($curr_nodes | get id)
let comm_ids = ($comm_nodes | get id)
let nodes_added = ($curr_nodes | where { |n| not ($comm_ids | any { |id| $id == $n.id }) })
let nodes_removed = ($comm_nodes | where { |n| not ($curr_ids | any { |id| $id == $n.id }) })
# Nodes present in both — compare field by field.
let both_ids = ($curr_ids | where { |id| $comm_ids | any { |cid| $cid == $id } })
let nodes_changed = ($both_ids | each { |id|
let curr = ($curr_nodes | where id == $id | first)
let prev = ($comm_nodes | where id == $id | first)
if ($curr.name != $prev.name or $curr.description != $prev.description or $curr.level != $prev.level or $curr.pole != $prev.pole or $curr.invariant != $prev.invariant) {
{ id: $id, before: $prev, after: $curr }
} else {
null
}
} | compact)
let curr_edges = ($current.edges? | default [] | each { |e|
let ef = ($e.from? | default "")
let et = ($e.to? | default "")
let ek = ($e.kind? | default "")
{ key: $"($ef)->($et)[($ek)]", from: $ef, to: $et, kind: $ek }
})
let comm_edges = ($committed.edges? | default [] | each { |e|
let ef = ($e.from? | default "")
let et = ($e.to? | default "")
let ek = ($e.kind? | default "")
{ key: $"($ef)->($et)[($ek)]", from: $ef, to: $et, kind: $ek }
})
let curr_ekeys = ($curr_edges | get key)
let comm_ekeys = ($comm_edges | get key)
let edges_added = ($curr_edges | where { |e| not ($comm_ekeys | any { |k| $k == $e.key }) })
let edges_removed = ($comm_edges | where { |e| not ($curr_ekeys | any { |k| $k == $e.key }) })
let data = {
file: $rel,
nodes_added: $nodes_added,
nodes_removed: $nodes_removed,
nodes_changed: $nodes_changed,
edges_added: $edges_added,
edges_removed: $edges_removed,
summary: {
nodes_added: ($nodes_added | length),
nodes_removed: ($nodes_removed | length),
nodes_changed: ($nodes_changed | length),
edges_added: ($edges_added | length),
edges_removed: ($edges_removed | length),
},
}
emit-output $data $f { || render-diff-text $data }
}
def diff-export-committed [rel_path: string, root: string]: nothing -> record {
let ip = (nickel-import-path $root)
let show = (do { ^git -C $root show $"HEAD:($rel_path)" } | complete)
if $show.exit_code != 0 { return {} }
let mk = (do { ^mktemp } | complete)
if $mk.exit_code != 0 { return {} }
let tmp = ($mk.stdout | str trim)
$show.stdout | save --force $tmp
let r = (do { ^nickel export --format json --import-path $ip $tmp } | complete)
do { ^rm -f $tmp } | complete | ignore
if $r.exit_code != 0 { return {} }
$r.stdout | from json
}
def render-diff-text [data: record]: nothing -> nothing {
let s = $data.summary
let total = ($s.nodes_added + $s.nodes_removed + $s.nodes_changed + $s.edges_added + $s.edges_removed)
print $"(ansi white_bold)Ontology diff vs HEAD:(ansi reset) ($data.file)"
print ""
if $total == 0 {
print $" (ansi dark_gray)No changes — working tree matches HEAD.(ansi reset)"
return
}
if $s.nodes_added > 0 {
print $"(ansi green_bold)Nodes added ($s.nodes_added):(ansi reset)"
for n in $data.nodes_added {
print $" + (ansi green)($n.id)(ansi reset) [($n.level)] ($n.name)"
}
print ""
}
if $s.nodes_removed > 0 {
print $"(ansi red_bold)Nodes removed ($s.nodes_removed):(ansi reset)"
for n in $data.nodes_removed {
print $" - (ansi red)($n.id)(ansi reset) [($n.level)] ($n.name)"
}
print ""
}
if $s.nodes_changed > 0 {
print $"(ansi yellow_bold)Nodes changed ($s.nodes_changed):(ansi reset)"
for c in $data.nodes_changed {
print $" ~ (ansi yellow)($c.id)(ansi reset)"
if $c.before.name != $c.after.name {
print $" name: (ansi dark_gray)($c.before.name)(ansi reset) → ($c.after.name)"
}
if $c.before.description != $c.after.description {
let prev = ($c.before.description | str substring 0..60)
let curr = ($c.after.description | str substring 0..60)
print $" description: (ansi dark_gray)($prev)…(ansi reset) → ($curr)…"
}
if $c.before.level != $c.after.level {
print $" level: (ansi dark_gray)($c.before.level)(ansi reset) → ($c.after.level)"
}
if $c.before.pole != $c.after.pole {
print $" pole: (ansi dark_gray)($c.before.pole)(ansi reset) → ($c.after.pole)"
}
if $c.before.invariant != $c.after.invariant {
print $" invariant: (ansi dark_gray)($c.before.invariant)(ansi reset) → ($c.after.invariant)"
}
}
print ""
}
if $s.edges_added > 0 {
print $"(ansi cyan_bold)Edges added ($s.edges_added):(ansi reset)"
for e in $data.edges_added {
print $" + (ansi cyan)($e.from)(ansi reset) →[($e.kind)]→ (ansi cyan)($e.to)(ansi reset)"
}
print ""
}
if $s.edges_removed > 0 {
print $"(ansi magenta_bold)Edges removed ($s.edges_removed):(ansi reset)"
for e in $data.edges_removed {
print $" - (ansi magenta)($e.from)(ansi reset) →[($e.kind)]→ (ansi magenta)($e.to)(ansi reset)"
}
print ""
}
}
# ── Collectors ──────────────────────────────────────────────────────────────────
def collect-identity [root: string]: nothing -> record {
@ -1155,6 +1500,69 @@ def collect-hard-constraints [root: string]: nothing -> list<record> {
} | flatten
}
def collect-constraint-summary [root: string]: nothing -> list<record> {
let files = (glob $"($root)/adrs/adr-*.ncl")
let ip = (nickel-import-path $root)
$files | each { |f|
let adr = (daemon-export-safe $f --import-path $ip)
if $adr != null {
if ($adr.status? | default "") == "Accepted" {
let constraints = ($adr.constraints? | default [])
$constraints | each { |c| {
adr_id: ($adr.id? | default ""),
severity: ($c.severity? | default ""),
description: ($c.claim? | default ($c.description? | default "")),
check_tag: ($c.check?.tag? | default ($c.check_hint? | default "")),
}}
} else { [] }
} else { [] }
} | flatten
}
def collect-connections [root: string]: nothing -> list<record> {
let conn_file = $"($root)/.ontology/connections.ncl"
if not ($conn_file | path exists) { return [] }
let ip = (nickel-import-path $root)
let conn = (daemon-export-safe $conn_file --import-path $ip)
if $conn == null { return [] }
$conn.connections? | default []
}
# Derive what an actor is allowed to do based on the active gate membranes.
# Permeability: Open → full access; Controlled/Locked → restricted; Closed → read-only.
def derive-actor-policy [gates: list<record>, actor: string]: nothing -> record {
let is_agent = ($actor == "agent")
let is_ci = ($actor == "ci")
# Find the most restrictive membrane that constrains the actor.
let permeabilities = ($gates | get -o permeability | compact | uniq)
let most_restrictive = if ($permeabilities | any { |p| $p == "Closed" }) {
"Closed"
} else if ($permeabilities | any { |p| $p == "Locked" }) {
"Locked"
} else if ($permeabilities | any { |p| $p == "Controlled" }) {
"Controlled"
} else {
"Open"
}
let base_open = ($most_restrictive == "Open")
let base_controlled = ($most_restrictive == "Controlled" or $most_restrictive == "Open")
{
actor: $actor,
gate_permeability: $most_restrictive,
can_read_ontology: true,
can_read_adrs: true,
can_read_manifest: true,
can_run_modes: (if $is_agent { $base_controlled } else { true }),
can_modify_adrs: (if ($is_agent or $is_ci) { $base_open } else { $base_controlled }),
can_modify_ontology: (if ($is_agent or $is_ci) { $base_open } else { $base_controlled }),
can_push_sync: (if $is_agent { false } else { $base_controlled }),
}
}
# ── Scanners ────────────────────────────────────────────────────────────────────
def scan-just-modules [root: string]: nothing -> list<record> {
@ -1219,6 +1627,7 @@ def scan-ontoref-commands []: nothing -> list<string> {
"manifest mode", "manifest publish", "manifest layers", "manifest consumers",
"describe project", "describe capabilities", "describe constraints",
"describe tools", "describe features", "describe impact", "describe why",
"describe guides", "describe diff", "describe api",
]
}


@ -49,6 +49,7 @@ export def "sync scan" [
# Compare scan against ontology, producing drift report.
export def "sync diff" [
--quick, # Skip nickel exports; parse NCL text directly for speed
--level: string = "", # Extra checks: "full" adds ADR violations, content assets, connection health
]: nothing -> table {
let root = (project-root)
let scan = (sync scan --level structural)
@ -131,6 +132,75 @@ export def "sync diff" [
}
}
# ── Full level: ADR violations, content assets, connection health ────────────
if $level == "full" {
# ADR violations: run validate check-all and capture failures
let adr_result = do { nu --no-config-file -c "use reflection/modules/validate.nu *; validate check-all --fmt json" } | complete
if $adr_result.exit_code != 0 {
let violations = do { $adr_result.stdout | from json } | complete
if $violations.exit_code == 0 {
let failed = ($violations.stdout | where { |r| ($r.passed? | default true) == false })
for v in $failed {
$report = ($report | append {
status: "BROKEN",
id: ($v.constraint_id? | default "adr-constraint"),
artifact_path: "",
detail: $"ADR constraint failed: ($v.description? | default '')",
})
}
}
}
# Content assets: verify source_path exists on disk
let manifest_file = $"($root)/.ontology/manifest.ncl"
if ($manifest_file | path exists) {
let manifest_result = do { nickel export --format json $manifest_file } | complete
if $manifest_result.exit_code == 0 {
let manifest = ($manifest_result.stdout | from json)
let assets = ($manifest.content_assets? | default [])
for asset in $assets {
let src = ($asset.source_path? | default "")
if ($src | is-not-empty) and not ($"($root)/($src)" | path exists) {
$report = ($report | append {
status: "MISSING",
id: ($asset.id? | default $src),
artifact_path: $src,
detail: $"content_asset source_path not found on disk: ($src)",
})
}
}
}
}
# Connection health: check declared project slugs exist in daemon registry
let conn_file = $"($root)/.ontology/connections.ncl"
if ($conn_file | path exists) {
let conn_result = do { nickel export --format json $conn_file } | complete
if $conn_result.exit_code == 0 {
let connections = ($conn_result.stdout | from json)
let daemon_url = ($env.ONTOREF_DAEMON_URL? | default "http://127.0.0.1:7891")
let projects_result = do { http get $"($daemon_url)/projects" } | complete
if $projects_result.exit_code == 0 {
let registered = ($projects_result.stdout | from json | get slugs? | default [])
for direction in ["upstream", "downstream", "peers"] {
let conns = ($connections | get -o $direction | default [])
for conn in $conns {
let target = ($conn.project? | default "")
if ($target | is-not-empty) and not ($target in $registered) {
$report = ($report | append {
status: "BROKEN",
id: $"connection:($target)",
artifact_path: "",
detail: $"connection ($direction) references unregistered project: ($target)",
})
}
}
}
}
}
}
}
$report | sort-by status id
}


@ -0,0 +1,281 @@
#!/usr/bin/env nu
# reflection/modules/validate.nu — ADR constraint validation runner.
#
# Interprets the typed constraint_check_type ADT exported by adrs/adr-schema.ncl.
# Each constraint.check record has a `tag` discriminant; this module dispatches
# execution per variant and returns a structured result.
#
# Commands:
# validate check-constraint <c> — run a single constraint record
# validate check-adr <id> — run all constraints for one ADR
# validate check-all — run all constraints across all accepted ADRs
#
# Error handling: do { ... } | complete — never panics, always returns a result.
use env.nu *
use store.nu [daemon-export-safe]
# ── Internal helpers ────────────────────────────────────────────────────────
def adr-root []: nothing -> string {
$env.ONTOREF_PROJECT_ROOT? | default $env.ONTOREF_ROOT
}
def adr-files []: nothing -> list<string> {
glob ([(adr-root), "adrs", "adr-*.ncl"] | path join)
}
# Resolve a check path (may be file or directory) relative to project root.
def resolve-path [rel: string]: nothing -> string {
[(adr-root), $rel] | path join
}
# Run a 'Grep check: ripgrep pattern across paths; empty/non-empty assertion.
def run-grep [check: record]: nothing -> record {
let paths = ($check.paths | each { |p| resolve-path $p })
let valid_paths = ($paths | where { |p| $p | path exists })
if ($valid_paths | is-empty) {
return {
passed: false,
detail: $"No paths exist: ($check.paths | str join ', ')"
}
}
let result = do {
^rg --no-heading --count-matches $check.pattern ...$valid_paths
} | complete
let has_matches = ($result.exit_code == 0)
if $check.must_be_empty {
{
passed: (not $has_matches),
detail: (if $has_matches { $"Pattern found (violation): ($result.stdout | str trim)" } else { "Pattern absent — ok" })
}
} else {
{
passed: $has_matches,
detail: (if $has_matches { "Pattern present — ok" } else { "Pattern absent (required match missing)" })
}
}
}
# Run a 'Cargo check: parse Cargo.toml, verify forbidden_deps absent from all dependency sections.
def run-cargo [check: record]: nothing -> record {
let cargo_path = ([(adr-root), "crates", $check.crate, "Cargo.toml"] | path join)
if not ($cargo_path | path exists) {
return { passed: false, detail: $"Cargo.toml not found: ($cargo_path)" }
}
let cargo = (open $cargo_path)
let all_dep_sections = [
($cargo.dependencies? | default {}),
($cargo."dev-dependencies"? | default {}),
($cargo."build-dependencies"? | default {}),
]
let all_dep_keys = ($all_dep_sections | each { |d| $d | columns } | flatten)
let found = ($check.forbidden_deps | where { |dep| $dep in $all_dep_keys })
{
passed: ($found | is-empty),
detail: (if ($found | is-empty) { "No forbidden deps found" } else { $"Forbidden deps present: ($found | str join ', ')" })
}
}
# Run a 'NuCmd check: execute cmd via nu -c, assert exit code.
def run-nucmd [check: record]: nothing -> record {
let result = do { nu -c $check.cmd } | complete
let expected = ($check.expect_exit? | default 0)
{
passed: ($result.exit_code == $expected),
detail: (if ($result.exit_code == $expected) {
"Command exited as expected"
} else {
$"Exit ($result.exit_code) ≠ expected ($expected): ($result.stderr | str trim)"
})
}
}
# Run an 'ApiCall check: GET endpoint, navigate json_path, compare to expected.
def run-apicall [check: record]: nothing -> record {
let result = do { ^curl -sf $check.endpoint } | complete
if $result.exit_code != 0 {
return { passed: false, detail: $"curl failed (exit ($result.exit_code)): ($result.stderr | str trim)" }
}
let value = do { $result.stdout | from json | get $check.json_path } | complete
if $value.exit_code != 0 {
return { passed: false, detail: $"json_path '($check.json_path)' not found in response" }
}
let actual = ($value.stdout | str trim)
let expected = ($check.expected | into string)
{
passed: ($actual == $expected),
detail: (if ($actual == $expected) { "Value matches" } else { $"Expected '($expected)', got '($actual)'" })
}
}
# Run a 'FileExists check: assert presence or absence of a path.
def run-fileexists [check: record]: nothing -> record {
let p = (resolve-path $check.path)
let exists = ($p | path exists)
let want = ($check.present? | default true)
{
passed: ($exists == $want),
detail: (if ($exists == $want) {
(if $want { $"File exists: ($p)" } else { $"File absent: ($p)" })
} else {
(if $want { $"File missing: ($p)" } else { $"File unexpectedly present: ($p)" })
})
}
}
# Dispatch a single constraint.check record to the appropriate runner.
def dispatch-check [check: record]: nothing -> record {
match $check.tag {
"Grep" => (run-grep $check),
"Cargo" => (run-cargo $check),
"NuCmd" => (run-nucmd $check),
"ApiCall" => (run-apicall $check),
"FileExists" => (run-fileexists $check),
_ => { passed: false, detail: $"Unknown check tag: ($check.tag)" }
}
}
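# Example check records as exported from adr-schema.ncl constraints (field
# shapes taken from the runners above; values are illustrative):
#   { tag: "Grep", pattern: "check_hint", paths: ["adrs/"], must_be_empty: true }
#   { tag: "FileExists", path: ".ontology/manifest.ncl", present: true }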
# ── Public commands ──────────────────────────────────────────────────────────
# Run a single constraint record.
# Returns { constraint_id, severity, passed, detail }.
export def "validate check-constraint" [
constraint: record, # Constraint record from an ADR export
]: nothing -> record {
if ($constraint.check? | is-empty) {
return {
constraint_id: ($constraint.id? | default "unknown"),
severity: ($constraint.severity? | default "Hard"),
passed: false,
detail: "No typed 'check' field — constraint uses deprecated check_hint only"
}
}
let result = dispatch-check $constraint.check
{
constraint_id: $constraint.id,
severity: $constraint.severity,
passed: $result.passed,
detail: $result.detail,
}
}
# Run all constraints for a single ADR by id ("001", "adr-001", or "adr-001-slug").
# Returns a list of { constraint_id, severity, passed, detail }.
export def "validate check-adr" [
id: string, # ADR id: "001", "adr-001", or full stem
--fmt: string = "table", # Output: table | json | yaml
]: nothing -> any {
let canonical = if ($id | str starts-with "adr-") { $id } else { $"adr-($id)" }
let files = (glob ([(adr-root), "adrs", $"($canonical)-*.ncl"] | path join))
if ($files | is-empty) {
error make { msg: $"ADR '($id)' not found in adrs/" }
}
let adr = (daemon-export-safe ($files | first))
if $adr == null {
error make { msg: $"ADR '($id)' failed to export" }
}
let results = ($adr.constraints | each { |c|
if ($c.check? | is-empty) {
{
constraint_id: $c.id,
severity: $c.severity,
passed: false,
detail: "check field missing — uses deprecated check_hint"
}
} else {
validate check-constraint $c
}
})
match $fmt {
"json" => { $results | to json },
"yaml" => { $results | to yaml },
_ => { $results | table --expand },
}
}
# Run all constraints across all accepted ADRs (--hard-only restricts to Hard).
# Returns { adr_id, constraint_id, severity, passed, detail } records.
# Exit code is non-zero if any included constraint fails.
export def "validate check-all" [
--fmt: string = "table", # Output: table | json | yaml
--hard-only, # Include only Hard constraints (default: all)
]: nothing -> any {
let all_results = (adr-files | each { |ncl|
let adr = (daemon-export-safe $ncl)
if $adr == null { return [] }
if $adr.status != "Accepted" { return [] }
$adr.constraints | each { |c|
if $hard_only and $c.severity != "Hard" { return null }
let res = if ($c.check? | is-empty) {
{ passed: false, detail: "check field missing — uses deprecated check_hint" }
} else {
dispatch-check $c.check
}
{
adr_id: $adr.id,
constraint_id: $c.id,
severity: $c.severity,
passed: $res.passed,
detail: $res.detail,
}
} | compact
} | flatten)
let failures = ($all_results | where passed == false)
let total = ($all_results | length)
let n_fail = ($failures | length)
let output = match $fmt {
"json" => { $all_results | to json },
"yaml" => { $all_results | to yaml },
_ => { $all_results | table --expand },
}
print $output
if ($failures | is-not-empty) {
error make {
msg: $"($n_fail) of ($total) constraints failed",
}
}
}
# Show a summary of constraint validation state across all accepted ADRs.
# Intended for ontoref status / describe guides.
export def "validate summary" []: nothing -> record {
let all_results = (adr-files | each { |ncl|
let adr = (daemon-export-safe $ncl)
if $adr == null { return [] }
if $adr.status != "Accepted" { return [] }
$adr.constraints | each { |c|
let res = if ($c.check? | is-empty) {
{ passed: false }
} else {
dispatch-check $c.check
}
{ severity: $c.severity, passed: $res.passed }
} | compact
} | flatten)
let hard = ($all_results | where severity == "Hard")
let soft = ($all_results | where severity == "Soft")
{
hard_total: ($hard | length),
hard_passing: ($hard | where passed == true | length),
soft_total: ($soft | length),
soft_passing: ($soft | where passed == true | length),
}
}
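# Example (counts are illustrative only):
#   validate summary
#   # => { hard_total: 12, hard_passing: 12, soft_total: 3, soft_passing: 2 }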


@ -14,6 +14,12 @@ let connection_type = {
kind | connection_kind_type,
note | String | default = "",
url | String | default = "",
# Ontology node ID in the target project's DAG (e.g. "dag-formalized").
# Enables cross-project node resolution for impact analysis.
node | String | default = "",
# Transport or protocol used to reach the target (e.g. "http", "nats", "direct", "git").
# Used by federation.rs to select the resolution strategy.
via | String | default = "",
} in
let connections_type = {


@ -0,0 +1,39 @@
# step-cargo-check.ncl — reusable ActionStep template for cargo check/clippy/test.
#
# Usage in a mode:
# let t = import "../../reflection/templates/step-cargo-check.ncl" in
# steps = [
# t { id = "clippy", subcommand = "clippy", extra_flags = "-- -D warnings" },
# t { id = "test", subcommand = "test", package = "ontoref-ontology" },
# ]
#
# Required params: id (String)
# Optional params: subcommand (default "check"), package, extra_flags, action, actor, depends_on, on_error
fun params =>
let defaults = {
subcommand | default = "check",
package | default = "",
extra_flags | default = "",
action | default = "Run cargo to validate Rust code integrity.",
actor | default = 'Both,
depends_on | default = [],
on_error | default = { strategy = 'Stop },
} in
let p = defaults & params in
let pkg_flag =
if p.package == "" then ""
else " -p %{p.package}"
in
let flags =
if p.extra_flags == "" then ""
else " %{p.extra_flags}"
in
{
id = p.id,
action = p.action,
cmd = "cargo %{p.subcommand}%{pkg_flag}%{flags}",
actor = p.actor,
depends_on = p.depends_on,
on_error = p.on_error,
}


@ -0,0 +1,35 @@
# step-git-commit.ncl — reusable ActionStep template for staging and committing changes.
#
# actor defaults to 'Human — commits require human authorization per project conventions.
#
# Usage in a mode:
# let t = import "../../reflection/templates/step-git-commit.ncl" in
# steps = [
# t {
# id = "commit-ontology",
# message = "chore(on+re): update ontology state",
# paths = [".ontology/", "adrs/"],
# },
# ]
#
# Required params: id (String), message (String)
# Optional params: paths (default ["."]), action, actor, depends_on, on_error
fun params =>
let defaults = {
paths | default = ["."],
action | default = "Stage specified paths and create a git commit with the given message.",
actor | default = 'Human,
depends_on | default = [],
on_error | default = { strategy = 'Stop },
} in
let p = defaults & params in
let paths_str = std.string.join " " p.paths in
{
id = p.id,
action = p.action,
cmd = "git add %{paths_str} && git commit -m \"%{p.message}\"",
actor = p.actor,
depends_on = p.depends_on,
on_error = p.on_error,
}


@ -0,0 +1,29 @@
# step-nickel-export.ncl — reusable ActionStep template for `nickel export`.
#
# Usage in a mode:
# let t = import "../../reflection/templates/step-nickel-export.ncl" in
# steps = [
# t { id = "export-core", file = ".ontology/core.ncl" },
# t { id = "export-state", file = ".ontology/state.ncl", import_path = "ontology/defaults" },
# ]
#
# Required params: id (String), file (String)
# Optional params: import_path, action, actor, depends_on, on_error
fun params =>
let defaults = {
import_path | default = "ontology/defaults",
action | default = "Export Nickel file and validate the result against its contracts.",
actor | default = 'Both,
depends_on | default = [],
on_error | default = { strategy = 'Stop },
} in
let p = defaults & params in
{
id = p.id,
action = p.action,
cmd = "nickel export --import-path %{p.import_path} %{p.file}",
actor = p.actor,
depends_on = p.depends_on,
on_error = p.on_error,
}


@ -0,0 +1,258 @@
# Ontoref Protocol Update — Ontology Enrichment Prompt
**Purpose:** Bring `{project_name}` up to the current ontoref protocol version and enrich its
ontology to reflect the project's actual state. Run this prompt in the project's Claude Code
session with ontoref available.
**Substitutions required before use:**
- `{project_name}` — kebab-case project identifier
- `{project_dir}` — absolute path to project root
- `{ontoref_dir}` — absolute path to the ontoref checkout
---
## Context for the agent
You are enriching the ontoref ontology for project `{project_name}`. The ontology lives in
`.ontology/` and the reflection layer in `reflection/`. Your goal is to make the ontology
reflect current architectural reality — not aspirational state, not stale state.
Read the project's `.claude/CLAUDE.md` and any `CLAUDE.md` at root before starting. Understand
what the project actually does. All changes must pass `nickel export` cleanly.
---
## Phase 1 — Infrastructure: add missing v2 files
Run the infrastructure detection and update steps. These are additive — nothing is overwritten.
```sh
cd {project_dir}
# Step 1a: detect missing files
test -f .ontology/manifest.ncl && echo "manifest: present" || echo "manifest: MISSING"
test -f .ontology/connections.ncl && echo "connections: present" || echo "connections: MISSING"
# Step 1b: add manifest.ncl if missing
test -f .ontology/manifest.ncl || \
sed 's/{{ project_name }}/{project_name}/g' \
{ontoref_dir}/templates/ontology/manifest.ncl > .ontology/manifest.ncl
# Step 1c: add connections.ncl if missing
test -f .ontology/connections.ncl || \
sed 's/{{ project_name }}/{project_name}/g' \
{ontoref_dir}/templates/ontology/connections.ncl > .ontology/connections.ncl
# Step 1d: validate both files parse
nickel export --import-path {ontoref_dir}/ontology:{ontoref_dir}/ontology/schemas:{ontoref_dir}/ontology/defaults:{ontoref_dir} \
.ontology/manifest.ncl > /dev/null && echo "manifest: ok"
nickel export --import-path {ontoref_dir}/reflection/schemas:{ontoref_dir} \
.ontology/connections.ncl > /dev/null && echo "connections: ok"
```
If either validation fails, read the file, fix the import path or schema mismatch, and revalidate
before continuing.
---
## Phase 2 — Audit: understand current state
Run these commands and read the output before making any changes to core.ncl or state.ncl.
```sh
# Full project self-description (identity, axioms, practices, gate)
./scripts/ontoref describe project
# Semantic diff vs HEAD — shows what changed since last commit
./scripts/ontoref describe diff
# What modes are available, what gates allow
./scripts/ontoref describe guides
# Current gate state and dimension health
./scripts/ontoref describe gate
# API surface available (requires daemon running)
./scripts/ontoref describe api
```
Read the output of each command. Note:
- Which dimensions are in non-ideal states and why
- Which practices have no corresponding nodes in core.ncl
- What the diff reports as added/removed/changed since HEAD
- Whether the gate is aligned with what the project actually does today
---
## Phase 3 — Enrich core.ncl
Open `.ontology/core.ncl`. For each of the following, apply only what is actually true:
### 3a. Nodes — add missing, update stale descriptions
For any practice or capability the project has implemented since the last ontology update,
add a node with:
- `id` — kebab-case, stable identifier
- `level` — `'Protocol | 'Integration | 'Application | 'Tooling`
- `pole` — `'Positive | 'Negative | 'Tension`
- `description` — one sentence, present tense, what it IS (not what it should be)
- `adrs` — list any ADR IDs that govern this node
- `practices` — list practice slugs if declared in `.ontology/state.ncl`
Do NOT add aspirational nodes. If a feature is not yet implemented, do not add it.
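A complete node entry following these fields might look like this (the id, ADR number, and practice slug are hypothetical):
```nickel
{
  id = "typed-constraint-checks",
  level = 'Tooling,
  pole = 'Positive,
  description = "ADR constraints carry typed check records dispatched by validate.nu.",
  adrs = ["adr-012"],
  practices = ["constraint-validation"],
}
```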
### 3b. Edges — declare real dependencies
For any new nodes, declare edges to the nodes they depend on or implement:
```nickel
{ from = "new-node-id", to = "existing-node-id", kind = 'Implements }
```
Valid edge kinds: `'Implements | 'Depends | 'Extends | 'Supersedes | 'Tensions`
### 3c. Tensions — update descriptions
For tension nodes (pole = 'Tension), update the description to reflect the current root cause
if it has changed. Tensions describe real trade-offs the project has made, not theoretical ones.
After editing, validate:
```sh
nickel export --import-path {ontoref_dir}/ontology:{ontoref_dir} .ontology/core.ncl > /dev/null
```
---
## Phase 4 — Update state.ncl
Open `.ontology/state.ncl`. For each dimension:
1. Read `current_state` and `transition` conditions
2. Check whether a transition condition has been met based on recent work
3. If the project has advanced: update `current_state` to the new state
4. Update `blocker` and `catalyst` to reflect current reality (not stale reasoning)
Do NOT advance a dimension optimistically. Only advance if the transition condition is
demonstrably met (code exists, tests pass, ADR written — not "in progress").
After editing:
```sh
nickel export --import-path {ontoref_dir}/ontology:{ontoref_dir}/ontology/defaults:{ontoref_dir} \
.ontology/state.ncl > /dev/null
```
---
## Phase 5 — Fill manifest.ncl
Open `.ontology/manifest.ncl`. Declare any content assets the project has:
- Branding assets (logos, icons) in `assets/branding/` or equivalent
- Architecture diagrams in `docs/`, `assets/`, or `architecture/`
- Screenshots or demo recordings
- Agent prompt templates in `reflection/templates/`
- Mode step templates in `reflection/templates/`
For each asset, use `m.make_asset` with accurate `source_path` (relative to project root),
correct `kind`, and a one-sentence `description`. Only declare assets that actually exist on disk.
Check:
```sh
ls assets/ 2>/dev/null; ls docs/ 2>/dev/null; ls reflection/templates/ 2>/dev/null
```
After editing:
```sh
nickel export \
--import-path {ontoref_dir}/ontology:{ontoref_dir}/ontology/schemas:{ontoref_dir}/ontology/defaults:{ontoref_dir} \
.ontology/manifest.ncl > /dev/null
```
---
## Phase 6 — Declare connections.ncl
Open `.ontology/connections.ncl`. Declare cross-project relationships if they exist:
- `upstream` — projects this one depends on or consumes APIs from
- `downstream` — projects that consume this one's APIs or outputs
- `peers` — symmetric sibling services with shared concerns
For each connection: `project` must be a slug registered in the shared daemon, `node` is an
ontology node id in that project, `via` is `"http" | "local" | "nats"`.
If no cross-project relationships exist, leave all arrays empty — that is valid and correct.
Do NOT invent connections.
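A single upstream entry follows the same shape as the commented examples in the template (slugs and node ids hypothetical):
```nickel
upstream = [
  { project = "platform-core", node = "api-contract", via = "http" },
],
```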
After editing:
```sh
nickel export --import-path {ontoref_dir}/reflection/schemas:{ontoref_dir} \
.ontology/connections.ncl > /dev/null
```
---
## Phase 7 — Migrate ADR check_hint (if present)
Check for deprecated `check_hint` fields:
```sh
grep -rl 'check_hint' {project_dir}/adrs/ 2>/dev/null
```
If any files are found, for each ADR:
1. Read the `check_hint` string — it describes what to verify
2. Map it to the closest typed `check` variant:
- Shell command → `'NuCmd { cmd = "...", expect_exit = 0 }`
- File search (grep/rg) → `'Grep { pattern = "...", paths = ["..."], must_be_empty = true }`
- Cargo.toml dep check → `'Cargo { crate = "...", forbidden_deps = ["..."] }`
- File presence → `'FileExists { path = "...", present = true }`
- API response → `'ApiCall { endpoint = "...", json_path = "...", expected = ... }`
3. Replace `check_hint` with `check` using the typed variant
4. Validate: `nickel export --import-path {ontoref_dir}/adrs:{ontoref_dir} adrs/adr-NNN-*.ncl > /dev/null`
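For example, a constraint carrying only a hint (hypothetical ADR excerpt) migrates like this:
```nickel
# Before (deprecated):
#   check_hint = "grep adrs/ to confirm no check_hint fields remain",
# After (typed 'Grep variant):
check = 'Grep {
  pattern = "check_hint",
  paths = ["adrs/"],
  must_be_empty = true,
},
```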
---
## Phase 8 — Final validation
Run all validations in sequence:
```sh
# All .ontology/ files
for f in .ontology/core.ncl .ontology/state.ncl .ontology/gate.ncl \
.ontology/manifest.ncl .ontology/connections.ncl; do
nickel export \
--import-path {ontoref_dir}/ontology:{ontoref_dir}/ontology/schemas:\
{ontoref_dir}/ontology/defaults:{ontoref_dir}/reflection/schemas:{ontoref_dir} \
"$f" > /dev/null && echo "ok: $f" || echo "FAIL: $f"
done
# All ADRs
for f in adrs/adr-*.ncl; do
nickel export --import-path {ontoref_dir}/adrs:{ontoref_dir} "$f" > /dev/null && echo "ok: $f" || echo "FAIL: $f"
done
# Re-run describe diff to confirm changes are coherent
./scripts/ontoref describe diff
```
After all files pass, run the protocol update report:
```sh
./scripts/ontoref describe project
```
---
## Delivery
Report:
1. Files changed and what was changed in each
2. Nodes added / updated / removed in core.ncl
3. Dimension state transitions applied in state.ncl
4. Assets declared in manifest.ncl
5. Connections declared in connections.ncl
6. ADRs migrated from check_hint to typed check
7. Any validation errors that could not be resolved (with reason)
Do NOT commit. The developer reviews the diff before committing.


@ -0,0 +1,27 @@
# .ontology/connections.ncl — Project: {{ project_name }}
# Declares cross-project relationships for federation and impact analysis.
# Used by: GET /graph/impact?include_external=true, describe impact --include-external
#
# Each connection:
# project — slug registered in the shared daemon
# node — ontology node id in that project (for impact graph traversal)
# via — transport: "http" | "local" | "nats"
#
# Directions:
# upstream — projects this one depends on / consumes from
# downstream — projects that depend on / consume this one
# peers — symmetric relationships (shared concerns, sibling services)
let s = import "connections" in
{
upstream = [
# { project = "platform-core", node = "api-contract", via = "http" },
],
downstream = [
# { project = "consumer-app", node = "integration-layer", via = "http" },
],
peers = [
# { project = "sibling-service", node = "shared-domain", via = "local" },
],
} | s.Connections


@ -0,0 +1,38 @@
# .ontology/manifest.ncl — Project: {{ project_name }}
# Declares content assets (branding, diagrams, docs) and mode templates.
# Run: nickel export --import-path <ontoref>/ontology .ontology/manifest.ncl
#
# content_assets — typed content declarations consumed by describe guides,
# sync diff --level full, and the content verification mode step.
#
# AssetKind: 'Logo | 'Icon | 'Diagram | 'Screenshot | 'Video | 'Document
# TemplateKind: 'ModeStep | 'AgentPrompt | 'PublicationCard | 'ContentSection
#
# publish_to — list of target slugs where this asset should be deployed.
let m = import "ontology/defaults/manifest.ncl" in
m.make_manifest {
project = "{{ project_name }}",
repo_kind = 'Library,
content_assets = [
# m.make_asset {
# id = "logo-horizontal",
# kind = 'Logo,
# source_path = "assets/branding/logo-h.svg",
# variants = ["assets/branding/logo-h-dark.svg"],
# description = "Primary horizontal logo.",
# },
],
templates = [
# m.make_template {
# id = "agent-system-prompt",
# kind = 'AgentPrompt,
# source_path = "reflection/templates/agent-prompt.md",
# parameters = ["project_name", "actor"],
# description = "Base system prompt for agents operating in this project.",
# },
],
}