feat: mode guards, convergence, manifest coverage, doc authoring pattern
## Mode guards and convergence loops (ADR-011)
- `Guard` and `Converge` types added to `reflection/schema.ncl` and
`reflection/defaults.ncl`. Guards run pre-flight checks (Block/Warn);
converge loops iterate until a condition is met (RetryFailed/RetryAll).
- `sync-ontology.ncl`: 3 guards + converge (zero-drift condition, max 2 iterations).
- `coder-workflow.ncl`: guard (coder-dir-exists) + `novelty-check` step.
- Rust types in `ontoref-reflection/src/mode.rs`; executor in `executor.rs`
evaluates guards before steps and convergence loop after.
- `adrs/adr-011-mode-guards-and-convergence.ncl` added.
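Taken together, the bullets above imply a mode shape roughly like the following Nickel sketch. Field names are inferred from the descriptions (a guard carries a cmd, reason, and severity; converge carries a condition, a strategy, and an iteration cap); the record syntax, enum spellings, and the `nickel-check` guard are illustrative assumptions, not copies of `reflection/schema.ncl`:

```nickel
# Illustrative sketch only: field names inferred from this changelog,
# not copied from reflection/schema.ncl.
{
  id = "sync-ontology",
  guards = [
    {
      id = "nickel-check",               # hypothetical pre-flight guard
      cmd = "which nickel",              # shell check run before any step
      reason = "nickel must be on PATH to export the ontology",
      severity = 'Block,                 # 'Block aborts the run; 'Warn only logs
    },
  ],
  steps = [],                            # the mode's DAG steps go here
  converge = {
    condition = "./ontoref sync diff",   # zero-drift condition
    max_iterations = 2,                  # "max 2 iter" per sync-ontology.ncl
    strategy = 'RetryFailed,             # or 'RetryAll
  },
}
```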
## Manifest capability completeness
- `.ontology/manifest.ncl`: 3 → 19 declared capabilities covering the full
action surface (daemon API, modes, Task Composer, QA, bookmarks, etc.).
- `sync.nu`: `audit-manifest-coverage` + `sync manifest-check` command.
- `validate-project.ncl`: 6th category `manifest-cov`.
- Pre-commit hook `manifest-coverage` added.
- Migrations `0010-manifest-capability-completeness`,
`0011-manifest-coverage-hooks`.
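Each of the 19 declared capabilities follows the `m.make_capability` shape used in `.ontology/manifest.ncl`. A minimal sketch with abridged values based on the daemon-api entry (the string values here are shortened for illustration, not the real manifest text):

```nickel
# Abridged sketch of one manifest capability entry; the field set matches
# make_capability in .ontology/manifest.ncl, string values are shortened.
m.make_capability {
  id = "daemon-api",
  name = "Daemon HTTP + MCP Surface",
  summary = "HTTP UI, MCP tools, annotated API catalog, NCL export cache.",
  rationale = "Agents need a queryable interface to ontology state.",
  how = "crates/ontoref-daemon serves HTTP via axum; routes register at link time.",
  artifacts = ["crates/ontoref-daemon/"],
  adrs = ["adr-002", "adr-004", "adr-007", "adr-008"],
  nodes = ["ontoref-daemon", "api-catalog-surface", "config-surface"],
}
```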
## Rust doc authoring pattern — canonical `///` convention
- `#[onto_api]`: `description = "..."` is optional when a `///` doc comment exists
  above the handler; the first doc line serves as the fallback description.
  `#[derive(OntologyNode)]` follows the same rule.
- `ontoref-daemon/src/api.rs`: 42 handlers migrated to `///` doc comments;
`description = "..."` removed from all `#[onto_api]` blocks.
- `sync diff --docs --fail-on-drift`: exits 1 on crate `//!` drift; used by
new `docs-drift` pre-commit hook. `docs-links` hook checks rustdoc broken links.
- `generator.nu`: mdBook `crates/` chapter — per-crate page from `//!` doc,
coverage badge, feature flags, implementing practice nodes.
- `.claude/CLAUDE.md`: `### Documentation Authoring (Rust)` section added.
- Migration `0012-rust-doc-authoring-pattern`.
## OntologyNode derive fixes
- `#[derive(OntologyNode)]`: `name` and `paths` attributes supported; `///`
doc fallback for `description`; `artifact_paths` correctly populated.
- `Core::from_value` calls `merge_contributors()` behind `#[cfg(feature = "derive")]`.
## Bug fixes
- `sync.nu` drift check: exact crate path match (not `str starts-with`);
first-path-only rule; split on `. ` not `.` to avoid `.ontology/` truncation.
- `find-unclaimed-artifacts`: fixed absolute vs relative path comparison.
- Rustdoc broken intra-doc links fixed across all three crates.
- `ci-docs` recipe now sets `RUSTDOCFLAGS` and actually fails on errors.
mode guards/converge, manifest coverage validation, 19 capabilities (ADR-011)
Extend the mode schema with Guard (pre-flight Block/Warn checks) and Converge
(RetryFailed/RetryAll post-execution loops) — protocol pushes back on invalid
state and iterates until convergence. ADR-011 records the decision to extend
modes rather than create a separate action subsystem.
Manifest expanded from 3 to 19 capabilities covering the full action surface
(compose, plans, backlog graduation, notifications, coder pipeline, forms,
templates, drift, quick actions, migrations, config, onboarding). New
audit-manifest-coverage validator + pre-commit hook + SessionStart hook
ensure agents always see a complete project self-description.
Bug fix: find-unclaimed-artifacts absolute vs relative path comparison —
19 phantom MISSING items resolved. Health 43% → 100%.
Anti-slop: coder novelty-check step (Jaccard overlap against published+QA)
inserted between triage and publish in coder-workflow.
Justfile restructured into 5 modules (build/test/dev/ci/assets).
Migrations 0010-0011 propagate requirements to consumer projects.
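Per the protocol-migration capability, each migration lives in `reflection/migrations/NNNN-slug.ncl` with an id, slug, description, typed check (FileExists/Grep/NuCmd), and instructions. A hedged sketch of what such a file might contain; the record syntax, field spellings, and check parameters are assumptions, not the contents of the real 0010 file:

```nickel
# Hypothetical migration file sketch (not the real 0010 migration).
# Fields follow the migration description: id, slug, description,
# check (tag + parameters), instructions. No state file is kept:
# the check result IS the applied state.
{
  id = "0010-manifest-capability-completeness",
  slug = "manifest-capability-completeness",
  description = "Declare the full action surface as manifest capabilities.",
  check = {
    tag = 'Grep,                          # one of 'FileExists, 'Grep, 'NuCmd
    path = ".ontology/manifest.ncl",
    pattern = "make_capability",          # idempotent: passes once applied
  },
  instructions = "Add make_capability entries for each action surface, then run: ontoref migrate pending",
}
```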
parent 183b8dcb3f · commit 13b03d6edf
```diff
@@ -83,9 +83,10 @@ let d = import "../ontology/defaults/core.ncl" in
     "adrs/adr-008-ncl-first-config-validation-and-override-layer.ncl",
     "adrs/adr-009-manifest-self-interrogation-layer-three-semantic-axes.ncl",
     "adrs/adr-010-protocol-migration-system.ncl",
+    "adrs/adr-011-mode-guards-and-convergence.ncl",
     "CHANGELOG.md",
   ],
-  adrs = ["adr-001", "adr-002", "adr-003", "adr-004", "adr-005", "adr-006", "adr-007", "adr-008", "adr-009", "adr-010"],
+  adrs = ["adr-001", "adr-002", "adr-003", "adr-004", "adr-005", "adr-006", "adr-007", "adr-008", "adr-009", "adr-010", "adr-011"],
 },

 d.make_node {
```
```diff
@@ -93,8 +94,8 @@ let d = import "../ontology/defaults/core.ncl" in
   name = "Reflection Modes",
   pole = 'Yang,
   level = 'Practice,
-  description = "Operational procedures are first-class artifacts encoded as NCL DAG contracts. Modes declare actors, steps, dependencies, and error strategies — not prose.",
-  artifact_paths = ["reflection/modes/", "reflection/schemas/", "crates/ontoref-reflection/"],
+  description = "Operational procedures are first-class artifacts encoded as NCL DAG contracts. Modes declare actors, steps, dependencies, and error strategies — not prose. Forms (reflection/forms/) provide structured input schemas that feed into modes and the compose pipeline.",
+  artifact_paths = ["reflection/modes/", "reflection/schemas/", "crates/ontoref-reflection/", "reflection/forms/"],
 },

 d.make_node {
```
```diff
@@ -212,7 +213,7 @@ let d = import "../ontology/defaults/core.ncl" in
   name = "Ontoref Daemon",
   pole = 'Yang,
   level = 'Practice,
-  description = "Runtime support daemon for the ontoref protocol. Provides NCL export caching, file watching, actor registry, notification barrier, HTTP API (11 pages), MCP server (29 tools, stdio + streamable-HTTP), Q&A NCL persistence, quick-actions catalog, passive drift observation, unified auth/session management, per-file ontology version counters (GET /projects/{slug}/ontology/versions), and annotated API catalog (GET /api/catalog). API catalog populated at link time via #[onto_api] proc-macro + inventory — zero runtime overhead. Launched via ADR-004 NCL pipe bootstrap: nickel export config.ncl | ontoref-daemon.bin --config-stdin. Graph, search, and api_catalog UI pages carry browser-style panel navigation (back/forward history stack). File artifact paths open in external tabs: card.repo (Gitea source URL) for most files, card.docs (cargo docs) for .rs files — no inline file loading. card_repo/card_docs injected into Tera context from insert_brand_ctx; | safe filter required for URL values inside <script> blocks.",
+  description = "HTTP daemon for NCL export caching, file watching, actor registry, and MCP surface. Provides notification barrier, HTTP API (11 pages), MCP server (29 tools, stdio + streamable-HTTP), Q&A NCL persistence, quick-actions catalog, passive drift observation, unified auth/session management, per-file ontology version counters (GET /projects/{slug}/ontology/versions), and annotated API catalog (GET /api/catalog). API catalog populated at link time via #[onto_api] proc-macro + inventory — zero runtime overhead. Launched via ADR-004 NCL pipe bootstrap: nickel export config.ncl | ontoref-daemon.bin --config-stdin. Graph, search, and api_catalog UI pages carry browser-style panel navigation (back/forward history stack). File artifact paths open in external tabs: card.repo (Gitea source URL) for most files, card.docs (cargo docs) for .rs files — no inline file loading. card_repo/card_docs injected into Tera context from insert_brand_ctx; | safe filter required for URL values inside <script> blocks.",
   invariant = false,
   artifact_paths = [
     "crates/ontoref-daemon/",
```
```diff
@@ -425,6 +426,20 @@ let d = import "../ontology/defaults/core.ncl" in
   adrs = ["adr-007", "adr-008"],
 },

+d.make_node {
+  id = "ci-pipelines",
+  name = "CI/CD Pipelines",
+  pole = 'Yang,
+  level = 'Practice,
+  description = "Continuous integration pipelines: GitHub Actions (Nickel typecheck, Rust CI) and Woodpecker (standard + advanced CI). Pre-commit hooks enforce formatting, linting, testing, dependency auditing, docs-links, docs-drift, and manifest coverage before code reaches CI.",
+  artifact_paths = [
+    ".github/workflows/",
+    ".woodpecker/",
+    ".pre-commit-config.yaml",
+    ".claude/hooks/",
+  ],
+},
+
 ],

 edges = [
```
```diff
@@ -99,10 +99,22 @@ m.make_manifest {
   consumer = 'Agent,
   needs = ['OntologyExport, 'JsonSchema],
   audit_level = 'Quick,
-  description = "Reads .ontology/core.ncl via nickel export. Queries nodes, edges, and ADRs to understand project constraints before acting.",
+  description = "Queries 19 capabilities, 31 ontology nodes, 10 ADRs, and 19 executable modes via describe capabilities (CLI), MCP tools (33), or daemon API. SessionStart hook auto-loads project identity and manifest health. Guards block mode execution on constraint violations.",
 },
 ],

+justfile = {
+  system = 'Import,
+  required_modules = ["build", "test", "dev", "ci", "assets"],
+  required_recipes = ["default"],
+},
+
+claude = {
+  guidelines = ["rust", "nushell", "nickel"],
+  session_hook = true,
+  stratum_commands = true,
+},
+
 config_surface = m.make_config_surface {
   config_root = ".ontoref",
   entry_point = "config.ncl",
```
```diff
@@ -245,17 +257,17 @@ m.make_manifest {
 m.make_capability {
   id = "protocol-spec",
   name = "Protocol Specification",
-  summary = "Typed NCL schemas for nodes, edges, ADRs, state, gates, and manifests.",
+  summary = "Typed NCL schemas for nodes, edges, ADRs, state, gates, and manifests — the contract layer that projects implement to describe themselves.",
   rationale = "Projects need a contract layer to describe what they are — not just code comments. NCL provides typed, queryable, git-versionable schemas with contract enforcement at export time. Alternatives (TOML/JSON/YAML) lack contracts; Rust-only structs are not adoption-friendly.",
-  how = "ontology/schemas/ defines all type contracts. adrs/adr-schema.ncl defines the ADR lifecycle contract. ontology/defaults/ exposes builders (make_node, make_edge, make_adr) so consumer projects never write raw NCL records. nickel export validates against declared contracts before any JSON reaches Rust or Nushell.",
-  artifacts = ["ontology/schemas/", "ontology/defaults/", "adrs/adr-schema.ncl", "adrs/adr-defaults.ncl"],
-  nodes = ["dag-formalized", "protocol-not-runtime", "adr-lifecycle"],
+  how = "ontology/schemas/ defines all type contracts (core, manifest, gate, state, content). adrs/adr-schema.ncl defines the ADR lifecycle contract with typed constraints (Cargo/Grep/NuCmd/ApiCall/FileExists checks). ontology/defaults/ exposes builders (make_node, make_edge, make_adr) so consumer projects never write raw NCL records. nickel export validates against declared contracts before any JSON reaches Rust or Nushell.",
+  artifacts = ["ontology/schemas/", "ontology/defaults/", "adrs/adr-schema.ncl", "adrs/adr-defaults.ncl", "reflection/schemas/"],
+  nodes = ["dag-formalized", "protocol-not-runtime", "adr-lifecycle", "ontoref-ontology-crate", "ontology-three-file-split", "adr-node-linkage", "personal-ontology-schemas"],
 },
 m.make_capability {
   id = "daemon-api",
   name = "Daemon HTTP + MCP Surface",
-  summary = "HTTP UI (11 pages), 29 MCP tools, annotated API catalog, Q&A store, search bookmarks, config surface, per-file versioning.",
-  rationale = "Agents and developers need a queryable interface to ontology state without spawning nickel on every request. The NCL export cache reduces full-sync from ~2m42s to <30s. 29 MCP tools give agents structured access to every capability without screen-scraping the CLI. ADR-002 records the architectural decision to extract the daemon; ADR-007 covers the #[onto_api] catalog pattern; ADR-008 covers the config override layer.",
+  summary = "HTTP UI (11 pages), 33 MCP tools, annotated API catalog, SSE notifications, per-file versioning, and NCL export cache.",
+  rationale = "Agents and developers need a queryable interface to ontology state without spawning nickel on every request. The NCL export cache reduces full-sync from ~2m42s to <30s. MCP tools give agents structured access to every capability without screen-scraping the CLI. ADR-002 records the architectural decision to extract the daemon; ADR-007 covers the #[onto_api] catalog pattern; ADR-008 covers the config override layer.",
   how = "crates/ontoref-daemon uses axum for HTTP. #[onto_api(...)] proc-macro + inventory::submit! registers every route at link time; GET /api/catalog aggregates via inventory::collect!. NclCache (DashMap<PathBuf, CachedExport>) keyed on path + mtime. File watcher (notify) triggers cache invalidation and drift detection after 15s debounce. MCP over stdio and streamable-HTTP.",
   artifacts = [
     "crates/ontoref-daemon/",
```
```diff
@@ -265,16 +277,163 @@ m.make_manifest {
     "MCP: ontoref_guides, ontoref_api_catalog, ontoref_validate, ontoref_impact",
   ],
   adrs = ["adr-002", "adr-004", "adr-007", "adr-008"],
-  nodes = ["ontoref-daemon", "api-catalog-surface", "config-surface"],
+  nodes = ["ontoref-daemon", "api-catalog-surface", "config-surface", "unified-auth-model"],
 },
 m.make_capability {
   id = "reflection-modes",
-  name = "Reflection Mode Executor",
-  summary = "NCL DAG workflow engine executing ADR lifecycle, project adoption, content modes, and operational procedures.",
-  rationale = "Structured procedures expressed as typed DAGs rather than ad-hoc scripts. Every step has a declared dep graph — the executor validates it before running. Agent-safe: modes are NCL contracts, not imperative scripts, so agents can read and reason about them before execution.",
-  how = "crates/ontoref-reflection loads a mode NCL file, validates the DAG contract (no cycles, declared deps exist), then executes steps via Nushell subprocesses. reflection/modules/ contains 16 Nushell modules that implement the actual step logic. reflection/modes/ contains the typed NCL DAG definitions.",
-  artifacts = ["reflection/modes/", "reflection/modules/", "crates/ontoref-reflection/"],
-  nodes = ["reflection-modes", "adopt-ontoref-tooling"],
+  name = "Reflection Mode DAG Engine",
+  summary = "19 executable NCL DAG workflows with typed steps, dependency graphs, 5 error strategies, actor filtering, guards (pre-flight constraint checks that block execution on violations), and convergence loops (re-execute until a condition is met). Modes cover sync, validation, content generation, project scaffolding, ADR lifecycle, and protocol adoption.",
+  rationale = "Structured procedures expressed as typed DAGs rather than ad-hoc scripts. Every step has a declared dep graph — the executor validates it before running. Agent-safe: modes are NCL contracts, not imperative scripts, so agents can read and reason about them before execution. Guards implement the Active Partner pattern — the protocol pushes back before executing if constraints are violated. Convergence implements the Refinement Loop pattern — modes iterate until a condition is met rather than running once blindly.",
+  how = "crates/ontoref-reflection loads a mode NCL file, validates the DAG contract (no cycles, declared deps exist), then executes steps via Nushell subprocesses. Each step declares: id, action, cmd, actor, depends_on (Always/OnSuccess/OnFailure), on_error (Stop/Continue/Retry/Fallback/Branch), and verify. Guards run before steps — each has a cmd, reason, and severity (Block/Warn). Converge runs after steps — condition cmd checked, re-executes up to max_iterations using RetryFailed or RetryAll strategy.",
+  artifacts = ["reflection/modes/", "reflection/modules/", "crates/ontoref-reflection/", "reflection/schema.ncl"],
+  nodes = ["reflection-modes", "adopt-ontoref-tooling", "ontoref-reflection-crate", "content-modes"],
+},
+m.make_capability {
+  id = "run-step-tracking",
+  name = "Run/Step Execution Tracking",
+  summary = "Persistent execution tracking for mode runs: start runs, report steps with status/artifacts/warnings, validate dependency satisfaction, verify mode completion. File-based storage under .coder/<actor>/runs/ as JSONL.",
+  rationale = "Mode execution without tracking is fire-and-forget. Agents and CI need to resume interrupted runs, audit which steps passed or failed, and verify all required steps completed before declaring success. File-based storage keeps tracking git-versionable and debuggable without requiring a database.",
+  how = "ontoref run start <mode> creates a run directory with run.json metadata and empty steps.jsonl. step report validates the step exists in the mode DAG and that blocking dependencies (OnSuccess) are satisfied before appending. mode complete checks all steps are reported and no Stop-strategy failures block completion. current.json tracks the active run per actor.",
+  artifacts = ["reflection/modules/run.nu", "reflection/bin/ontoref.nu", ".coder/"],
+  nodes = ["reflection-modes", "coder-process-memory"],
+},
+m.make_capability {
+  id = "agent-task-composer",
+  name = "Agent Task Composer",
+  summary = "Form-driven prompt composition UI: select NCL form templates, fill structured fields, preview assembled markdown, send to AI providers, or export as .plan.ncl with execution DAG and lifecycle FSM (Draft/Sent/Accepted/Executed/Archived).",
+  rationale = "Agents need structured task input, not free-text prompts. Forms enforce required fields and typed options. Plans decouple task definition from execution — a plan can be reviewed before running, and its status tracked across sessions. The compose UI bridges form schemas to AI provider APIs without custom integration per provider.",
+  how = "reflection/forms/*.ncl define form schemas (elements: section_header/text/select/multiselect/editor/confirm). Daemon UI (/ui/compose) renders forms dynamically, assembles markdown from field values, and offers two actions: (1) send to AI provider via HTTP, (2) save as .plan.md + .plan.ncl. Plan NCL carries template ID, field values, linked backlog items, linked ADRs, and an optional execution DAG referencing modes. Plan status is a 5-state FSM tracked in the NCL file.",
+  artifacts = ["reflection/forms/", "reflection/schemas/plan.ncl", "reflection/templates/", "crates/ontoref-daemon/templates/pages/compose.html"],
+  nodes = ["reflection-modes", "ontoref-daemon"],
+},
+m.make_capability {
+  id = "backlog-graduation",
+  name = "Backlog with Typed Graduation Paths",
+  summary = "Work item tracking (Todo/Wish/Idea/Bug/Debt) with priority, status lifecycle, and typed graduation: items promote to ADRs, reflection modes, state transitions, or PR items via ontoref backlog promote. Notification-based approval workflow for status changes.",
+  rationale = "Work items are not flat lists — they have destinations. A bug graduates to a fix PR. An idea graduates to an ADR. A wish graduates to a reflection mode. Typed graduation makes the promotion path explicit and machine-readable, enabling agents to propose promotions and humans to approve them via the notification system.",
+  how = "reflection/backlog.ncl stores items typed by reflection/schemas/backlog.ncl (Item with graduates_to: Adr/Mode/StateTransition/PrItem). CLI commands: backlog add/done/cancel/promote/propose-status/approve. propose-status emits a custom notification (backlog_review) to the daemon; admin approves via notification UI, triggering the status update. roadmap command cross-references items with state.ncl dimensions.",
+  artifacts = ["reflection/backlog.ncl", "reflection/schemas/backlog.ncl", "reflection/modules/backlog.nu"],
+  nodes = ["reflection-modes"],
+},
+m.make_capability {
+  id = "notification-system",
+  name = "Notification & Approval Workflows",
+  summary = "Event broadcast system with SSE streaming, per-actor acknowledgment, custom event emission, and cross-project notifications. Supports approval workflows via custom notification kinds (e.g. backlog_review). Ring buffer storage with configurable retention.",
+  rationale = "File watchers detect changes but actors need to be notified, not poll. SSE provides real-time push without WebSocket complexity. Per-actor ACK prevents one actor's acknowledgment from hiding notifications from others. Custom events enable approval workflows without a separate messaging system.",
+  how = "crates/ontoref-daemon/src/notifications.rs implements a per-project ring buffer (DashMap<project, Vec<Notification>>, default 256 entries). File changes emit OntologyChanged/AdrChanged/ReflectionChanged events. User code emits custom events via push_custom(kind, title, payload, source_actor, source_project). SSE broadcast via tokio::broadcast::Sender. Per-token ACK tracking (ack_all/ack_one). Backlog propose-status uses this for approval workflows.",
+  artifacts = ["crates/ontoref-daemon/src/notifications.rs", "reflection/modules/backlog.nu"],
+  nodes = ["ontoref-daemon"],
+},
+m.make_capability {
+  id = "coder-process-memory",
+  name = "Coder Process Memory Pipeline",
+  summary = "Structured knowledge capture across actor sessions: init author workspaces, record JSON entries to queryable JSONL, triage inbox markdown into categories, publish to shared space with attribution, graduate to committed knowledge. 9-step DAG workflow.",
+  rationale = "Session knowledge evaporates between conversations. The coder pipeline captures insights, decisions, and investigations as structured entries — immediately queryable via coder log, promotable across actor boundaries, and graduable to committed project knowledge. Process memory bridges the gap between ephemeral sessions and persistent project knowledge.",
+  how = "coder-workflow mode (9 steps): init-author creates .coder/<author>/ with inbox/ and author.ncl. record-json appends structured entries to entries.jsonl. dump-markdown copies raw files to inbox. triage classifies inbox files into categories with companion NCL. query filters entries by tag/kind/domain/author. publish promotes entries to .coder/general/<category>/ with attribution. graduate copies to reflection/knowledge/. validate-ontology-core/state ensure ontology reflects new knowledge.",
+  artifacts = ["reflection/modes/coder-workflow.ncl", "reflection/modules/coder.nu", "reflection/schemas/coder.ncl", ".coder/"],
+  nodes = ["coder-process-memory"],
+},
+m.make_capability {
+  id = "qa-knowledge-store",
+  name = "Q&A Knowledge Store",
+  summary = "Persistent question-answer pairs captured during development sessions, typed by QaEntry schema, with concurrent-safe NCL mutations, tag-based queries, verification status, and cross-references to ontology nodes and ADRs.",
+  rationale = "Recurring questions deserve persistent answers. Q&A entries bridge session boundaries — knowledge captured in one conversation is available in all future sessions. NCL storage keeps entries git-versionable and queryable without a database. ADR-003 records the decision to persist Q&A as NCL rather than browser storage.",
+  how = "reflection/qa.ncl stores entries typed by reflection/schemas/qa.ncl (QaEntry: id, question, answer, actor, tags, related, verified). crates/ontoref-daemon/src/ui/qa_ncl.rs performs line-level NCL surgery for add/update/remove with NclWriteLock for concurrency safety. Auto-incrementing IDs (qa-001, qa-002...). Accessible via MCP (ontoref_qa_list/add), HTTP (/qa-json), and CLI.",
+  artifacts = ["reflection/qa.ncl", "reflection/schemas/qa.ncl", "crates/ontoref-daemon/src/ui/qa_ncl.rs"],
+  adrs = ["adr-003"],
+  nodes = ["qa-knowledge-store"],
+},
+m.make_capability {
+  id = "form-input-system",
+  name = "NCL Form Input System",
+  summary = "Declarative form schemas in NCL with typed elements (text/select/multiselect/editor/confirm/section), dual backend execution (CLI interactive prompts or daemon HTTP), and integration with the compose pipeline and template generation.",
+  rationale = "Structured input collection avoids free-text ambiguity. Forms declared as NCL schemas are versionable, composable, and renderable by both CLI and web UI without separate implementations. The same form drives interactive terminal prompts and web form submission.",
+  how = "reflection/forms/*.ncl define form schemas as arrays of typed elements. form list discovers available forms. form run <name> --backend cli|daemon executes the form interactively or via HTTP. Collected field values feed into the compose pipeline (prompt assembly), template rendering (J2), or direct mode execution. Forms for ADR creation, project onboarding, backlog items, and config editing.",
+  artifacts = ["reflection/forms/", "reflection/modules/form.nu", "crates/ontoref-daemon/templates/pages/compose.html"],
+  nodes = ["reflection-modes", "ontoref-daemon"],
+},
+m.make_capability {
+  id = "template-generation",
+  name = "Template Generation Pipeline",
+  summary = "Jinja2 template rendering composing data from ontology nodes, crate metadata, ADRs, and mode definitions to generate NCL files, Nushell scripts, config variants, and adoption artifacts.",
+  rationale = "Code generation from project knowledge eliminates manual transcription between the ontology and implementation artifacts. Templates bridge the gap between typed NCL declarations and executable scripts or config files.",
+  how = "reflection/templates/*.j2 are Jinja2 templates processed by generator.nu. Templates extract data from ontology (nodes, edges), crate metadata (Cargo.toml), ADRs (constraints), and modes (steps). Output includes: adr.ncl.j2 (ADR files from form data), adopt_ontoref.nu.j2 (adoption scripts), create_project.nu.j2 (project scaffolding), config-production.ncl.j2 (config variants). describe.nu also supports per-section Tera templates in layouts/.",
+  artifacts = ["reflection/templates/", "reflection/modules/generator.nu", "reflection/modules/describe.nu"],
+  nodes = ["adopt-ontoref-tooling"],
+},
+m.make_capability {
+  id = "describe-query-layer",
+  name = "Self-Knowledge Query Layer",
+  summary = "10 describe subcommands providing multi-level views of project knowledge: project (identity/axioms/tensions/practices), capabilities (modes/commands/flags/backlog), constraints (ADR hard/soft), state (FSM dimensions), tools, features, impact analysis, guides (full agent onboarding context), diff (recent changes), and workspace (multi-project overview).",
+  rationale = "Projects need queryable self-knowledge at different abstraction levels. A human needs a different view than an agent. describe provides semantic zoom — from one-line project identity to full constraint sets with check commands. This is the primary interface for agents to orient themselves before acting.",
+  how = "reflection/modules/describe.nu implements 10 subcommands, each collecting data from NCL exports (ontology, ADRs, manifest, backlog, modes) and rendering as text (human) or JSON (agent). Actor filtering applies to API routes, mode visibility, and constraint relevance. describe guides aggregates all subcommands into a single comprehensive output for agent cold-start.",
+  artifacts = ["reflection/modules/describe.nu", "reflection/bin/ontoref.nu"],
+  nodes = ["describe-query-layer", "manifest-self-description"],
+},
+m.make_capability {
+  id = "drift-observation",
+  name = "Passive Drift Detection & Sync",
+  summary = "Background file watcher detecting divergence between .ontology/ declarations and actual project artifacts. 7-step sync mode: scan project structure, diff against ontology, detect doc drift (Jaccard similarity), propose patches, review, apply, verify. Emits ontology_drift notifications when drift is found.",
+  rationale = "Ontology declarations rot when code evolves without updating .ontology/core.ncl. Passive detection catches drift before it compounds. The sync mode bridges observation to action — detect, propose, and apply in a structured DAG rather than manual file editing.",
+  how = "crates/ontoref-daemon/src/ui/drift_watcher.rs watches crates/, .ontology/, adrs/, reflection/modes/ for changes. After 15s debounce, runs sync scan + sync diff. If MISSING/STALE/DRIFT/BROKEN items found, emits ontology_drift notification. sync-ontology mode (7 steps) provides the full remediation workflow: scan extracts pub API via cargo doc JSON, diff categorizes divergence, propose generates NCL patches, apply writes changes, verify runs nickel typecheck.",
+  artifacts = ["reflection/modes/sync-ontology.ncl", "reflection/modules/sync.nu", "crates/ontoref-daemon/src/ui/drift_watcher.rs"],
+  nodes = ["drift-observation", "reflection-modes"],
+},
+m.make_capability {
+  id = "quick-actions",
+  name = "Quick Actions Catalog",
+  summary = "Configurable shortcut grid mapping UI buttons to reflection mode execution. Declared in .ontoref/config.ncl with id, label, icon, category, mode reference, and actor ACL. One-click mode execution from the daemon UI without CLI.",
+  rationale = "Frequently used modes (sync-ontology, generate-mdbook, coder-workflow) need one-click access. The quick actions grid reduces friction between knowing a mode exists and executing it — especially for developers who prefer the UI over CLI.",
+  how = ".ontoref/config.ncl declares quick_actions array. Daemon UI (/actions) renders a categorized grid of action cards. Click triggers daemon handler run_action_by_id() which spawns tokio::process::Command with ./ontoref run <mode_id>. New actions can be added via MCP (ontoref_action_add) or by editing config.ncl directly.",
+  artifacts = [".ontoref/config.ncl", "crates/ontoref-daemon/templates/pages/actions.html", "reflection/modes/"],
+  nodes = ["quick-actions", "ontoref-daemon"],
+},
+m.make_capability {
+  id = "protocol-migration",
+  name = "Protocol Migration System",
+  summary = "Progressive, idempotent protocol upgrades for consumer projects. Each migration carries a typed check (FileExists/Grep/NuCmd) and human-readable instructions. No state file — the check result IS the applied state. Consumer projects run ontoref migrate pending to discover and apply upgrades.",
+  rationale = "Protocol changes that only apply to ontoref itself are useless for the ecosystem. The migration system is the propagation mechanism — without it, consumer projects never learn about protocol evolution. ADR-010 records this decision.",
+  how = "reflection/migrations/NNNN-slug.ncl files define migrations with id, slug, description, check (tag + parameters), and instructions. ontoref migrate list shows all migrations. migrate pending filters to unapplied (check fails). migrate show <id> displays instructions. Checks are idempotent: FileExists tests path existence, Grep tests pattern presence, NuCmd runs a Nushell expression and checks exit code.",
+  artifacts = ["reflection/migrations/", "reflection/modules/migrate.nu", "reflection/bin/ontoref.nu"],
+  adrs = ["adr-010"],
+  nodes = ["protocol-migration-system"],
+},
+m.make_capability {
+  id = "config-surface-management",
+  name = "Config Surface with NCL Validation",
```
|
summary = "Structured config system: NCL contracts validate all fields, override-layer mutation preserves original files, zero-maintenance field registry via #[derive(ConfigFields)] + inventory auto-discovers which Rust struct consumes which NCL field, coherence endpoint detects unclaimed fields and consumer fields missing from NCL export.",
|
||||||
|
rationale = "Config drift between NCL declarations and Rust consumers is invisible until runtime. The config coherence system makes it detectable at build time. Override-layer mutation ensures original NCL files (with comments, contracts, formatting) are never modified — changes are always additive. ADR-008 records this decision.",
|
||||||
|
how = "crates/ontoref-daemon/src/config.rs defines DaemonRuntimeConfig. #[derive(ConfigFields)] on Rust structs registers consumed fields via inventory::submit!. GET /config/coherence cross-references NCL export keys against registered consumers. .ontoref/contracts.ncl defines LogConfig, DaemonConfig contracts. config show/verify/audit/apply CLI commands for inspection and mutation.",
|
||||||
|
artifacts = [".ontoref/config.ncl", ".ontoref/contracts.ncl", "crates/ontoref-daemon/src/config.rs", "crates/ontoref-daemon/src/config_coherence.rs", "crates/ontoref-derive/src/lib.rs"],
|
||||||
|
adrs = ["adr-008"],
|
||||||
|
nodes = ["config-surface", "ontoref-daemon", "daemon-config-management"],
|
||||||
|
},
|
||||||
|
m.make_capability {
|
||||||
|
id = "project-onboarding",
|
||||||
|
name = "Project Onboarding",
|
||||||
|
summary = "CLI and form-driven onboarding for new projects into the ontoref protocol: scaffolds .ontology/, adrs/, reflection/, .ontoref/config.ncl, and registers the project with the daemon. Includes templates for project.ncl and remote-project.ncl.",
|
||||||
|
rationale = "Adoption friction is the main barrier to protocol spread. Onboarding must be a guided, repeatable process — not a manual checklist. The adopt_ontoref mode and forms reduce the first adoption to answering structured questions.",
|
||||||
|
how = "reflection/modes/adopt_ontoref.ncl orchestrates the full onboarding DAG. reflection/forms/adopt_ontoref.ncl collects project metadata. reflection/templates/adopt_ontoref.nu.j2 generates the adoption script. templates/ provides starter files for all protocol directories. install/gen-projects.nu handles daemon project registration.",
|
||||||
|
artifacts = ["reflection/modes/adopt_ontoref.ncl", "reflection/forms/adopt_ontoref.ncl", "templates/", "install/gen-projects.nu", "reflection/bin/ontoref.nu"],
|
||||||
|
nodes = ["project-onboarding", "adopt-ontoref-tooling", "ci-pipelines"],
|
||||||
|
},
|
||||||
|
m.make_capability {
|
||||||
|
id = "web-presence",
|
||||||
|
name = "Web Presence",
|
||||||
|
summary = "Landing page at assets/web/ describing the ontoref protocol to external audiences. Bilingual (EN/ES), covers protocol layers, yin/yang duality, crates, and adoption path.",
|
||||||
|
rationale = "The protocol needs a public-facing explanation beyond the README. The landing page serves as the first point of contact for potential adopters.",
|
||||||
|
how = "assets/web/src/index.html is the source. assets/web/index.html is the built output. README.md links to it. assets/architecture.svg provides the visual architecture diagram.",
|
||||||
|
artifacts = ["assets/web/src/index.html", "assets/web/index.html", "README.md", "assets/architecture.svg"],
|
||||||
|
nodes = ["web-presence"],
|
||||||
|
},
|
||||||
|
m.make_capability {
|
||||||
|
id = "search-bookmarks",
|
||||||
|
name = "Search Bookmarks",
|
||||||
|
summary = "Persistent bookmark store for ontology graph search results. Entries typed as BookmarkEntry with node cross-references, sequential IDs, concurrent-safe NCL mutations via NclWriteLock. Accessible from daemon search UI and MCP.",
|
||||||
|
rationale = "Graph exploration sessions produce valuable search paths that are lost when the page reloads. Bookmarks persist the trail of ontology exploration across sessions.",
|
||||||
|
how = "reflection/search_bookmarks.ncl stores entries typed by reflection/schemas/search_bookmarks.ncl. crates/ontoref-daemon/src/ui/search_bookmarks_ncl.rs performs line-level NCL surgery with NclWriteLock. Sequential IDs (sb-001, sb-002...). MCP tool ontoref_bookmark_add for agent access.",
|
||||||
|
artifacts = ["reflection/search_bookmarks.ncl", "reflection/schemas/search_bookmarks.ncl", "crates/ontoref-daemon/src/ui/search_bookmarks_ncl.rs"],
|
||||||
|
nodes = ["search-bookmarks"],
|
||||||
},
|
},
|
||||||
],
|
],
|
@@ -24,7 +24,7 @@ let d = import "../ontology/defaults/state.ncl" in
from = "adoption-tooling-complete",
to = "protocol-stable",
condition = "ADR-001 accepted, ontoref.dev published, at least two external projects consuming the protocol.",
catalyst = "10 projects consuming the protocol: vapora, stratumiops, kogral, typedialog, secretumvault, rustelo, librecloud_renew, website-impl, jpl_ontology, provisioning. ADR-001 Accepted. Auth model, install pipeline, personal/career schemas, content modes, API catalog (#[onto_api], ADR-007), config surface (ADR-008), manifest self-interrogation (ADR-009), protocol migration system (ADR-010), mode guards and convergence (ADR-011) all complete. Session 2026-03-30: manifest expanded to 19 capabilities; manifest coverage validation (audit + pre-commit + SessionStart); 3 new migrations (0010-0012).",
blocker = "ontoref.dev not yet published.",
horizon = 'Months,
},
@@ -52,7 +52,7 @@ let d = import "../ontology/defaults/state.ncl" in
from = "modes-and-web-present",
to = "fully-self-described",
condition = "At least 3 ADRs accepted, reflection/backlog.ncl present, describe project returns complete picture.",
catalyst = "ADR-001–ADR-006 authored (6 ADRs present). Auth model, project onboarding, and session management nodes added in 2026-03-13. Personal/career/project-card schemas, 5 content modes, search bookmarks, and ADR-006 (Nu 0.111 compat) added in session 2026-03-15. Session 2026-03-23: api-catalog-surface node added (#[onto_api] proc-macro + inventory catalog), describe-query-layer updated (diff + api subcommands), adopt-ontoref-tooling updated (update_ontoref mode + manifest/connections templates + enrichment prompt), ontoref-daemon updated (11 pages, 29 MCP tools, per-file versioning, API catalog endpoint). Session 2026-03-26: config-surface node added — typed DaemonNclConfig (parse-at-boundary pattern), #[derive(ConfigFields)] coherence registry, override-layer mutation API (PUT /config/{section}), NCL contracts (.ontoref/contracts.ncl: LogConfig + DaemonConfig), manifest config_surface with multi-consumer sections. ADR-007 (inventory/onto_api) extended to ConfigFields; ADR-008 (NCL-first config validation + override-layer mutation). Session 2026-03-26 (2nd): manifest-self-description node added. ADR-009. Session 2026-03-29: browser-style panel navigation. Session 2026-03-30: manifest expanded 3→19 capabilities (complete action surface: modes, compose, plans, backlog graduation, notifications, coder pipeline, forms, templates, drift, quick actions, migration, config, search bookmarks, onboarding, web presence). audit-manifest-coverage validator + pre-commit hook + SessionStart hook. Mode schema extended: Guard type (Block/Warn severity pre-flight checks), Converge type (RetryFailed/RetryAll post-execution loops). ADR-011. Migrations 0010-0012. Bug fix: find-unclaimed-artifacts absolute vs relative path comparison. Justfile split (build/test/dev/ci/assets). Anti-slop novelty-check in coder pipeline (Jaccard overlap against published+QA). Health 43%→100%.",
blocker = "none",
horizon = 'Weeks,
},
@@ -40,6 +40,30 @@ repos:
    pass_filenames: false
    stages: [pre-commit]

  - id: docs-links
    name: Rustdoc broken intra-doc links
    entry: bash -c 'RUSTDOCFLAGS="-D rustdoc::broken-intra-doc-links -D rustdoc::private-intra-doc-links" cargo doc --no-deps --workspace -q'
    language: system
    types: [rust]
    pass_filenames: false
    stages: [pre-commit]

  - id: docs-drift
    name: Crate //! doc drift check
    entry: bash -c 'nu -c "use ./reflection/modules/sync.nu; sync diff --docs --fail-on-drift"'
    language: system
    types: [rust]
    pass_filenames: false
    stages: [pre-commit]

  - id: manifest-coverage
    name: Manifest capability completeness
    entry: bash -c 'ONTOREF_ROOT="$(pwd)" ONTOREF_PROJECT_ROOT="$(pwd)" nu --no-config-file -c "use ./reflection/modules/sync.nu *; sync manifest-check"'
    language: system
    files: (\.ontology/|reflection/modes/|reflection/forms/).*\.ncl$
    pass_filenames: false
    stages: [pre-commit]

# ============================================================================
# Nushell Hooks (optional - enable if using Nushell)
# ============================================================================
CHANGELOG.md
@@ -7,6 +7,131 @@ ADRs referenced below live in `adrs/` as typed Nickel records.

## [Unreleased]
### Rust doc authoring pattern — canonical `///` convention

#### `#[onto_api]` — `description` now optional

- The `description = "..."` parameter is no longer required when a `///` doc comment exists above the handler. The proc-macro reads the first `///` line as the fallback description.
- The same fallback applies to `#[derive(OntologyNode)]` — the first `///` line is used when the `description` attribute is absent.
- `ontoref-daemon/src/api.rs`: 42 handlers migrated — `description = "..."` removed from all `#[onto_api]` blocks, a canonical `///` first line added above each handler.
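The fallback order can be sketched as a small pure function. This is a hedged illustration of the described behavior, not the proc-macro's actual code; the function name is hypothetical:

```rust
// Hedged sketch: an explicit `description = "..."` attribute wins; otherwise
// the first non-empty `///` doc line above the item is used.
fn effective_description(explicit: Option<&str>, doc_lines: &[&str]) -> Option<String> {
    explicit.map(str::to_owned).or_else(|| {
        doc_lines
            .iter()
            .map(|l| l.trim_start_matches("///").trim())
            .find(|l| !l.is_empty())
            .map(str::to_owned)
    })
}
```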
#### `sync diff --docs --fail-on-drift`

- New `--fail-on-drift` flag on `sync diff`: exits 1 when any crate `//!` has drifted from its ontology node description. Intended for pre-commit enforcement; without the flag, the command remains non-destructive and returns the table as before.

#### mdBook `crates/` chapter

- `generator.nu`: two helpers added — `read-crate-module-doc` (parses `//!` from `lib.rs`/`main.rs`) and `count-pub-coverage` (ratio of documented pub items).
- `render-mdbook` generates `docs/src/crates/<name>.md` per workspace member: `//!` content, pub item coverage badge, feature flags from `Cargo.toml`, and which practice nodes list the crate as primary `artifact_paths`. Missing `//!` renders a warning block.
- `SUMMARY.md` gains a `# Crates` section with links to all generated pages.
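What `read-crate-module-doc` is described as doing can be sketched in Rust. The real helper is Nushell in `generator.nu`; this function name, the tokenization, and the first-sentence rule are assumptions:

```rust
// Hedged sketch: collect the crate-level `//!` block from the top of a
// lib.rs/main.rs source and keep its first sentence.
fn module_doc_first_sentence(src: &str) -> Option<String> {
    let doc: Vec<&str> = src
        .lines()
        // The `//!` block sits at the top of the file, possibly with blank lines.
        .take_while(|l| l.trim_start().starts_with("//!") || l.trim().is_empty())
        .filter(|l| l.trim_start().starts_with("//!"))
        .map(|l| l.trim_start().trim_start_matches("//!").trim())
        .collect();
    let joined = doc.join(" ");
    let sentence = joined.split(". ").next()?.trim();
    if sentence.is_empty() {
        None
    } else {
        Some(sentence.trim_end_matches('.').to_string())
    }
}
```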
#### Pre-commit hooks

- `.pre-commit-config.yaml`: `docs-links` hook runs rustdoc broken-link check (`RUSTDOCFLAGS` with `-D rustdoc::broken-intra-doc-links`) on `.rs` changes.
- `.pre-commit-config.yaml`: `docs-drift` hook runs `sync diff --docs --fail-on-drift` on `.rs` changes.

#### Agent and developer directives

- `.claude/CLAUDE.md`: `### Documentation Authoring (Rust)` section added — three-layer table, four authoring rules, agent discovery commands (`describe workspace`, `describe features`, `sync diff --docs`), crate registration procedure.

#### Migration

- `0012-rust-doc-authoring-pattern`: consumer projects receive the `### Documentation Authoring (Rust)` section for their `CLAUDE.md` and optional pre-commit hooks (`docs-links`, `docs-drift`).

---
### Mode guards, convergence loops, and manifest coverage enforcement

#### Mode schema extension (ADR-011)

- **Guard type** added to `reflection/schema.ncl`: pre-flight executable checks with `Block` (abort) or `Warn` (continue) severity. Guards run before any step — the protocol pushes back on invalid state instead of failing silently mid-execution.
- **Converge type** added to `reflection/schema.ncl`: post-execution convergence loop with `condition` command, `max_iterations` cap, and `RetryFailed`/`RetryAll` strategy. Modes iterate until a condition is met rather than running once blindly.
- Both types exposed in `reflection/defaults.ncl` as `Guard` and `Converge`.
- Reference implementation: `sync-ontology.ncl` gains 3 guards (ontology-exists, nickel-available, manifest-capabilities) and converge (iterate until zero drift, max 2 iterations).
- `coder-workflow.ncl` gains a guard (coder-dir-exists) and a new `novelty-check` step (anti-slop Jaccard overlap detection between pending entries and published+QA).
- Nushell executor (`reflection/nulib/modes.nu`): guard execution before steps, convergence loop after steps.
- Rust types: `Guard`, `GuardSeverity`, `Converge`, `ConvergeStrategy` in `ontoref-reflection/src/mode.rs`.
- Rust executor: guards evaluated pre-execution (Block returns error, Warn logs), convergence loop post-execution.
- Backward compatible: `guards` defaults to `[]`, `converge` is optional. All 19 existing modes export unchanged.
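The Guard/Converge shapes above can be sketched in Rust. This is a hedged approximation; the actual definitions in `ontoref-reflection/src/mode.rs` may differ in field names, derives, and serde attributes:

```rust
// Hedged sketch of the mode-schema extension types and their semantics.
#[derive(Debug, Clone, PartialEq)]
enum GuardSeverity { Block, Warn }

#[derive(Debug, Clone)]
struct Guard { id: String, cmd: String, severity: GuardSeverity }

#[derive(Debug, Clone, PartialEq)]
enum ConvergeStrategy { RetryFailed, RetryAll }

#[derive(Debug, Clone)]
struct Converge { condition: String, max_iterations: u32, strategy: ConvergeStrategy }

// Only Block-severity guards abort the run; Warn guards merely log.
fn guard_blocks(g: &Guard, check_failed: bool) -> bool {
    check_failed && g.severity == GuardSeverity::Block
}

// The convergence loop re-runs until the condition holds or the cap is hit.
fn iterations_allowed(c: &Converge, completed: u32) -> bool {
    completed < c.max_iterations
}
```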
#### Manifest capability completeness (19 capabilities)

- `.ontology/manifest.ncl` expanded from 3 to 19 declared capabilities covering the full action surface: protocol spec, daemon API, reflection modes, run/step tracking, Agent Task Composer, backlog graduation, notifications, coder process memory, QA store, form system, template generation, describe query layer, drift detection, quick actions, protocol migration, config surface, search bookmarks, project onboarding, web presence.
- `audit-manifest-coverage` function in `reflection/modules/sync.nu`: cross-references Practice nodes, reflection modes, and daemon UI pages against declared capabilities.
- `sync manifest-check` exported command for pre-commit hooks and CI.
- `validate-project.ncl` gains a 6th validation category: `manifest-cov`.
- Pre-commit hook `manifest-coverage`: fires on `.ontology/`, `reflection/modes/`, `reflection/forms/` changes.
- SessionStart hook (`session-context.sh`): shows manifest coverage status at session start.
- Agent consumption mode description updated in manifest.
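The cross-reference at the heart of `audit-manifest-coverage` amounts to a set difference: every observed surface item (Practice node, mode, UI page) should be claimed by some declared capability. A hedged sketch (the real implementation is Nushell in `reflection/modules/sync.nu`):

```rust
use std::collections::HashSet;

// Hedged sketch: report observed surface items not claimed by any capability.
fn uncovered<'a>(observed: &[&'a str], claimed: &[&str]) -> Vec<&'a str> {
    let claimed: HashSet<&str> = claimed.iter().copied().collect();
    observed
        .iter()
        .copied()
        .filter(|s| !claimed.contains(*s))
        .collect()
}
```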
#### Bug fixes

- `find-unclaimed-artifacts` in `sync.nu`: fixed absolute vs relative path comparison for modes and forms. `path relative-to $root` applied before the `starts-with` check. 19 phantom MISSING items resolved.
- `audit-claude` session-hook check: accepts both `.claude/hooks/session-context.sh` and legacy `.claude/ontoref-session-start.sh`.
- `describe mode` now shows guards and converge sections.
- `scan-reflection-mode-dags` JSON output for agents now includes guards, converge, preconditions, postconditions, and step-level cmd/on_error/verify.
#### Infrastructure

- Justfile restructured: `justfiles/build.just`, `justfiles/test.just`, `justfiles/dev.just` added. CI recipes delegated to canonical modules. Manifest justfile convention updated to `Import` system with 5 modules.
- Manifest `claude` baseline declared: `["rust", "nushell", "nickel"]` guidelines, session_hook enabled.
- `.ontology/core.ncl`: `ci-pipelines` Practice node added. `reflection/forms/` added to `reflection-modes` artifact_paths.

#### Migrations

- `0010-manifest-capability-completeness`: consumer projects must declare ≥3 capabilities.
- `0011-manifest-coverage-hooks`: consumer projects must add pre-commit and SessionStart hooks for manifest coverage.

#### Ontology update

| Artifact | Change |
|----------|--------|
| `.ontology/manifest.ncl` | 3 → 19 capabilities; justfile/claude baselines declared; agent consumption mode updated |
| `.ontology/core.ncl` | `ci-pipelines` node; `reflection/forms/` in reflection-modes; adr-011 in adr-lifecycle |
| `.ontology/state.ncl` | protocol-maturity catalyst updated (ADR-011, 19 caps, migrations 0010-0012); self-description-coverage catalyst updated (session 2026-03-30) |
| `adrs/adr-011-mode-guards-and-convergence.ncl` | New ADR: guards and converge extend the mode schema rather than a separate action subsystem |
| Health | 43.2% → 100.0% (31 OK / 0 MISSING / 0 STALE) |

---
### Browser-style panel navigation + repo file routing

Graph, search, and api_catalog pages now share a uniform browser-style navigation model:

README.md
@@ -35,9 +35,9 @@ crates/ Rust implementation — typed struct loaders and mode executo
| Crate | Purpose |
| --- | --- |
| `ontoref-ontology` | `.ontology/` NCL → typed Rust structs: Node, Edge, Dimension, Gate, Membrane. `Node` carries `artifact_paths` and `adrs` (`Vec<String>`, both `serde(default)`). Graph traversal, invariant queries. Zero deps. |
| `ontoref-reflection` | NCL DAG contract executor with guards (pre-flight Block/Warn checks) and convergence loops (RetryFailed/RetryAll). ADR lifecycle, step dep resolution, config seal. `stratum-graph` + `stratum-state` required. |
| `ontoref-daemon` | HTTP UI (11 pages), actor registry, notification barrier, MCP (33 tools), search engine, search bookmarks, SurrealDB, NCL export cache, per-file ontology versioning, annotated API catalog, Agent Task Composer. |
| `ontoref-derive` | Proc-macro crate. `#[onto_api(...)]` annotates HTTP handlers — `description` is optional when a `///` doc comment exists (first line used as fallback). `#[derive(OntologyNode)]` + `#[onto(id, name, paths, description, adrs)]` auto-registers nodes via `inventory::submit!` at link time, merged into `Core` by `merge_contributors()`. `#[derive(ConfigFields)]` + `#[config_section(id, ncl_file)]` registers config struct fields. All three aggregate via `inventory::collect!`. |

`ontoref-daemon` caches `nickel export` results (keyed by path + mtime), reducing full sync
scans from ~2m42s to <30s. The daemon is always optional — every module falls back to direct
@@ -64,7 +64,7 @@ state, ontology, backlog, validation, Q&A, bookmarks, API surface. Representativ
| `ontoref_api_catalog` | Annotated HTTP surface — all routes with auth, actors, params, tags |
| `ontoref_file_versions` | Per-file reload counters — detect which ontology files changed |
| `ontoref_validate_adrs` | Run typed ADR constraint checks; returns pass/fail per constraint |
| `ontoref_validate` | Full project validation: ADRs, content assets, connections, gate consistency, manifest coverage |
| `ontoref_impact` | BFS impact graph from a node, optionally across project connections |
| `ontoref_qa_list` | List Q&A entries with optional filter |
| `ontoref_qa_add` | Append a new Q&A entry to `reflection/qa.ncl` |
@@ -82,7 +82,8 @@ core ontology node IDs — bridging career artifacts into the DAG. Five content/
modes (`draft-application`, `draft-email`, `generate-article`, `update-cv`, `write-cfp`) query
these schemas to ground output in declared project artifacts rather than free-form prose.

**API Catalog** — every HTTP handler carries `#[onto_api(method, path, auth, actors, params, tags)]`.
`description` is sourced from the first `///` doc line above the handler — no duplication with doc comments.
At link time `inventory::submit!` registers each route. `GET /api/catalog` returns the full annotated
surface as JSON. The `/ui/{slug}/api` page renders it with client-side filtering (method, auth, path).
`describe api [--actor] [--tag] [--fmt]` renders the catalog in the CLI. `ontoref_api_catalog` exposes
@@ -143,8 +144,9 @@ NuCmd`) whose result IS the applied state — no state file, fully idempotent. `
migrations with applied/pending status; `migrate pending` lists only what is missing; `migrate show <id>`
renders runtime-interpolated instructions (project_root and project_name auto-detected). NuCmd checks are
valid Nushell (no bash `&&`, `$env.VAR` not `$VAR`). Grep checks targeting ADR files scope to
`adr-[0-9][0-9][0-9]-*.ncl` to exclude schema/template infrastructure files. 12 migrations shipped;
`0012-rust-doc-authoring-pattern` adds the `/// → //! → node description` three-layer doc convention
and optional pre-commit hooks (`docs-links`, `docs-drift`) to consumer `CLAUDE.md`. ([ADR-010](adrs/adr-010-protocol-migration-system.ncl))

**Manifest Self-Interrogation** — `manifest_type` gains three typed arrays that answer self-knowledge
queries agents and operators need on cold start: `capabilities[]` (what the project does, why it was
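The "check result IS the applied state" idea can be sketched as follows. This is a hedged model of the described semantics, not the Nushell implementation; the stand-in fields for grep/command results are assumptions:

```rust
// Hedged sketch: a migration is pending exactly when its idempotent check
// fails; no state file is consulted. Variants mirror the described check tags.
enum Check {
    FileExists(&'static str),
    Grep { found: bool },    // stands in for actually running the pattern search
    NuCmd { exit_code: i32 }, // stands in for running the Nushell expression
}

fn applied(check: &Check, file_exists: impl Fn(&str) -> bool) -> bool {
    match check {
        Check::FileExists(path) => file_exists(path),
        Check::Grep { found } => *found,
        Check::NuCmd { exit_code } => *exit_code == 0,
    }
}

// `migrate pending` filters to migrations whose check fails.
fn pending(check: &Check, file_exists: impl Fn(&str) -> bool) -> bool {
    !applied(check, file_exists)
}
```

Because the check is re-evaluated each run, applying a migration twice is harmless: the second run sees the check pass and reports nothing pending.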
@ -226,11 +228,27 @@ cargo build -p ontoref-ontology # always standalone
|
|||||||
cargo check-all # check all targets + features
|
cargo check-all # check all targets + features
|
||||||
cargo test-all # run full test suite
|
cargo test-all # run full test suite
|
||||||
just ci-lint # clippy + TOML + Nickel + Markdown
|
just ci-lint # clippy + TOML + Nickel + Markdown
|
||||||
|
just ci-docs # rustdoc broken intra-doc link check
|
||||||
just ci-full # all CI checks
|
just ci-full # all CI checks
|
||||||
nu --ide-check 50 reflection/modules/<file>.nu # validate a Nushell module
|
nu --ide-check 50 reflection/modules/<file>.nu # validate a Nushell module
|
||||||
./ontoref --actor developer <mode> # run a reflection mode
|
./ontoref --actor developer <mode> # run a reflection mode
|
||||||
|
./ontoref sync diff --docs # crate //! drift against ontology nodes
|
||||||
|
./ontoref describe workspace # per-crate doc coverage + drift status
|
||||||
```
|
```
|
||||||
+
+### Doc authoring convention
+
+Three canonical layers — no duplication across them:
+
+| Layer | Where | Read by |
+|-------|-------|---------|
+| `///` first line | handlers, structs, types | `#[onto_api]`, `#[derive(OntologyNode)]`, MCP |
+| `//!` first sentence | `lib.rs` | `describe features`, mdBook crates chapter, drift check |
+| node `description` | `.ontology/core.ncl` | UI graph, `describe project`, CLI |
+
+`sync diff --docs --fail-on-drift` (used by the pre-commit `docs-drift` hook) enforces that the `//!`
+first sentence stays aligned with the practice node description (Jaccard ≥ 0.20 threshold).
+
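The drift gate above can be sketched concretely. This is an illustrative Python sketch, not the real `sync diff --docs` implementation (which lives in the project's Nushell/Rust tooling); only the Jaccard metric and the 0.20 threshold come from the text — the tokenization is an assumption.

```python
import re

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the lowercased word sets of two strings."""
    ta = set(re.findall(r"[a-z0-9]+", a.lower()))
    tb = set(re.findall(r"[a-z0-9]+", b.lower()))
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def docs_drift(crate_doc_first_sentence: str, node_description: str,
               threshold: float = 0.20) -> bool:
    """True when the crate's //! first sentence has drifted away from
    the practice node description (similarity below threshold)."""
    return jaccard(crate_doc_first_sentence, node_description) < threshold
```

A `docs-drift` hook built on this would exit non-zero whenever `docs_drift` returns `True` for any crate.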
 ## License
 
 MIT OR Apache-2.0
@@ -56,6 +56,14 @@ d.make_adr {
   ],
 
   constraints = [
+    {
+      id = "protocol-changes-require-migration",
+      claim = "Any change to templates/, reflection/schemas/*.ncl, .claude/CLAUDE.md, or consumer-facing reflection/modes/ that consumer projects need to adopt must be accompanied by a new migration in reflection/migrations/",
+      scope = "all commits that touch templates/, reflection/schemas/, .claude/CLAUDE.md, or consumer-facing reflection/modes/",
+      severity = 'Hard,
+      check = { tag = 'Grep, pattern = "Protocol Evolution", paths = [".claude/CLAUDE.md"], must_be_empty = false },
+      rationale = "Migrations are the sole propagation mechanism for protocol changes. A change without a migration only applies to ontoref itself — consumer projects have no machine-queryable way to discover the change via `migrate pending`.",
+    },
     {
       id = "nucmd-checks-must-be-nushell",
       claim = "NuCmd check cmd fields must be valid Nushell — no bash operators (&&, ||, 2>/dev/null), no $VARNAME (must be $env.VARNAME)",

adrs/adr-011-mode-guards-and-convergence.ncl (new file, 113 lines)
@@ -0,0 +1,113 @@
+let d = import "adr-defaults.ncl" in
+
+d.make_adr {
+  id = "adr-011",
+  title = "Mode Guards and Convergence — Active Partner and Refinement Loop in the Mode Schema",
+  status = 'Accepted,
+  date = "2026-03-30",
+
+  context = "Reflection modes executed procedures as typed DAGs but lacked two capabilities that caused real failures: (1) modes could run against projects in invalid states — missing ontology files, unavailable tools, incomplete manifests — because preconditions were informational text, not executable checks. An agent following the protocol would execute sync-ontology on a project without core.ncl and get an opaque nickel error instead of a clear block. (2) Modes like sync-ontology require iteration — scan, diff, propose, apply, then verify that drift is zero. If drift remained after one pass, the mode reported success and the agent moved on. There was no mechanism for the protocol to say 'keep going until this condition is met'. Both gaps were identified during a systematic comparison against 45 augmented coding patterns (lexler.github.io/augmented-coding-patterns): Active Partner (#1) requires the system to push back on invalid actions, and Refinement Loop (#36) requires iteration until convergence. A separate action subsystem (ext/action) was considered and rejected in favor of extending the existing mode schema.",
+
+  decision = "Extend reflection/schema.ncl with two new optional fields on ModeBase: guards (Array Guard) for pre-flight executable checks, and converge (Converge) for post-execution convergence loops. Guards run before any step and can Block (abort) or Warn (continue with message). Converge evaluates a condition command after all steps complete and re-executes failed or all steps up to max_iterations times. Both are backward-compatible — all existing modes export unchanged with guards defaulting to [] and converge being optional.",
+
+  rationale = [
+    {
+      claim = "Guards formalize the Active Partner pattern: the protocol pushes back before acting",
+      detail = "Guards are executable shell commands with a severity (Block/Warn) and a human-readable reason. Unlike preconditions (informational text), guards run and block. This prevents agents from executing modes against projects in invalid states. The guard reason tells the agent exactly what to fix.",
+    },
+    {
+      claim = "Converge formalizes the Refinement Loop pattern: modes iterate until a condition is met",
+      detail = "Modes like sync-ontology need to iterate: apply changes, re-diff, apply again if drift remains. The converge field lets the mode contract declare this expectation. The executor handles the loop logic — mode authors just declare the condition, max iterations, and retry strategy (RetryFailed or RetryAll).",
+    },
+    {
+      claim = "Extending the mode schema is simpler than creating a separate action subsystem",
+      detail = "The alternative was ext/action — a new NCL schema for action contracts with gates, events, and convergence conditions. This would have created a parallel concept to modes with overlapping responsibilities. Since modes are already the execution abstraction, adding guards and converge to the same schema keeps a single concept for all executable procedures. Consumer projects learn one schema, not two.",
    },
+  ],
+
+  consequences = {
+    positive = [
+      "Agents get clear, early feedback when a mode cannot run (guard reason instead of opaque errors)",
+      "Iterative workflows like sync-ontology converge automatically instead of requiring manual re-execution",
+      "The mode schema remains the single execution abstraction — no competing action/workflow concept",
+      "Backward compatible: all 19 existing modes export unchanged",
+      "describe mode shows guards and converge sections — agents see the full execution contract",
+    ],
+    negative = [
+      "Guard commands add shell execution before steps — latency increases for guarded modes",
+      "Convergence loops can mask underlying issues if max_iterations is set too high — a mode that never converges wastes resources silently",
+      "Mode authors must write correct shell commands for guard checks and convergence conditions — no NCL-level validation of command correctness",
+    ],
+  },
+
+  alternatives_considered = [
+    {
+      option = "ext/action — separate action contract schema with gates, events, and convergence",
+      why_rejected = "Creates a parallel execution abstraction competing with modes. Consumer projects would need to learn both modes and actions, and the boundary between them would be ambiguous. The mode schema already has steps, dependencies, and error strategies — guards and converge are natural extensions of the same concept.",
+    },
+    {
+      option = "Executable preconditions — make existing preconditions[] run commands instead of being text",
+      why_rejected = "Preconditions serve a different purpose: they document what the human should verify. Making them executable would lose the documentation function. Guards are a separate concept: machine-checked pre-flight blocks. Both can coexist — preconditions for humans, guards for machines.",
+    },
+    {
+      option = "External convergence via Vapora workflows — let the orchestrator handle iteration",
+      why_rejected = "Convergence is a property of the mode itself, not of the orchestrator. sync-ontology should declare that it iterates until zero drift regardless of whether it runs via CLI, Vapora, or CI. Pushing this to the orchestrator means every orchestrator must know which modes need iteration and under what conditions — that knowledge belongs in the mode contract.",
+    },
+  ],
+
+  constraints = [
+    {
+      id = "guard-schema-present",
+      claim = "reflection/schema.ncl exports a Guard type with id, cmd, reason, and severity fields",
+      scope = "reflection/schema.ncl",
+      severity = 'Hard,
+      rationale = "The Guard type is the contract that all mode consumers rely on. Removing or renaming its fields breaks all guarded modes and the executor.",
+      check = {
+        tag = 'Grep,
+        pattern = "Guard.*=.*_Guard",
+        paths = ["reflection/schema.ncl"],
+        must_be_empty = false,
+      },
+    },
+    {
+      id = "converge-schema-present",
+      claim = "reflection/schema.ncl exports a Converge type with condition, max_iterations, and strategy fields",
+      scope = "reflection/schema.ncl",
+      severity = 'Hard,
+      rationale = "The Converge type is the contract for iterative modes. Removing it breaks the convergence loop in the executor.",
+      check = {
+        tag = 'Grep,
+        pattern = "Converge.*=.*_Converge",
+        paths = ["reflection/schema.ncl"],
+        must_be_empty = false,
+      },
+    },
+    {
+      id = "executor-guards-implemented",
+      claim = "The mode executor in reflection/nulib/modes.nu evaluates guards before executing steps",
+      scope = "reflection/nulib/modes.nu",
+      severity = 'Hard,
+      rationale = "Guards without executor support are declarations without effect. The executor must run each guard cmd and abort on Block failures.",
+      check = {
+        tag = 'Grep,
+        pattern = "Guards.*Active Partner",
+        paths = ["reflection/nulib/modes.nu"],
+        must_be_empty = false,
+      },
+    },
+  ],
+
+  ontology_check = {
+    decision_string = "Extend mode schema with guards and converge",
+    invariants_at_risk = ["protocol-not-runtime"],
+    verdict = 'RequiresJustification,
+  },
+
+  invariant_justification = {
+    invariant = "protocol-not-runtime",
+    claim = "Guards and converge are declarative NCL fields — the protocol defines what to check and when to iterate, not how to execute. The executor in modes.nu is tooling, not protocol. A consumer project could implement its own executor that reads the same guards and converge declarations.",
+    mitigation = "The schema (reflection/schema.ncl) is protocol. The executor (reflection/nulib/modes.nu) is tooling. Both are clearly separated. A project can use the schema without the executor.",
+  },
+
+  related_adrs = ["adr-002"],
+}
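The guard and converge semantics the ADR declares can be sketched end to end. This is a minimal illustrative Python model of the executor behavior (the real executor is Rust/Nushell); the field names `severity`, `max_iterations`, and the `Block`/`Warn`/`RetryFailed`/`RetryAll` values come from the ADR, while callables standing in for shell commands are an assumption of the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Guard:
    id: str
    check: Callable[[], bool]   # stands in for the guard's shell cmd
    reason: str
    severity: str = "Block"     # "Block" aborts the mode, "Warn" continues

@dataclass
class Mode:
    steps: List[Callable[[], bool]]              # each step returns True on success
    guards: List[Guard] = field(default_factory=list)
    converge_condition: Callable[[], bool] = lambda: True
    max_iterations: int = 1
    strategy: str = "RetryFailed"                # or "RetryAll"

def run_mode(mode: Mode) -> str:
    # Pre-flight guards (Active Partner): run before any step.
    for g in mode.guards:
        if not g.check():
            if g.severity == "Block":
                return f"blocked: {g.reason}"
            print(f"warn: {g.reason}")
    # Convergence loop (Refinement Loop): re-run until the condition holds.
    pending = list(mode.steps)
    for _ in range(mode.max_iterations):
        failed = [s for s in pending if not s()]
        if mode.converge_condition():
            return "converged"
        pending = failed if mode.strategy == "RetryFailed" and failed else list(mode.steps)
    return "max-iterations-reached"
```

With a sync-ontology-shaped mode, a run that still has drift after one pass re-executes and reports `converged` once drift reaches zero within `max_iterations`.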
@@ -440,12 +440,13 @@ struct HealthResponse {
     db_enabled: Option<bool>,
 }
 
-/// Return the full API catalog — all endpoints registered via `#[onto_api]`,
-/// sorted by path then method.
+/// Full catalog of daemon HTTP endpoints with metadata: auth, actors, params,
+/// tags
+///
+/// Returns all entries sorted by path then method.
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/api/catalog",
-    description = "Full catalog of daemon HTTP endpoints with metadata: auth, actors, params, tags",
     auth = "none",
     actors = "agent, developer, ci, admin",
     tags = "meta, catalog"
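The hunk above shows the migration pattern repeated across all 42 handlers: the `description` attribute is dropped and the `///` doc comment's first line takes over. A small sketch of the fallback rule the macro applies — illustrative only, the real logic is a Rust proc-macro; the function name here is hypothetical:

```python
def description_fallback(attr_description, doc_comment_lines):
    """Resolve an endpoint description the way #[onto_api] is described to:
    an explicit description = "..." attribute wins; otherwise fall back to
    the first non-empty /// line above the handler."""
    if attr_description is not None:
        return attr_description
    for line in doc_comment_lines:
        text = line.lstrip("/").strip()   # strip the /// marker and padding
        if text:
            return text
    return ""
```

`#[derive(OntologyNode)]` follows the same rule for node descriptions.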
@@ -455,10 +456,10 @@ async fn api_catalog_handler() -> impl IntoResponse {
     Json(serde_json::json!({ "count": routes.len(), "routes": routes }))
 }
 
+/// Daemon health check: uptime, version, feature flags, active projects
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/health",
-    description = "Daemon health check: uptime, version, feature flags, active projects",
     auth = "none",
     actors = "agent, developer, ci, admin",
     tags = "meta"
@@ -507,10 +508,10 @@ struct ExportResponse {
     elapsed_ms: u64,
 }
 
+/// Export a Nickel file to JSON, using the cache when the file is unchanged
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/nickel/export",
-    description = "Export a Nickel file to JSON, using the cache when the file is unchanged",
     auth = "viewer",
     actors = "developer, agent",
     params = "file:string:required:Absolute path to the .ncl file to export; \
@@ -589,10 +590,10 @@ struct CacheStatsResponse {
     hit_rate: f64,
 }
 
+/// NCL export cache statistics: entry count, hit/miss counters
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/cache/stats",
-    description = "NCL export cache statistics: entry count, hit/miss counters",
     auth = "viewer",
     actors = "developer, admin",
     tags = "cache, meta"
@@ -630,10 +631,10 @@ struct InvalidateResponse {
     entries_remaining: usize,
 }
 
+/// Invalidate one or all NCL cache entries, forcing re-export on next request
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/cache/invalidate",
-    description = "Invalidate one or all NCL cache entries, forcing re-export on next request",
     auth = "admin",
     actors = "developer, admin",
     params = "file:string:optional:Specific file path to invalidate (omit to invalidate all)",
@@ -703,10 +704,10 @@ struct RegisterResponse {
     actors_connected: usize,
 }
 
+/// Register an actor session and receive a bearer token for subsequent calls
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/actors/register",
-    description = "Register an actor session and receive a bearer token for subsequent calls",
     auth = "none",
     actors = "agent, developer, ci",
     params = "actor:string:required:Actor type (agent|developer|ci|admin); \
@@ -747,10 +748,10 @@ async fn actor_register(
     )
 }
 
+/// Deregister an actor session and invalidate its bearer token
 #[ontoref_derive::onto_api(
     method = "DELETE",
     path = "/actors/{token}",
-    description = "Deregister an actor session and invalidate its bearer token",
     auth = "none",
     actors = "agent, developer, ci",
     tags = "actors, auth"
@@ -772,10 +773,11 @@ async fn actor_deregister(State(state): State<AppState>, Path(token): Path<Strin
     }
 }
 
+/// Extend actor session TTL; prevents the session from expiring due to
+/// inactivity
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/actors/{token}/touch",
-    description = "Extend actor session TTL; prevents the session from expiring due to inactivity",
     auth = "none",
     actors = "agent, developer, ci",
     tags = "actors"
@@ -806,10 +808,10 @@ struct ProfileRequest {
     preferences: Option<serde_json::Value>,
 }
 
+/// Update actor profile metadata: display name, role, and custom context fields
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/actors/{token}/profile",
-    description = "Update actor profile metadata: display name, role, and custom context fields",
     auth = "none",
     actors = "agent, developer",
     tags = "actors"
@@ -848,11 +850,11 @@ struct ActorsQuery {
     project: Option<String>,
 }
 
+/// List all registered actor sessions with their last-seen timestamp and
+/// pending notification count
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/actors",
-    description = "List all registered actor sessions with their last-seen timestamp and pending \
-                   notification count",
     auth = "viewer",
     actors = "developer, admin",
     params = "project:string:optional:Filter by project slug",
@@ -892,10 +894,10 @@ struct PendingResponse {
     notifications: Option<Vec<NotificationView>>,
 }
 
+/// Poll pending notifications for an actor; optionally marks them as seen
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/notifications/pending",
-    description = "Poll pending notifications for an actor; optionally marks them as seen",
     auth = "none",
     actors = "agent, developer, ci",
     params = "token:string:required:Actor bearer token; project:string:optional:Project slug \
@@ -943,10 +945,10 @@ struct AckResponse {
     acknowledged: usize,
 }
 
+/// Acknowledge one or more notifications; removes them from the pending queue
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/notifications/ack",
-    description = "Acknowledge one or more notifications; removes them from the pending queue",
     auth = "none",
     actors = "agent, developer, ci",
     params = "token:string:required:Actor bearer token; ids:string:required:Comma-separated \
@@ -995,17 +997,15 @@ struct StreamQuery {
     project: Option<String>,
 }
 
-/// `GET /notifications/stream?token=<actor_token>[&project=<slug>]`
+/// SSE push stream: actor subscribes once and receives notification events as
+/// they occur
 ///
-/// Server-Sent Events stream. Each event is a JSON-serialized
-/// `NotificationView`. Clients receive push notifications without polling.
-/// Reconnects automatically pick up new events (no replay of missed events —
-/// use `/notifications/pending` for that).
+/// Each event is a JSON-serialized `NotificationView`. Clients receive push
+/// notifications without polling. Reconnects automatically pick up new events
+/// (no replay of missed events — use `/notifications/pending` for that).
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/notifications/stream",
-    description = "SSE push stream: actor subscribes once and receives notification events as \
-                   they occur",
     auth = "none",
     actors = "agent, developer",
     params = "token:string:required:Actor bearer token; project:string:optional:Project slug \
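A client of the `/notifications/stream` endpoint above consumes standard Server-Sent Events framing: each event arrives as one or more `data:` lines terminated by a blank line, and each payload is a JSON-serialized `NotificationView`. A minimal parsing sketch under those assumptions (field names in the sample payloads are hypothetical):

```python
import json

def parse_sse(stream_text: str) -> list:
    """Parse a Server-Sent Events body into a list of JSON events.
    Each data: payload is expected to be one JSON-serialized notification."""
    events = []
    data_lines = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:   # blank line terminates an event
            events.append(json.loads("\n".join(data_lines)))
            data_lines = []
    return events
```

Because the stream does not replay missed events, a reconnecting client should first drain `/notifications/pending`, then resume parsing the live stream.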
@@ -1085,7 +1085,8 @@ struct OntologyChangedRequest {
     project: Option<String>,
 }
 
-/// `POST /ontology/changed` — actor-attributed file-change notification.
+/// Git hook endpoint: actor signs a file-change event it caused to suppress
+/// self-notification
 ///
 /// Called by git hooks (post-merge, post-commit) so the daemon knows *who*
 /// caused the change. Creates a notification with `source_actor` set, enabling
@@ -1093,8 +1094,6 @@ struct OntologyChangedRequest {
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/ontology/changed",
-    description = "Git hook endpoint: actor signs a file-change event it caused to suppress \
-                   self-notification",
     auth = "viewer",
     actors = "developer, ci",
     params = "token:string:required:Actor bearer token; files:string:required:JSON array of \
@@ -1190,10 +1189,10 @@ struct SearchResponse {
     results: Vec<crate::search::SearchResult>,
 }
 
+/// Full-text search over ontology nodes, ADRs, practices and Q&A entries
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/search",
-    description = "Full-text search over ontology nodes, ADRs, practices and Q&A entries",
     auth = "none",
     actors = "agent, developer",
     params = "q:string:required:Search query string; slug:string:optional:Project slug (ui \
@@ -1297,11 +1296,11 @@ fn resolve_project_ctx(
     )
 }
 
+/// Project self-description: identity, axioms, tensions, practices, gates,
+/// ADRs, dimensions
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/describe/project",
-    description = "Project self-description: identity, axioms, tensions, practices, gates, ADRs, \
-                   dimensions",
     auth = "none",
     actors = "agent, developer, ci, admin",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1332,11 +1331,11 @@ async fn describe_project(
     }
 }
 
+/// Cross-project connection declarations: upstream, downstream, peers with
+/// addressing
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/describe/connections",
-    description = "Cross-project connection declarations: upstream, downstream, peers with \
-                   addressing",
     auth = "none",
     actors = "agent, developer",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1367,10 +1366,11 @@ async fn describe_connections(
     }
 }
 
+/// Execute typed ADR constraint checks and return per-constraint pass/fail
+/// results
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/validate/adrs",
-    description = "Execute typed ADR constraint checks and return per-constraint pass/fail results",
     auth = "viewer",
     actors = "developer, ci, agent",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1434,11 +1434,11 @@ fn default_depth() -> u32 {
     2
 }
 
+/// BFS impact graph from an ontology node; optionally traverses cross-project
+/// connections
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/graph/impact",
-    description = "BFS impact graph from an ontology node; optionally traverses cross-project \
-                   connections",
     auth = "none",
     actors = "agent, developer",
     params = "node:string:required:Ontology node id to start from; depth:u32:default=2:Max BFS \
@@ -1471,10 +1471,11 @@ async fn graph_impact(
     )
 }
 
+/// Resolve a single ontology node by id from the local cache (used by
+/// federation)
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/graph/node/{id}",
-    description = "Resolve a single ontology node by id from the local cache (used by federation)",
     auth = "none",
     actors = "agent, developer",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1523,11 +1524,11 @@ async fn graph_node(
     }
 }
 
+/// Complete operational context for an actor: identity, axioms, practices,
+/// constraints, gate state, modes, actor policy, connections, content assets
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/describe/guides",
-    description = "Complete operational context for an actor: identity, axioms, practices, \
-                   constraints, gate state, modes, actor policy, connections, content assets",
     auth = "none",
     actors = "agent, developer, ci",
     params = "slug:string:optional:Project slug (defaults to primary); \
@@ -1579,11 +1580,11 @@ async fn describe_guides(
     }
 }
 
+/// Available reflection modes, just recipes, Claude capabilities and CI tools
+/// for the project
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/describe/capabilities",
-    description = "Available reflection modes, just recipes, Claude capabilities and CI tools for \
-                   the project",
     auth = "none",
     actors = "agent, developer, ci",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1677,11 +1678,11 @@ async fn describe_capabilities(
     )
 }
 
+/// Minimal onboarding payload for a new actor session: what to register as and
+/// what to do first
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/describe/actor-init",
-    description = "Minimal onboarding payload for a new actor session: what to register as and \
-                   what to do first",
     auth = "none",
     actors = "agent",
     params = "actor:string:optional:Actor type to onboard as; slug:string:optional:Project slug",
@@ -1745,10 +1746,10 @@ struct AdrQuery {
     slug: Option<String>,
 }
 
+/// Read a single ADR by id, exported from NCL as structured JSON
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/adr/{id}",
-    description = "Read a single ADR by id, exported from NCL as structured JSON",
     auth = "none",
     actors = "agent, developer",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1810,11 +1811,11 @@ struct FileQuery {
     slug: Option<String>,
 }
 
+/// Read a project-relative file and return its content as text (for UI artifact
+/// links)
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/file",
-    description = "Read a project-relative file and return its content as text (for UI artifact \
-                   links)",
     auth = "none",
     actors = "developer",
     params = "path:string:required:Project-relative file path, slug:string:optional:Project slug",
@@ -1884,10 +1885,10 @@ struct OntologyQuery {
     slug: Option<String>,
 }
 
+/// List available ontology extension files beyond core, state, gate, manifest
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/ontology",
-    description = "List available ontology extension files beyond core, state, gate, manifest",
     auth = "none",
     actors = "agent, developer",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1934,10 +1935,10 @@ async fn list_ontology_extensions(
     )
 }
 
+/// Export a specific ontology extension file to JSON
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/ontology/{file}",
-    description = "Export a specific ontology extension file to JSON",
     auth = "none",
     actors = "agent, developer",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -1984,10 +1985,10 @@ async fn get_ontology_extension(
     }
 }
 
+/// Export the project backlog as structured JSON from reflection/backlog.ncl
 #[ontoref_derive::onto_api(
     method = "GET",
     path = "/backlog-json",
-    description = "Export the project backlog as structured JSON from reflection/backlog.ncl",
     auth = "viewer",
     actors = "developer, agent",
     params = "slug:string:optional:Project slug (defaults to primary)",
@@ -2037,11 +2038,11 @@ struct BacklogProposeRequest {
     pub slug: Option<String>,
 }
 
+/// Propose a status change for a backlog item. Creates a backlog_review
|
||||||
|
/// notification that requires admin approval via the UI or CLI.
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "POST",
|
method = "POST",
|
||||||
path = "/backlog/propose-status",
|
path = "/backlog/propose-status",
|
||||||
description = "Propose a status change for a backlog item. Creates a backlog_review \
|
|
||||||
notification that requires admin approval via the UI or CLI.",
|
|
||||||
auth = "viewer",
|
auth = "viewer",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "id:string:required:Backlog item id (e.g. bl-001) | \
|
params = "id:string:required:Backlog item id (e.g. bl-001) | \
|
||||||
@ -2133,10 +2134,10 @@ async fn backlog_propose_status(
|
|||||||
|
|
||||||
// ── Q&A endpoints ───────────────────────────────────────────────────────
|
// ── Q&A endpoints ───────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
/// Export the Q&A knowledge store as structured JSON from reflection/qa.ncl
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/qa-json",
|
path = "/qa-json",
|
||||||
description = "Export the Q&A knowledge store as structured JSON from reflection/qa.ncl",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:optional:Project slug (defaults to primary)",
|
params = "slug:string:optional:Project slug (defaults to primary)",
|
||||||
@ -2187,10 +2188,10 @@ struct ProjectView {
|
|||||||
ontology_version: u64,
|
ontology_version: u64,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// List all registered projects with slug, root, push_only flag and import path
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects",
|
path = "/projects",
|
||||||
description = "List all registered projects with slug, root, push_only flag and import path",
|
|
||||||
auth = "admin",
|
auth = "admin",
|
||||||
actors = "admin",
|
actors = "admin",
|
||||||
tags = "projects, registry"
|
tags = "projects, registry"
|
||||||
@ -2239,10 +2240,10 @@ fn validate_slug(slug: &str) -> std::result::Result<(), (StatusCode, String)> {
|
|||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Register a new project at runtime without daemon restart
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "POST",
|
method = "POST",
|
||||||
path = "/projects",
|
path = "/projects",
|
||||||
description = "Register a new project at runtime without daemon restart",
|
|
||||||
auth = "admin",
|
auth = "admin",
|
||||||
actors = "admin",
|
actors = "admin",
|
||||||
tags = "projects, registry"
|
tags = "projects, registry"
|
||||||
@ -2282,10 +2283,10 @@ async fn project_add(
|
|||||||
.into_response()
|
.into_response()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Deregister a project and stop its file watcher
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "DELETE",
|
method = "DELETE",
|
||||||
path = "/projects/{slug}",
|
path = "/projects/{slug}",
|
||||||
description = "Deregister a project and stop its file watcher",
|
|
||||||
auth = "admin",
|
auth = "admin",
|
||||||
actors = "admin",
|
actors = "admin",
|
||||||
tags = "projects, registry"
|
tags = "projects, registry"
|
||||||
@ -2327,7 +2328,8 @@ struct UpdateKeysResponse {
|
|||||||
keys_active: usize,
|
keys_active: usize,
|
||||||
}
|
}
|
||||||
|
|
||||||
/// `PUT /projects/{slug}/keys` — replace the key set for a registered project.
|
/// Hot-rotate credentials for a project; invalidates all existing actor and UI
|
||||||
|
/// sessions
|
||||||
///
|
///
|
||||||
/// Auth:
|
/// Auth:
|
||||||
/// - If the project has existing keys, requires `Authorization: Bearer
|
/// - If the project has existing keys, requires `Authorization: Bearer
|
||||||
@ -2339,8 +2341,6 @@ struct UpdateKeysResponse {
|
|||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "PUT",
|
method = "PUT",
|
||||||
path = "/projects/{slug}/keys",
|
path = "/projects/{slug}/keys",
|
||||||
description = "Hot-rotate credentials for a project; invalidates all existing actor and UI \
|
|
||||||
sessions",
|
|
||||||
auth = "admin",
|
auth = "admin",
|
||||||
actors = "admin",
|
actors = "admin",
|
||||||
tags = "projects, auth"
|
tags = "projects, auth"
|
||||||
@ -2432,11 +2432,11 @@ async fn project_update_keys(
|
|||||||
.into_response()
|
.into_response()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Per-file ontology change counters for a project; incremented on every cache
|
||||||
|
/// invalidation
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/ontology/versions",
|
path = "/projects/{slug}/ontology/versions",
|
||||||
description = "Per-file ontology change counters for a project; incremented on every cache \
|
|
||||||
invalidation",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
tags = "projects, ontology, cache"
|
tags = "projects, ontology, cache"
|
||||||
@ -2507,10 +2507,11 @@ macro_rules! require_config_surface {
|
|||||||
};
|
};
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Full config export for a registered project (merged with any active
|
||||||
|
/// overrides)
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/config",
|
path = "/projects/{slug}/config",
|
||||||
description = "Full config export for a registered project (merged with any active overrides)",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug",
|
params = "slug:string:required:Project slug",
|
||||||
@ -2535,11 +2536,11 @@ async fn project_config(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Config surface schema: sections with descriptions, rationales, contracts,
|
||||||
|
/// and declared consumers
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/config/schema",
|
path = "/projects/{slug}/config/schema",
|
||||||
description = "Config surface schema: sections with descriptions, rationales, contracts, and \
|
|
||||||
declared consumers",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug",
|
params = "slug:string:required:Project slug",
|
||||||
@ -2585,10 +2586,10 @@ async fn project_config_schema(
|
|||||||
.into_response()
|
.into_response()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Values for a single config section (from the merged NCL export)
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/config/{section}",
|
path = "/projects/{slug}/config/{section}",
|
||||||
description = "Values for a single config section (from the merged NCL export)",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug; section:string:required:Section id",
|
params = "slug:string:required:Project slug; section:string:required:Section id",
|
||||||
@ -2628,11 +2629,11 @@ async fn project_config_section(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Multi-consumer coherence report: unclaimed NCL fields, consumer field
|
||||||
|
/// mismatches
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/config/coherence",
|
path = "/projects/{slug}/config/coherence",
|
||||||
description = "Multi-consumer coherence report: unclaimed NCL fields, consumer field \
|
|
||||||
mismatches",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug; section:string:optional:Filter to one section",
|
params = "slug:string:required:Project slug; section:string:optional:Filter to one section",
|
||||||
@ -2662,11 +2663,11 @@ async fn project_config_coherence(
|
|||||||
Json(serde_json::to_value(&report).unwrap_or(serde_json::Value::Null)).into_response()
|
Json(serde_json::to_value(&report).unwrap_or(serde_json::Value::Null)).into_response()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Generated config documentation with rationales, override history, and
|
||||||
|
/// coherence status
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/projects/{slug}/config/quickref",
|
path = "/projects/{slug}/config/quickref",
|
||||||
description = "Generated config documentation with rationales, override history, and \
|
|
||||||
coherence status",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug; section:string:optional:Filter to one section; \
|
params = "slug:string:required:Project slug; section:string:optional:Filter to one section; \
|
||||||
@ -2714,11 +2715,11 @@ fn index_section_fields(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Compare config surfaces across all registered projects: shared values,
|
||||||
|
/// conflicts, coverage gaps
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "GET",
|
method = "GET",
|
||||||
path = "/config/cross-project",
|
path = "/config/cross-project",
|
||||||
description = "Compare config surfaces across all registered projects: shared values, \
|
|
||||||
conflicts, coverage gaps",
|
|
||||||
auth = "none",
|
auth = "none",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
tags = "config"
|
tags = "config"
|
||||||
@ -2871,11 +2872,11 @@ fn default_dry_run() -> bool {
|
|||||||
true
|
true
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Mutate a config section via the override layer. dry_run=true (default)
|
||||||
|
/// returns the proposed change without writing.
|
||||||
#[ontoref_derive::onto_api(
|
#[ontoref_derive::onto_api(
|
||||||
method = "PUT",
|
method = "PUT",
|
||||||
path = "/projects/{slug}/config/{section}",
|
path = "/projects/{slug}/config/{section}",
|
||||||
description = "Mutate a config section via the override layer. dry_run=true (default) returns \
|
|
||||||
the proposed change without writing.",
|
|
||||||
auth = "admin",
|
auth = "admin",
|
||||||
actors = "agent, developer",
|
actors = "agent, developer",
|
||||||
params = "slug:string:required:Project slug; section:string:required:Section id",
|
params = "slug:string:required:Project slug; section:string:required:Section id",
|
||||||
|
|||||||
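The `params` strings in the hunks above follow a positional colon-delimited mini-format: `name:type:required|optional:description`. The daemon's actual parser is not part of this diff; a std-only sketch of how one entry could be decoded (the `ParamSpec`/`parse_param` names are hypothetical, and the multi-entry separator — `;`, `,`, or `|` in the examples above — is handled by the caller):

```rust
/// One entry of a `params = "name:type:required|optional:description"` spec.
#[derive(Debug, PartialEq)]
struct ParamSpec {
    name: String,
    ty: String,
    required: bool,
    description: String,
}

/// Parse a single colon-delimited param entry; the description may itself
/// contain colons, so only the first three colons act as separators.
fn parse_param(entry: &str) -> Option<ParamSpec> {
    let mut parts = entry.trim().splitn(4, ':');
    let name = parts.next()?.trim().to_owned();
    let ty = parts.next()?.trim().to_owned();
    let required = match parts.next()?.trim() {
        "required" => true,
        "optional" => false,
        _ => return None, // malformed requiredness marker
    };
    let description = parts.next().unwrap_or("").trim().to_owned();
    Some(ParamSpec { name, ty, required, description })
}

fn main() {
    let p = parse_param("slug:string:optional:Project slug (defaults to primary)").unwrap();
    println!("{} required={} — {}", p.name, p.required, p.description);
}
```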
@@ -1,3 +1,15 @@
+//! HTTP daemon for NCL export caching, file watching, actor registry, and MCP
+//! surface.
+//!
+//! Provides a long-running axum server that caches `nickel export` results,
+//! watches `.ontology/` for changes, maintains an actor session registry with
+//! bearer-token auth (ADR-005), serves the annotated API catalog built from
+//! `#[onto_api]` proc-macro registrations (ADR-007), and exposes an MCP tool
+//! surface for Claude Code integration.
+//!
+//! Feature flags: `db` (stratum-db persistence), `nats` (event publishing),
+//! `ui` (Tera templates + static assets), `mcp` (rmcp tool server).
+
 pub mod actors;
 pub mod api;
 pub mod api_catalog;
@@ -21,7 +21,10 @@ pub struct SyncPayload {
     pub gate: Option<serde_json::Value>,
 }

-/// `POST /sync` — accept a pre-exported NCL payload and upsert into DB.
+/// Push-based sync — remote projects POST their pre-exported NCL payload to
+/// upsert into the daemon cache.
+///
+/// `POST /sync`
 ///
 /// Authentication:
 /// - Looks up the project by `payload.slug` in the registry.
@@ -35,8 +38,6 @@ pub struct SyncPayload {
 #[ontoref_derive::onto_api(
     method = "POST",
     path = "/sync",
-    description = "Push-based sync: remote projects POST their NCL export JSON here to update the \
-                   daemon cache",
     auth = "viewer",
     actors = "ci, agent",
     params = "slug:string:required:Project slug from Authorization header context",
@@ -1,8 +1,17 @@
+//! Proc-macro crate for the ontoref protocol.
+//!
+//! Provides `#[onto_api(...)]` — an attribute macro for daemon HTTP handler
+//! functions that registers each endpoint in the `api_catalog` at link time
+//! via `inventory::submit!`. Metadata declared in the attribute (auth level,
+//! actor set, tags, params) is exported to `artifacts/api-catalog-*.ncl` by
+//! `just export-api-catalog`, making the full HTTP surface queryable as typed
+//! NCL without requiring a running daemon (ADR-007).
+
 use proc_macro::TokenStream;
 use proc_macro2::Span;
 use quote::quote;
 use syn::{
-    parse_macro_input, punctuated::Punctuated, DeriveInput, Expr, ExprLit, Lit, LitStr,
+    parse_macro_input, punctuated::Punctuated, DeriveInput, Expr, ExprLit, ItemFn, Lit, LitStr,
     MetaNameValue, Token,
 };

@@ -17,7 +26,9 @@ use syn::{
 /// # Required keys
 /// - `method = "GET"` — HTTP verb
 /// - `path = "/graph/impact"` — URL path pattern (axum syntax)
-/// - `description = "..."` — one-line description of what the endpoint does
+/// - `description = "..."` — one-line description (optional if a `///` doc
+///   comment is present; explicit attribute value takes priority over the doc
+///   comment)
 ///
 /// # Optional keys
 /// - `auth = "none"` — authentication level: "none" | "viewer" | "admin"
@@ -70,6 +81,12 @@ struct OntoApiParam {
 fn expand_onto_api(args: TokenStream, input: TokenStream) -> syn::Result<proc_macro2::TokenStream> {
     let item = proc_macro2::TokenStream::from(input);

+    // Extract first non-empty `///` doc comment from the annotated function.
+    // `/// text` compiles to `#[doc = " text"]` before the macro sees it.
+    let doc_desc: Option<String> = syn::parse2::<ItemFn>(item.clone())
+        .ok()
+        .and_then(|fn_item| fn_item.attrs.into_iter().find_map(doc_attr_text));
+
     let kv_args = syn::parse::Parser::parse(
         Punctuated::<MetaNameValue, Token![,]>::parse_terminated,
         args,
@@ -126,10 +143,10 @@ fn expand_onto_api(args: TokenStream, input: TokenStream) -> syn::Result<proc_ma
     })?;
     let path = path
         .ok_or_else(|| syn::Error::new(Span::call_site(), "#[onto_api] requires path = \"...\""))?;
-    let desc = description.ok_or_else(|| {
+    let desc = description.or(doc_desc).ok_or_else(|| {
         syn::Error::new(
             Span::call_site(),
-            "#[onto_api] requires description = \"...\"",
+            "#[onto_api] requires description = \"...\" or a /// doc comment on the function",
         )
     })?;

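The new `description.or(doc_desc)` chain above encodes the precedence rule: an explicit `description = "..."` attribute wins, the first `///` doc line is the fallback, and having neither is a compile error. Isolated from the proc-macro machinery, the same logic can be sketched with a hypothetical std-only helper:

```rust
/// Resolve an endpoint description: explicit `description = "..."` wins,
/// then the first `///` doc line, otherwise an error — mirroring the macro's
/// `description.or(doc_desc).ok_or_else(...)` chain.
fn resolve_description(
    explicit: Option<&str>,
    doc_comment: Option<&str>,
) -> Result<String, &'static str> {
    explicit
        .or(doc_comment)
        .map(str::to_owned)
        .ok_or("requires description = \"...\" or a /// doc comment")
}

fn main() {
    // Explicit attribute takes priority over the doc comment.
    assert_eq!(
        resolve_description(Some("from attribute"), Some("from doc")).unwrap(),
        "from attribute"
    );
    // Doc comment fills in when the attribute is omitted.
    assert_eq!(resolve_description(None, Some("from doc")).unwrap(), "from doc");
    // Neither present: hard error, as in the macro.
    assert!(resolve_description(None, None).is_err());
    println!("ok");
}
```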
@@ -256,10 +273,12 @@ fn emit_onto_api(attr: OntoApiAttr, item: proc_macro2::TokenStream) -> proc_macr
 #[derive(Default)]
 struct OntoAttr {
     id: Option<String>,
+    name: Option<String>,
     level: Option<String>,
     pole: Option<String>,
     description: Option<String>,
     adrs: Vec<String>,
+    paths: Vec<String>,
     invariant: Option<bool>,
 }

@@ -277,11 +296,12 @@ fn parse_onto_attr(attr: &syn::Attribute) -> syn::Result<OntoAttr> {
             .to_string();

         match key.as_str() {
-            "id" | "level" | "pole" | "description" => {
+            "id" | "name" | "level" | "pole" | "description" => {
                 let s = lit_str(&kv.value)
                     .ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
                 match key.as_str() {
                     "id" => out.id = Some(s),
+                    "name" => out.name = Some(s),
                     "level" => out.level = Some(s),
                     "pole" => out.pole = Some(s),
                     "description" => out.description = Some(s),
@@ -292,7 +312,17 @@ fn parse_onto_attr(attr: &syn::Attribute) -> syn::Result<OntoAttr> {
                 // adrs = "adr-001, adr-002" — comma-separated list in a single string
                 let s = lit_str(&kv.value)
                     .ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
-                out.adrs = s.split(',').map(|a| a.trim().to_owned()).collect();
+                out.adrs.extend(s.split(',').map(|a| a.trim().to_owned()));
+            }
+            "paths" => {
+                // paths = "crates/foo/, docs/foo.md" — comma-separated artifact paths
+                let s = lit_str(&kv.value)
+                    .ok_or_else(|| syn::Error::new_spanned(&kv.value, "expected string literal"))?;
+                out.paths.extend(
+                    s.split(',')
+                        .map(|p| p.trim().to_owned())
+                        .filter(|p| !p.is_empty()),
+                );
             }
             "invariant" => {
                 out.invariant =
@@ -303,7 +333,10 @@ fn parse_onto_attr(attr: &syn::Attribute) -> syn::Result<OntoAttr> {
             other => {
                 return Err(syn::Error::new_spanned(
                     &kv.path,
-                    format!("unknown onto key: {other}"),
+                    format!(
+                        "unknown onto key: {other}; expected id, name, level, pole, description, \
+                         adrs, paths, invariant"
+                    ),
                 ));
             }
         }
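Note that `adrs` and `paths` now accumulate across repeated `#[onto]` attributes (`extend`) instead of overwriting on each occurrence (`collect` into a fresh `Vec`). The split/trim/filter idiom can be exercised on its own with a hypothetical `extend_csv` helper:

```rust
/// Append entries from one comma-separated attribute value, trimming
/// whitespace and dropping empties — repeated calls accumulate, matching the
/// `out.paths.extend(...)` behaviour in the macro above.
fn extend_csv(acc: &mut Vec<String>, value: &str) {
    acc.extend(
        value
            .split(',')
            .map(|p| p.trim().to_owned())
            .filter(|p| !p.is_empty()),
    );
}

fn main() {
    let mut paths = Vec::new();
    // Two #[onto(paths = "...")] attributes on the same type merge, and the
    // trailing comma produces no empty entry.
    extend_csv(&mut paths, "crates/foo/, docs/foo.md");
    extend_csv(&mut paths, " crates/bar/ ,");
    assert_eq!(paths, ["crates/foo/", "docs/foo.md", "crates/bar/"]);
    println!("{paths:?}");
}
```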
@@ -311,6 +344,29 @@ fn parse_onto_attr(attr: &syn::Attribute) -> syn::Result<OntoAttr> {
     Ok(out)
 }

+/// Extract the text of a `#[doc = "..."]` attribute, or `None` if it is empty
+/// or not a doc attribute.
+fn doc_attr_text(attr: syn::Attribute) -> Option<String> {
+    if !attr.path().is_ident("doc") {
+        return None;
+    }
+    let syn::Meta::NameValue(mnv) = attr.meta else {
+        return None;
+    };
+    let Expr::Lit(ExprLit {
+        lit: Lit::Str(s), ..
+    }) = mnv.value
+    else {
+        return None;
+    };
+    let t = s.value().trim().to_owned();
+    if t.is_empty() {
+        None
+    } else {
+        Some(t)
+    }
+}
+
 fn lit_str(expr: &Expr) -> Option<String> {
     if let Expr::Lit(ExprLit {
         lit: Lit::Str(s), ..
@@ -336,8 +392,8 @@ fn lit_bool(expr: &Expr) -> Option<bool> {
 // ── #[derive(OntologyNode)]
 // ───────────────────────────────────────────────────

-/// Derive macro that registers a Rust type as an
-/// [`ontoref_ontology::NodeContribution`].
+/// Derive macro that registers a Rust type as a
+/// `NodeContribution` (see `ontoref_ontology::contrib::NodeContribution`).
 ///
 /// The `#[onto(...)]` attribute declares the node's identity in the ontology
 /// DAG. All `#[onto]` helper attributes on the type are merged in declaration
@@ -345,25 +401,27 @@ fn lit_bool(expr: &Expr) -> Option<bool> {
 ///
 /// # Required attributes
 /// - `id = "my-node-id"` — unique node identifier (must match NCL convention)
-/// - `level = "Practice"` — [`AbstractionLevel`] variant name
-/// - `pole = "Yang"` — [`Pole`] variant name
+/// - `level = "Practice"` — `AbstractionLevel` variant name
+/// - `pole = "Yang"` — `Pole` variant name
 ///
 /// # Optional attributes
-/// - `description = "..."` — human-readable description
-/// - `adrs = "adr-001, adr-002"` — comma-separated ADR references
+/// - `name = "Human Name"` — display name (defaults to `id` if absent)
+/// - `description = "..."` — one-line description; omit to fall back to the
+///   `///` doc comment on the type
+/// - `adrs = "adr-001, adr-002"` — comma-separated ADR references (accumulates
+///   across multiple `#[onto]` attributes)
+/// - `paths = "crates/foo/, docs/bar.md"` — comma-separated artifact paths
+///   (accumulates across multiple `#[onto]` attributes)
 /// - `invariant = true` — mark node as invariant (default: false)
 ///
 /// # Example
 /// ```ignore
+/// /// Caches nickel export results to avoid re-eval on unchanged files.
 /// #[derive(OntologyNode)]
-/// #[onto(id = "ncl-cache", level = "Practice", pole = "Yang")]
-/// #[onto(description = "Caches NCL exports to avoid re-eval on unchanged files")]
-/// #[onto(adrs = "adr-002, adr-004")]
+/// #[onto(id = "ncl-cache", name = "NCL Cache", level = "Practice", pole = "Yang")]
+/// #[onto(adrs = "adr-002, adr-004", paths = "crates/ontoref-daemon/src/cache.rs")]
 /// pub struct NclCache { /* ... */ }
 /// ```
-///
-/// [`AbstractionLevel`]: ontoref_ontology::AbstractionLevel
-/// [`Pole`]: ontoref_ontology::Pole
 #[proc_macro_derive(OntologyNode, attributes(onto))]
 pub fn derive_ontology_node(input: TokenStream) -> TokenStream {
     let ast = parse_macro_input!(input as DeriveInput);
@@ -394,6 +452,10 @@ fn expand_ontology_node(ast: DeriveInput) -> syn::Result<proc_macro2::TokenStrea
             merged.invariant = parsed.invariant;
         }
         merged.adrs.extend(parsed.adrs);
+        merged.paths.extend(parsed.paths);
+        if parsed.name.is_some() {
+            merged.name = parsed.name;
+        }
     }

     let id = merged.id.ok_or_else(|| {
@@ -445,7 +507,14 @@ fn expand_ontology_node(ast: DeriveInput) -> syn::Result<proc_macro2::TokenStrea
         }
     };

-    let description = merged.description.as_deref().unwrap_or("");
+    // description: explicit attribute wins; fall back to /// doc comment on the
+    // type.
+    let doc_desc_type: Option<String> = ast.attrs.iter().cloned().find_map(doc_attr_text);
+    let description = merged.description.or(doc_desc_type).unwrap_or_default();
+
+    // name: explicit attribute wins; fall back to id.
+    let name = merged.name.unwrap_or_else(|| id.clone());
+
     let invariant = merged.invariant.unwrap_or(false);
     let adrs: Vec<LitStr> = merged
         .adrs
@@ -453,10 +522,16 @@ fn expand_ontology_node(ast: DeriveInput) -> syn::Result<proc_macro2::TokenStrea
         .filter(|s| !s.is_empty())
         .map(|s| LitStr::new(s, Span::call_site()))
         .collect();
+    let path_lits: Vec<LitStr> = merged
+        .paths
+        .iter()
+        .filter(|s| !s.is_empty())
+        .map(|s| LitStr::new(s, Span::call_site()))
+        .collect();

     let id_lit = LitStr::new(&id, Span::call_site());
-    let id_lit2 = id_lit.clone();
-    let description_lit = LitStr::new(description, Span::call_site());
+    let name_lit = LitStr::new(&name, Span::call_site());
+    let description_lit = LitStr::new(&description, Span::call_site());

     // Derive a unique identifier for the inventory submission from the type name.
     let type_name = &ast.ident;
@@ -472,12 +547,12 @@ fn expand_ontology_node(ast: DeriveInput) -> syn::Result<proc_macro2::TokenStrea
         pub fn ontology_node() -> ::ontoref_ontology::Node {
             ::ontoref_ontology::Node {
                 id: #id_lit.to_owned(),
-                name: #id_lit2.to_owned(),
+                name: #name_lit.to_owned(),
                 pole: #pole_variant,
                 level: #level_variant,
                 description: #description_lit.to_owned(),
                 invariant: #invariant,
-                artifact_paths: vec![],
+                artifact_paths: vec![#(#path_lits.to_owned()),*],
                 adrs: vec![#(#adrs.to_owned()),*],
             }
         }
@@ -11,9 +11,9 @@ pub struct ApiParam {

 /// Static metadata for an HTTP endpoint.
 ///
-/// Registered at link time via [`inventory::submit!`] — generated by
+/// Registered at link time via `inventory::submit!` — generated by
 /// `#[onto_api(...)]` proc-macro attribute on each handler function.
-/// Collected via [`inventory::iter::<ApiRouteEntry>()`].
+/// Collected via `inventory::iter::<ApiRouteEntry>()`.
 #[derive(serde::Serialize, Clone)]
 pub struct ApiRouteEntry {
     pub method: &'static str,

@@ -3,7 +3,8 @@
 /// Crates that derive `OntologyNode` emit an
 /// `inventory::submit!(NodeContribution { ... })` call, which is collected
 /// here. All submissions are merged into [`Core`] via
-/// [`Core::merge_contributors`] — NCL-loaded nodes always win on id collision.
+/// [`Core::merge_contributors`](crate::ontology::Core::merge_contributors) —
+/// NCL-loaded nodes always win on id collision.
 ///
 /// [`Core`]: crate::ontology::Core
 pub struct NodeContribution {
@@ -1,3 +1,13 @@
+//! Load and query `.ontology/*.ncl` files as typed Rust structs.
+//!
+//! Provides [`ontology::Core`], [`ontology::State`], and [`ontology::Gate`] for
+//! ecosystem-level introspection of a project's self-knowledge graph. Each node
+//! carries `artifact_paths` and `adrs` fields (`serde(default)`) for
+//! zero-migration backward compatibility across protocol versions.
+//!
+//! This crate has zero stratumiops dependencies (ADR-001) and is the minimal
+//! adoption surface for any project consuming the ontoref protocol.
+
 pub mod error;
 pub mod ontology;
 pub mod types;
@@ -88,11 +88,16 @@ impl Core {
             .map(|(i, n)| (n.id.clone(), i))
             .collect();

-        Ok(Self {
+        let mut core = Self {
             nodes: cfg.nodes,
             edges: cfg.edges,
             by_id,
-        })
+        };
+        // Merge any NodeContribution registrations from #[derive(OntologyNode)].
+        // NCL-loaded nodes win on id collision — contributor nodes only fill gaps.
+        #[cfg(feature = "derive")]
+        core.merge_contributors();
+        Ok(core)
     }

     #[deprecated(note = "use Core::from_value() with daemon-provided JSON instead")]
@@ -10,7 +10,7 @@ use tracing::{info, warn};
 use crate::{
     dag,
     error::ReflectionError,
-    mode::{ActionStep, Actor, ErrorStrategy, ReflectionMode},
+    mode::{ActionStep, Actor, ErrorStrategy, GuardSeverity, ReflectionMode},
 };

 /// Context provided to a mode execution run.
@@ -43,6 +43,33 @@ impl ReflectionMode {
         self.validate()
             .with_context(|| format!("mode '{}' failed pre-execution DAG validation", self.id))?;

+        // ── Guards (Active Partner) ──
+        for guard in &self.guards {
+            let result = run_cmd(&guard.cmd).await;
+            match (&guard.severity, result) {
+                (_, Ok(())) => {
+                    info!(guard = %guard.id, mode = %self.id, "guard passed");
+                }
+                (GuardSeverity::Block, Err(e)) => {
+                    return Err(anyhow!(
+                        "mode '{}' blocked by guard '{}': {} — {}",
+                        self.id,
+                        guard.id,
+                        guard.reason,
+                        e
+                    ));
+                }
+                (GuardSeverity::Warn, Err(_)) => {
+                    warn!(
+                        guard = %guard.id,
+                        mode = %self.id,
+                        reason = %guard.reason,
+                        "guard warning — continuing execution"
+                    );
+                }
+            }
+        }
+
         let trigger_subject = format!("ecosystem.reflection.{}.{}", self.id, ctx.project);
         let trigger_payload = serde_json::to_value(&ctx.params)
             .context("serializing RunContext.params as trigger payload")?;
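The guard loop in this hunk reduces to a three-row decision table. A standalone sketch of that logic, with illustrative enum and function names rather than the crate's actual API:

```rust
// Hypothetical mirror of the guard semantics above: a guard whose command
// succeeds always passes; a failing Block guard aborts the mode, while a
// failing Warn guard only logs and lets execution continue.
#[derive(Debug, Clone, PartialEq)]
enum GuardSeverity {
    Block,
    Warn,
}

#[derive(Debug, PartialEq)]
enum GuardVerdict {
    Pass,
    Abort,
    Continue,
}

fn evaluate_guard(severity: &GuardSeverity, cmd_succeeded: bool) -> GuardVerdict {
    match (severity, cmd_succeeded) {
        (_, true) => GuardVerdict::Pass,
        (GuardSeverity::Block, false) => GuardVerdict::Abort,
        (GuardSeverity::Warn, false) => GuardVerdict::Continue,
    }
}
```

Matching on the `(severity, result)` pair keeps the table exhaustive, so adding a new severity variant forces every call site to decide its failure behavior.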
@@ -54,21 +81,37 @@ impl ReflectionMode {
             .await
             .context("creating PipelineRun in state tracker")?;

-        let layers = dag::topological_layers(&self.steps)
-            .with_context(|| format!("computing execution layers for mode '{}'", self.id))?;
+        // ── Step execution with convergence loop ──
+        let max_iterations = self
+            .converge
+            .as_ref()
+            .map_or(1, |c| (c.max_iterations.max(1) + 1) as usize);

-        let step_index: HashMap<&str, &ActionStep> =
-            self.steps.iter().map(|s| (s.id.as_str(), s)).collect();
+        for iteration in 0..max_iterations {
+            let had_failure = run_all_layers(
+                &self.steps,
+                ctx,
+                &run_id,
+                &self.id,
+                &self.converge,
+                iteration,
+                max_iterations,
+            )
+            .await?;

-        for layer in &layers {
-            let failure = run_layer(layer, &step_index, ctx, &run_id, &self.id).await?;
+            if self.converge.is_none() {
+                break;
+            }

-            if let Some(e) = failure {
-                ctx.state
-                    .update_status(&run_id, PipelineStatus::Failed)
-                    .await
-                    .context("updating pipeline status to Failed")?;
-                return Err(e);
+            let converge = self.converge.as_ref().expect("checked above");
+            if iteration != 0 && !had_failure {
+                break;
+            }
+
+            match check_convergence(converge, &self.id, iteration, max_iterations).await {
+                ConvergeOutcome::Converged => break,
+                ConvergeOutcome::Iterate => continue,
+                ConvergeOutcome::Exhausted => break,
             }
         }
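The iteration budget computed at the top of this hunk can be isolated as a pure function. A sketch, with an illustrative name:

```rust
// Sketch of the budget above: no converge block means a single pass; with
// one, the budget is the initial pass plus max_iterations retries, and
// max_iterations is clamped to at least 1 before adding the initial pass.
fn iteration_budget(converge_max_iterations: Option<u32>) -> usize {
    converge_max_iterations.map_or(1, |m| (m.max(1) + 1) as usize)
}
```

Note that a declared `max_iterations` of 0 still yields a budget of 2, because the clamp runs before the `+ 1` for the initial pass.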
@@ -87,6 +130,76 @@ impl ReflectionMode {
     }
 }

+enum ConvergeOutcome {
+    Converged,
+    Iterate,
+    Exhausted,
+}
+
+async fn check_convergence(
+    converge: &crate::mode::Converge,
+    mode_id: &str,
+    iteration: usize,
+    max_iterations: usize,
+) -> ConvergeOutcome {
+    match run_cmd(&converge.condition).await {
+        Ok(()) => {
+            info!(mode = %mode_id, iteration = iteration + 1, "convergence condition met");
+            ConvergeOutcome::Converged
+        }
+        Err(_) if iteration + 1 < max_iterations => {
+            info!(
+                mode = %mode_id,
+                iteration = iteration + 1,
+                strategy = ?converge.strategy,
+                "convergence condition not met — iterating"
+            );
+            ConvergeOutcome::Iterate
+        }
+        Err(e) => {
+            warn!(
+                mode = %mode_id,
+                "convergence condition not met after {} iterations: {e}",
+                iteration + 1
+            );
+            ConvergeOutcome::Exhausted
+        }
+    }
+}
+
+async fn run_all_layers(
+    steps: &[ActionStep],
+    ctx: &RunContext,
+    run_id: &PipelineRunId,
+    mode_id: &str,
+    converge: &Option<crate::mode::Converge>,
+    iteration: usize,
+    max_iterations: usize,
+) -> Result<bool> {
+    let layers = dag::topological_layers(steps)
+        .with_context(|| format!("computing execution layers for mode '{mode_id}'"))?;
+
+    let step_index: HashMap<&str, &ActionStep> = steps.iter().map(|s| (s.id.as_str(), s)).collect();
+
+    for layer in &layers {
+        let failure = run_layer(layer, &step_index, ctx, run_id, mode_id).await?;
+
+        if let Some(e) = failure {
+            if converge.is_none() || iteration + 1 >= max_iterations {
+                ctx.state
+                    .update_status(run_id, PipelineStatus::Failed)
+                    .await
+                    .context("updating pipeline status to Failed")?;
+                return Err(e);
+            }
+            warn!(mode = %mode_id, iteration = iteration + 1, "step failure — will retry");
+            return Ok(true);
+        }
+    }
+
+    Ok(false)
+}
+
 /// Execute all steps in a layer concurrently. Returns the first fatal error, if
 /// any.
 async fn run_layer(
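Stripped of the async command run and the tracing calls, the branch logic of `check_convergence` above is a pure function of three inputs. An illustrative sketch:

```rust
// Pure-function sketch of the convergence branch: a passing condition
// converges; a failing one iterates while budget remains, and is exhausted
// on the final iteration.
#[derive(Debug, PartialEq)]
enum Outcome {
    Converged,
    Iterate,
    Exhausted,
}

fn convergence_outcome(condition_ok: bool, iteration: usize, max_iterations: usize) -> Outcome {
    if condition_ok {
        Outcome::Converged
    } else if iteration + 1 < max_iterations {
        Outcome::Iterate
    } else {
        Outcome::Exhausted
    }
}
```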
@@ -308,7 +421,7 @@ async fn run_with_retry(cmd: &str, on_error: &crate::mode::OnError) -> Result<()
     Err(anyhow!("retry loop exited without result"))
 }

-async fn run_cmd(cmd: &str) -> Result<()> {
+pub(crate) async fn run_cmd(cmd: &str) -> Result<()> {
     let output = tokio::process::Command::new("sh")
         .arg("-c")
         .arg(cmd)
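The `run_cmd` made `pub(crate)` here hands the string to `sh -c` and treats a zero exit status as success. A blocking, std-only sketch of that contract (the real function is async via tokio and returns a `Result`; the name here is illustrative):

```rust
use std::process::Command;

// Sketch of the run_cmd contract: the command string is interpreted by
// `sh -c`, and success means the shell reported a zero exit status.
fn run_cmd_sync(cmd: &str) -> bool {
    Command::new("sh")
        .arg("-c")
        .arg(cmd)
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}
```

Because guard `cmd` and converge `condition` strings run through the shell, exit status is the whole protocol: any non-zero status reads as "guard failed" or "not yet converged".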
@@ -1,3 +1,14 @@
+//! Load, validate, and execute Reflection modes as NCL DAG contracts.
+//!
+//! A [`mode::ReflectionMode`] is a typed DAG of [`mode::ActionStep`]s with
+//! explicit actor assignments, dependency edges, and error strategies. The
+//! [`executor`] validates the DAG contract before executing any step, ensuring
+//! that declared `depends_on` ordering and actor policies are respected.
+//!
+//! Depends on `stratum-graph` and `stratum-state` for DAG traversal and FSM
+//! state tracking. The `nats` feature gates the `nats` module and event
+//! publishing.
+
 pub mod dag;
 pub mod error;
 pub mod executor;
@@ -6,5 +17,6 @@ pub mod mode;
 pub use error::ReflectionError;
 pub use executor::{ModeRun, RunContext};
 pub use mode::{
-    ActionStep, Actor, Dependency, DependencyKind, ErrorStrategy, OnError, ReflectionMode,
+    ActionStep, Actor, Converge, ConvergeStrategy, Dependency, DependencyKind, ErrorStrategy,
+    Guard, GuardSeverity, OnError, ReflectionMode,
 };
@@ -14,9 +14,55 @@ pub struct ReflectionMode {
     pub trigger: String,
     #[serde(default)]
     pub preconditions: Vec<String>,
+    #[serde(default)]
+    pub guards: Vec<Guard>,
     pub steps: Vec<ActionStep>,
     #[serde(default)]
     pub postconditions: Vec<String>,
+    pub converge: Option<Converge>,
+}
+
+/// Pre-flight check executed before any step.
+/// Block severity aborts mode execution; Warn prints a message and continues.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct Guard {
+    pub id: String,
+    pub cmd: String,
+    pub reason: String,
+    #[serde(default)]
+    pub severity: GuardSeverity,
+}
+
+#[derive(Debug, Clone, Default, PartialEq, Eq, Serialize, Deserialize)]
+pub enum GuardSeverity {
+    #[default]
+    Block,
+    Warn,
+}
+
+/// Post-execution convergence loop.
+/// After all steps complete, evaluates `condition` — if non-zero exit,
+/// re-executes steps up to `max_iterations` times.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct Converge {
+    pub condition: String,
+    #[serde(default = "Converge::default_max_iterations")]
+    pub max_iterations: u32,
+    #[serde(default)]
+    pub strategy: ConvergeStrategy,
+}
+
+impl Converge {
+    fn default_max_iterations() -> u32 {
+        3
+    }
+}
+
+#[derive(Debug, Clone, Default, PartialEq, Eq, Serialize, Deserialize)]
+pub enum ConvergeStrategy {
+    #[default]
+    RetryFailed,
+    RetryAll,
 }

 impl ReflectionMode {
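The `#[default]` variants declared above are what serde falls back to when a mode's NCL omits `severity` or `strategy`. A std-only mirror of that behavior using the same `Default` derive (types duplicated here for illustration, without the serde attributes):

```rust
// Illustrative duplicates of the mode.rs enums: Block and RetryFailed are
// the values produced when the field is absent, matching #[serde(default)].
#[derive(Debug, Default, PartialEq)]
enum GuardSeverity {
    #[default]
    Block,
    #[allow(dead_code)]
    Warn,
}

#[derive(Debug, Default, PartialEq)]
enum ConvergeStrategy {
    #[default]
    RetryFailed,
    #[allow(dead_code)]
    RetryAll,
}

// Mirrors Converge::default_max_iterations above.
fn default_max_iterations() -> u32 {
    3
}
```

So a converge block that declares only `condition` gets three retry iterations with the RetryFailed strategy, and a guard without `severity` blocks on failure.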
--- a/justfile
+++ b/justfile
@@ -1,3 +1,6 @@
+import 'justfiles/build.just'
+import 'justfiles/test.just'
+import 'justfiles/dev.just'
 import 'justfiles/ci.just'
 import 'justfiles/assets.just'

--- /dev/null
+++ b/justfiles/build.just
@@ -0,0 +1,25 @@
+# Build Module — Core build operations for ontoref
+# =================================================
+
+# Build all crates (all targets, all features)
+build-all:
+    @echo "Building all targets..."
+    cargo build --all-targets
+
+# Build ontoref-daemon in release mode and install
+build-daemon: build-css
+    cargo build --release -p ontoref-daemon
+    nu install/install.nu
+
+# Build UnoCSS bundle for the daemon UI
+build-css:
+    cd assets/css && pnpm install && pnpm run build
+
+# Build without stratumiops dependencies (standalone)
+build-standalone:
+    cargo build -p ontoref-ontology
+    cargo build -p ontoref-daemon --no-default-features
+
+# Check all crates compile (fast feedback, no codegen)
+check:
+    cargo check --all-targets --all-features
@@ -32,7 +32,7 @@ help:
     @echo " just clean - Clean build artifacts"

 # Run all CI checks
-ci-full: ci-lint-rust ci-fmt-toml ci-lint-toml ci-lint-nickel ci-lint-markdown ci-lint-prose ci-test ci-audit ci-check-config-sync
+ci-full: ci-lint-rust ci-fmt-toml ci-lint-toml ci-lint-nickel ci-lint-markdown ci-lint-prose ci-test ci-audit ci-check-config-sync ci-docs
     @echo "✅ All CI checks passed!"

 # ==============================================================================
@@ -50,17 +50,9 @@ ci-fmt-toml:
     taplo format --check


-# Format all code
+# Format all code (alias — prefer dev-fmt)
 fmt:
-    @echo "🎨 Formatting code..."
-    cargo fmt --all
-    just fmt-toml
-
-# Format TOML files
-fmt-toml:
-    @echo "🎨 Formatting TOML files..."
-    @command -v taplo >/dev/null || (echo "❌ taplo not installed: cargo install taplo-cli"; exit 1)
-    taplo format
+    just dev-fmt
 # ==============================================================================
 # Linting
 # ==============================================================================
@@ -159,71 +151,40 @@ ci-sbom:
 # Documentation
 # ==============================================================================

-# Generate documentation
+# Generate documentation (alias — prefer dev-docs)
 docs:
-    @echo "📚 Generating documentation..."
-    cargo doc --no-deps --open
+    just dev-docs

-# Check documentation
+# Check documentation — broken intra-doc links are compile errors, not warnings.
+# RUSTDOCFLAGS are inherited by cargo doc; -D flags turn lint groups into hard errors.
 ci-docs:
-    @echo "📚 Checking documentation..."
-    cargo doc --no-deps --document-private-items 2>&1 | grep -i "warning:" && exit 1 || true
-    @echo "✓ Documentation check passed"
+    RUSTDOCFLAGS="-D rustdoc::broken-intra-doc-links -D rustdoc::private-intra-doc-links" \
+    cargo doc --all-features --no-deps --workspace 2>&1

 # ==============================================================================
 # Pre-commit Setup
 # ==============================================================================

-# Install pre-commit hooks + ontoref git hooks (post-merge, post-checkout)
+# Install pre-commit hooks (alias — prefer dev-setup-hooks)
 setup-hooks:
-    #!/usr/bin/env bash
-    set -euo pipefail
-    echo "Installing pre-commit hooks..."
-    if command -v pre-commit &> /dev/null; then
-        pre-commit install && pre-commit install --hook-type pre-push
-        echo "✓ Pre-commit hooks installed"
-    else
-        echo "❌ pre-commit not found. Install with: pip install pre-commit"
-        exit 1
-    fi
-    # ontoref operational hooks — auto-detect mode on merge/checkout
-    git_hooks_dir="$(git rev-parse --git-dir)/hooks"
-    hook_body='#!/usr/bin/env bash'$'\n''# ontoref git hook — mode auto-detection and ontology sync'$'\n''nu "$(git rev-parse --show-toplevel)/reflection/hooks/git-event.nu" "$1" 2>/dev/null || true'
-    for hook in post-merge post-checkout; do
-        printf '%s\n' "${hook_body}" > "${git_hooks_dir}/${hook}"
-        chmod +x "${git_hooks_dir}/${hook}"
-        echo "✓ ${hook} hook installed"
-    done
+    just dev-setup-hooks

-# Run pre-commit on all files
+# Run pre-commit on all files (alias — prefer dev-hooks-run-all)
 hooks-run-all:
-    @echo "🪝 Running pre-commit on all files..."
-    pre-commit run --all-files
+    just dev-hooks-run-all

 # ==============================================================================
 # Install
 # ==============================================================================

-# Build UnoCSS bundle for the daemon UI
-build-css:
-    cd assets/css && pnpm install && pnpm run build
-
-# Build UnoCSS in watch mode (development)
-watch-css:
-    cd assets/css && pnpm install && pnpm run watch
-
-# Build ontoref-daemon and install binary, assets, CLI wrapper, and bootstrapper
-install-daemon: build-css
-    cargo build --release -p ontoref-daemon
-    nu install/install.nu
+# Build and install daemon (alias — prefer build-daemon)
+install-daemon:
+    just build-daemon

 # ==============================================================================
 # Utility Commands
 # ==============================================================================

-# Clean build artifacts
+# Clean build artifacts (alias — prefer dev-clean)
 clean:
-    @echo "🧹 Cleaning..."
-    cargo clean
-    rm -rf target/
-    rm -f sbom.json lcov.info
+    just dev-clean
--- /dev/null
+++ b/justfiles/dev.just
@@ -0,0 +1,53 @@
+# Dev Module — Development workflows for ontoref
+# ================================================
+
+# Format all code (Rust + TOML)
+dev-fmt:
+    @echo "Formatting code..."
+    cargo fmt --all
+    @command -v taplo >/dev/null && taplo format || true
+
+# Build UnoCSS in watch mode (development)
+dev-watch-css:
+    cd assets/css && pnpm install && pnpm run watch
+
+# Install pre-commit hooks + ontoref git hooks
+dev-setup-hooks:
+    #!/usr/bin/env bash
+    set -euo pipefail
+    echo "Installing pre-commit hooks..."
+    if command -v pre-commit &> /dev/null; then
+        pre-commit install && pre-commit install --hook-type pre-push
+        echo "Pre-commit hooks installed"
+    else
+        echo "pre-commit not found. Install with: pip install pre-commit"
+        exit 1
+    fi
+    git_hooks_dir="$(git rev-parse --git-dir)/hooks"
+    hook_body='#!/usr/bin/env bash'$'\n''nu "$(git rev-parse --show-toplevel)/reflection/hooks/git-event.nu" "$1" 2>/dev/null || true'
+    for hook in post-merge post-checkout; do
+        printf '%s\n' "${hook_body}" > "${git_hooks_dir}/${hook}"
+        chmod +x "${git_hooks_dir}/${hook}"
+        echo "${hook} hook installed"
+    done
+
+# Run pre-commit on all files
+dev-hooks-run-all:
+    @echo "Running pre-commit on all files..."
+    pre-commit run --all-files
+
+# Clean build artifacts
+dev-clean:
+    @echo "Cleaning..."
+    cargo clean
+    rm -rf target/
+    rm -f sbom.json lcov.info
+
+# Run ontoref sync audit (quick health check)
+dev-audit:
+    ONTOREF_ACTOR=developer ./ontoref sync audit
+
+# Generate documentation
+dev-docs:
+    @echo "Generating documentation..."
+    cargo doc --no-deps --open
--- /dev/null
+++ b/justfiles/test.just
@@ -0,0 +1,24 @@
+# Test Module — Test execution for ontoref
+# ==========================================
+
+# Run all tests (workspace, all features)
+test-all:
+    @echo "Running tests..."
+    cargo test --workspace --all-features
+
+# Run tests for a specific crate
+test crate:
+    cargo test -p {{crate}}
+
+# Run tests with coverage (requires cargo-llvm-cov)
+test-coverage:
+    @echo "Running tests with coverage..."
+    cargo llvm-cov --all-features --lcov --output-path lcov.info
+
+# Run only ontology crate tests (fast, no external deps)
+test-ontology:
+    cargo test -p ontoref-ontology
+
+# Run only daemon tests (requires default features)
+test-daemon:
+    cargo test -p ontoref-daemon
@@ -406,10 +406,10 @@ def "main sync scan" [--level: string = "auto"] {
     opmode preflight "local"
     sync scan --level $level
 }
-def "main sync diff" [--quick] {
+def "main sync diff" [--quick, --docs] {
     log-action "sync diff" "read"
     opmode preflight "local"
-    if $quick { sync diff --quick } else { sync diff }
+    if $quick { sync diff --quick } else if $docs { sync diff --docs } else { sync diff }
 }
 def "main sync propose" [] {
     log-action "sync propose" "read"
@@ -582,6 +582,11 @@ def "main describe workspace" [--fmt (-f): string = "", --actor: string = ""] {
     describe workspace --fmt $f --actor $actor
 }

+def "main describe guides" [--fmt (-f): string = "", --actor: string = ""] {
+    log-action "describe guides" "read"
+    describe guides --fmt $fmt --actor $actor
+}
+
 # ── Diagram ───────────────────────────────────────────────────────────────────

 def "main diagram" [] {
@@ -742,6 +747,8 @@ def "main d state" [id?: string, --fmt (-f): string = "", --actor: string = ""]
 def "main d st" [id?: string, --fmt (-f): string = "", --actor: string = ""] { main describe state $id --fmt $fmt --actor $actor }
 def "main d workspace" [--fmt (-f): string = "", --actor: string = ""] { main describe workspace --fmt $fmt --actor $actor }
 def "main d ws" [--fmt (-f): string = "", --actor: string = ""] { main describe workspace --fmt $fmt --actor $actor }
+def "main d guides" [--fmt (-f): string = "", --actor: string = ""] { main describe guides --fmt $fmt --actor $actor }
+def "main d g" [--fmt (-f): string = "", --actor: string = ""] { main describe guides --fmt $fmt --actor $actor }

 def "main df" [--fmt (-f): string = "", --file: string = ""] { main describe diff --fmt $fmt --file $file }
 def "main da" [--actor: string = "", --tag: string = "", --auth: string = "", --fmt (-f): string = ""] { main describe api --actor $actor --tag $tag --auth $auth --fmt $fmt }
@@ -23,4 +23,6 @@ let c = import "constraints.ncl" in
   ActionStep = s.ActionStep,
   Dependency = s.Dependency,
   OnError = s.OnError,
+  Guard = s.Guard,
+  Converge = s.Converge,
 }
@@ -15,6 +15,15 @@ def main [event: string = ""] {
     # Trigger mode detection + transition. Sync runs inside on-enter-daemon if mode changed.
     do { ^nu $wrapper mode-detect } | complete | null

+    # Regenerate static project context for Claude Code sessions.
+    let context_file = $"($ontoref_root)/.claude/project-context.md"
+    if ($context_file | path parent | path exists) {
+        let r = (do { ^nu $wrapper describe project --fmt text } | complete)
+        if $r.exit_code == 0 {
+            $r.stdout | save --force $context_file
+        }
+    }
+
     # If daemon mode is active, push ontology for current git event.
     let lock_file = $"($ontoref_root)/.ontoref/mode.lock"
     if not ($lock_file | path exists) { return }
@@ -17,22 +17,19 @@ or append at end). Create .claude/CLAUDE.md if it does not exist.
 When arriving at this project as an agent, execute these discovery steps in order
 before any code changes:

-# 1. Project capabilities — what this project has and can do
+# 1. Project capabilities — complete operational context (JSON for parsing)
+# Includes: project_flags, just_recipes, reflection modes + full DAG steps,
+# ADRs with constraint counts, backlog items (kind/priority), API routes
+# filtered for agent actor, Cargo feature flags, Claude capabilities.
 ONTOREF_ACTOR=agent ontoref describe capabilities

-# 2. Available reflection modes — operational DAGs you can execute
-ONTOREF_ACTOR=agent ontoref describe mode
-
-# 3. Check migration status — apply any pending protocol migrations
-ONTOREF_ACTOR=agent ontoref migrate list
-
-# 4. Start a run before any structured work session
+# 2. Start a run before any structured work session
 ONTOREF_ACTOR=agent ontoref run start <mode-id> --task \"description\"

-# 5. Report each step as you complete it
+# 3. Report each step as you complete it
 ONTOREF_ACTOR=agent ontoref step report <mode-id> <step-id> --status pass|fail|skip

-# 6. Verify mode completion
+# 4. Verify mode completion (checks all required steps are reported)
 ONTOREF_ACTOR=agent ontoref mode complete <mode-id>

 Graph output (Mermaid DSL, parseable):
@@ -43,5 +40,14 @@ Graph output (Mermaid DSL, parseable):

 Justfile validation:
 ONTOREF_ACTOR=agent ontoref validate justfile
+
+The describe capabilities JSON output contains: project_flags (has_rust, has_ui,
+has_mdbook, has_nats, has_precommit, has_backlog, has_git_remote, open_prs, crates),
+just_recipes (categorized by canonical module), backlog (pending count + items array
+with id/title/kind/priority/status), adrs (id/title/status/constraint_count for all
+ADRs), api_routes (routes from artifact catalog filtered to agent-accessible endpoints
+with method/path/auth/description), mode_dags (all reflection modes with full steps
+array: id/actor/action/note/depends_on), feature_flags (Cargo features per crate with
+enabled deps), and claude_capabilities/manifest_capabilities.
 ",
 }
--- /dev/null
+++ b/reflection/migrations/0008-claude-ontology-context.ncl
@@ -0,0 +1,40 @@
+{
+  id = "0008",
+  slug = "claude-ontology-context",
+  description = "Add Ontological Context section to .claude/CLAUDE.md with describe command reference and constraint index",
+  check = {
+    tag = "Grep",
+    pattern = "Ontological Context",
+    paths = [".claude/CLAUDE.md"],
+    must_be_empty = false,
+  },
+  instructions = "
+Add the following section to .claude/CLAUDE.md, immediately before the
+Agent Entry-Point Protocol section (or before the last major section if that
+section does not exist yet).
+
+## Ontological Context
+
+### Full context — single call (agents and complex sessions)
+
+ONTOREF_ACTOR=agent ontoref describe guides --actor agent
+
+Returns: identity, axioms, practices, hard constraints, gate state, FSM dimensions,
+actor policy, API surface filtered for agent actor, available modes.
+
+### Targeted reads
+
+ontoref describe project — identity, axioms, tensions, practices, FSM dimensions
+ontoref describe constraints — invariants + hard constraints from all accepted ADRs
+ontoref describe state — FSM dimension current/desired/blocker per dimension
+ontoref describe capabilities — tools, modes + full DAG steps, backlog items,
+API routes, Cargo feature flags
+
+### When to call what
+
+- Before touching .ontology/: describe project + describe state
+- Before touching Cargo.toml: describe constraints (check crate dep constraints)
+- Before starting any task: describe capabilities (backlog bugs may be relevant)
+- Full agent onboarding: describe guides --actor agent
+",
+}
73
reflection/migrations/0009-claude-session-context.ncl
Normal file
73
reflection/migrations/0009-claude-session-context.ncl
Normal file
@ -0,0 +1,73 @@
{
  id = "0009",
  slug = "claude-session-context",
  description = "Add SessionStart hook and static project-context.md for auto-loading project identity into Claude Code sessions",
  check = {
    tag = "NuCmd",
    cmd = "let root = $env.ONTOREF_PROJECT_ROOT; let hook = $\"($root)/.claude/hooks/session-context.sh\"; let ctx = $\"($root)/.claude/project-context.md\"; if ($hook | path exists) and ($ctx | path exists) { exit 0 } else { exit 1 }",
    expect_exit = 0,
  },
  instructions = "
1. Create .claude/hooks/session-context.sh with the following content and make it executable:

#!/usr/bin/env bash
# Hook: SessionStart
# Injects current project identity, state dimensions, and hard constraints
# as initial context for every Claude Code session in this directory.

set -euo pipefail

REPO_ROOT=\"$(git rev-parse --show-toplevel 2>/dev/null || pwd)\"

if ! command -v nu &>/dev/null; then
  exit 0
fi

export ONTOREF_ACTOR=developer

echo \"## Project Context (auto-loaded at session start)\"
echo \"\"

ontoref describe project --fmt text 2>/dev/null || true

echo \"\"

ontoref describe state 2>/dev/null || true

chmod +x .claude/hooks/session-context.sh

2. Add the following to .claude/settings.local.json (merge into existing JSON):

\"hooks\": {
  \"SessionStart\": [
    {
      \"hooks\": [
        {
          \"type\": \"command\",
          \"command\": \".claude/hooks/session-context.sh\"
        }
      ]
    }
  ]
}

3. Generate the static fallback file:

ONTOREF_ACTOR=developer ontoref describe project --fmt text > .claude/project-context.md

4. Add the import at the top of .claude/CLAUDE.md (immediately after the # heading):

@.claude/project-context.md

5. In reflection/hooks/git-event.nu, add regeneration of project-context.md on post-merge/post-checkout.
After the mode-detect call, insert:

let context_file = $\"($ontoref_root)/.claude/project-context.md\"
if ($context_file | path parent | path exists) {
  let r = (do { ^nu $wrapper describe project --fmt text } | complete)
  if $r.exit_code == 0 {
    $r.stdout | save --force $context_file
  }
}
",
}
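Step 2's "merge into existing JSON" can be sketched as follows. This is an illustrative Python helper, not part of the migration; the function name is invented, and only the hook path from the instructions above is taken as given.

```python
# Hypothetical helper: add the SessionStart entry to a settings dict without
# clobbering hooks that may already be configured there.
HOOK_ENTRY = {"hooks": [{"type": "command",
                         "command": ".claude/hooks/session-context.sh"}]}

def merge_session_hook(settings: dict) -> dict:
    """Idempotently merge the SessionStart hook into a settings.local.json dict."""
    hooks = settings.setdefault("hooks", {})
    session = hooks.setdefault("SessionStart", [])
    already = any(
        h.get("command") == ".claude/hooks/session-context.sh"
        for entry in session for h in entry.get("hooks", [])
    )
    if not already:
        session.append(HOOK_ENTRY)
    return settings
```

Running it twice leaves a single entry, which is why the instructions say "merge" rather than "append".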
reflection/migrations/0010-manifest-capability-completeness.ncl (new file, 39 lines)
@@ -0,0 +1,39 @@
{
  id = "0010",
  slug = "manifest-capability-completeness",
  description = "Ensure manifest.ncl declares capabilities[] covering all project functionality — not just ontology structure. Without capabilities, agents see a partial view of the project via describe capabilities.",
  check = {
    tag = "NuCmd",
    cmd = "let root = $env.ONTOREF_PROJECT_ROOT; let manifest = $\"($root)/.ontology/manifest.ncl\"; if not ($manifest | path exists) { exit 1 }; let caps = (nickel export --format json $manifest | from json | get capabilities? | default []); if ($caps | length) >= 3 { exit 0 } else { exit 1 }",
    expect_exit = 0,
  },
  instructions = "
Your project's .ontology/manifest.ncl must declare capabilities[] that describe what the project
DOES — not just what it IS. Each capability should have:

id, name, summary (>30 chars), rationale, how, artifacts[], nodes[]

Without capabilities, `describe capabilities` shows an incomplete view and agents will miss
functionality that exists in the codebase.

1. Run `ontoref sync audit` and check for 'Manifest capability gaps':
   - Practice nodes not referenced by any capability's nodes[]
   - Reflection modes not referenced by any capability's artifacts[]

2. For each gap, either:
   a. Add the missing node/artifact reference to an existing capability, OR
   b. Create a new capability entry if the functionality is distinct

3. Verify with: nickel export .ontology/manifest.ncl | from json | get capabilities | length
   Should return >= 3 (most real projects have 5-15 capabilities).

4. Run `ontoref sync audit` again — 'Manifest capability gaps' section should not appear.

Common capabilities to declare:
- Each major crate or module with distinct functionality
- Each daemon UI feature (compose, actions, notifications)
- Each CLI subsystem (backlog, forms, sync, coder)
- Each integration point (MCP, NATS, API catalog)
- Each operational workflow (migration, validation, onboarding)
",
}
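The shape rule the instructions describe (required fields plus a minimum count and summary length) can be sketched as a small checker. This is a hypothetical Python illustration; the function name and message strings are invented, only the field names and thresholds come from the migration text.

```python
# Fields each capability entry should carry, per the migration instructions.
REQUIRED = ["id", "name", "summary", "rationale", "how", "artifacts", "nodes"]

def capability_gaps(caps: list[dict]) -> list[str]:
    """Return human-readable problems; an empty list means the shape check passes."""
    problems = []
    if len(caps) < 3:
        problems.append(f"only {len(caps)} capabilities declared (need >= 3)")
    for cap in caps:
        cid = cap.get("id", "<missing id>")
        for field in REQUIRED:
            if field not in cap:
                problems.append(f"{cid}: missing field '{field}'")
        if len(cap.get("summary", "")) <= 30:
            problems.append(f"{cid}: summary too short (<= 30 chars)")
    return problems
```

In the real workflow these checks run via `nickel export` and `sync manifest-check`; the sketch just makes the acceptance criteria concrete.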
reflection/migrations/0011-manifest-coverage-hooks.ncl (new file, 44 lines)
@@ -0,0 +1,44 @@
{
  id = "0011",
  slug = "manifest-coverage-hooks",
  description = "Add pre-commit hook for manifest capability coverage and SessionStart hook showing manifest health. Prevents incomplete manifests from being committed and ensures agents see gaps at session start.",
  check = {
    tag = "NuCmd",
    cmd = "let root = $env.ONTOREF_PROJECT_ROOT; let pcf = $\"($root)/.pre-commit-config.yaml\"; let hook = $\"($root)/.claude/hooks/session-context.sh\"; let has_manifest_hook = if ($pcf | path exists) { (open --raw $pcf | str contains 'manifest-coverage') } else { false }; let has_session_audit = if ($hook | path exists) { (open --raw $hook | str contains 'manifest-check') } else { false }; if $has_manifest_hook and $has_session_audit { exit 0 } else { exit 1 }",
    expect_exit = 0,
  },
  instructions = "
Two hooks ensure manifest completeness is enforced and visible:

## 1. Pre-commit hook (blocks commits when manifest has Hard failures)

Add to .pre-commit-config.yaml in the local hooks section:

- id: manifest-coverage
  name: Manifest capability completeness
  entry: bash -c 'ONTOREF_ROOT=\"$(pwd)\" ONTOREF_PROJECT_ROOT=\"$(pwd)\" nu --no-config-file -c \"use ./reflection/modules/sync.nu *; sync manifest-check\"'
  language: system
  files: (\\.ontology/|reflection/modes/|reflection/forms/).*\\.ncl$
  pass_filenames: false
  stages: [pre-commit]

This fires when .ontology/, reflection/modes/, or reflection/forms/ NCL files change.
It blocks on Hard failures (no capabilities declared at all) and warns on Soft issues.

## 2. SessionStart hook (shows manifest health at session start)

Add to .claude/hooks/session-context.sh before the final line:

# quick manifest/ontology health check
HEALTH_OUT=$(cd \"$REPO_ROOT\" && ONTOREF_ROOT=\"$REPO_ROOT\" ONTOREF_PROJECT_ROOT=\"$REPO_ROOT\" \\
  nu --no-config-file -c 'use ./reflection/modules/sync.nu *; sync manifest-check' 2>/dev/null || true)

if [[ -n \"$HEALTH_OUT\" ]]; then
  echo \"Manifest Coverage\"
  echo \"$HEALTH_OUT\"
fi

This ensures every Claude Code session starts with visibility into manifest completeness.
Agents see immediately if capabilities are missing and avoid reinventing existing functionality.
",
}

reflection/migrations/0012-rust-doc-authoring-pattern.ncl (new file, 80 lines)
@@ -0,0 +1,80 @@
{
  id = "0012",
  slug = "rust-doc-authoring-pattern",
  description = "Add Documentation Authoring (Rust) section to .claude/CLAUDE.md and optional pre-commit hooks for rustdoc broken links and crate //! drift detection.",
  check = {
    tag = "Grep",
    pattern = "Documentation Authoring",
    paths = [".claude/CLAUDE.md"],
    must_be_empty = false,
  },
  instructions = "
## 1. Add to .claude/CLAUDE.md

Insert the following section under '## Key Conventions' (or equivalent conventions block).
Create .claude/CLAUDE.md if it does not exist.

### Documentation Authoring (Rust)

Three canonical doc layers — each answers a different question:

Layer              | Where            | Answers
-------------------|------------------|------------------------------------------
/// on items       | handlers, types  | \"what does this item do?\"
//! in lib.rs      | crate root       | \"what does this crate provide as a whole?\"
node description   | .ontology/       | \"what is this concept in the architecture?\"

Rules that apply to every .rs file:

1. /// first line = one standalone sentence. No 'This function/handler/struct...' preamble.
   This line becomes the API catalog entry, describe features summary, and MCP description.
   Further lines after a blank /// are detail and do not appear in catalogs.

2. //! first sentence must be conceptually aligned with the practice node whose
   artifact_paths[0] points to this crate. Run `ontoref sync diff --docs` to verify.

3. Omit `description = \"...\"` in #[onto_api] when a /// doc comment exists above the handler.
   The proc-macro reads /// automatically. Only use explicit description = \"...\" for
   generated or macro-expanded handlers that cannot carry ///.

4. //! is mandatory on every lib.rs. Missing //! on a crate with a practice node
   fails the docs-drift pre-commit check.

Agent workflow — discovering doc state:

ontoref describe workspace      # per-crate //! present?, coverage %, drift status
ontoref describe features <id>  # node description + IMPLEMENTATION VIEW (//! excerpt)
ontoref sync diff --docs        # explicit drift report before committing

Registering a new crate as a practice implementation:

1. Add/update a Practice node in .ontology/core.ncl with artifact_paths = [\"crates/my-crate/\"]
2. Write //! in crates/my-crate/src/lib.rs — first sentence aligned with the node description
3. Run `ontoref sync diff --docs` to confirm Jaccard OK
4. Run `ontoref generate-mdbook` — a docs/src/crates/my-crate.md page is generated

## 2. Add pre-commit hooks (Rust projects only)

Skip this step if the project has no Rust crates.

Add to .pre-commit-config.yaml in the local hooks section (alongside rust-clippy / rust-test):

- id: docs-links
  name: Rustdoc broken intra-doc links
  entry: bash -c 'RUSTDOCFLAGS=\"-D rustdoc::broken-intra-doc-links -D rustdoc::private-intra-doc-links\" cargo doc --no-deps --workspace -q'
  language: system
  types: [rust]
  pass_filenames: false
  stages: [pre-commit]

- id: docs-drift
  name: Crate //! doc drift check
  entry: bash -c 'nu -c \"use ./reflection/modules/sync.nu; sync diff --docs --fail-on-drift\"'
  language: system
  types: [rust]
  pass_filenames: false
  stages: [pre-commit]

docs-drift only fires meaningfully when the project has Practice nodes with artifact_paths
pointing to Rust crate directories. If no such nodes exist, the hook exits 0 silently.
",
}
reflection/modes/coder-workflow.ncl
@@ -11,6 +11,15 @@ d.make_mode String {
     "Nushell >= 0.110.0 is available",
   ],
+
+  guards = [
+    {
+      id = "coder-dir-exists",
+      cmd = "test -d .coder",
+      reason = "No .coder/ directory — run coder init first.",
+      severity = 'Block,
+    },
+  ],

   steps = [
     {
       id = "init-author",
@@ -50,12 +59,21 @@ d.make_mode String {
       depends_on = [{ step = "record-json" }],
       on_error = { strategy = 'Continue },
     },
+    {
+      id = "novelty-check",
+      action = "Compare pending entries against published entries and QA store. Flags entries with >60% Jaccard overlap as REDUNDANT (potential slop). SIMILAR entries (42-60%) get warnings. NOVEL entries pass. Prevents promoting content that duplicates existing knowledge.",
+      cmd = "./ontoref coder novelty-check <author>",
+      actor = 'Both,
+      depends_on = [{ step = "triage" }],
+      on_error = { strategy = 'Continue },
+      note = "Anti-slop guard: entries flagged REDUNDANT should be reviewed before publish. High overlap means the knowledge already exists in the project.",
+    },
     {
       id = "publish",
       action = "Promote entries from an author workspace to .coder/general/<category>/. Files are prefixed with author name for attribution.",
       cmd = "./ontoref coder publish <author> <category>",
       actor = 'Human,
-      depends_on = [{ step = "triage" }],
+      depends_on = [{ step = "novelty-check" }],
       on_error = { strategy = 'Continue },
     },
     {
reflection/modes/sync-ontology.ncl
@@ -10,6 +10,33 @@ d.make_mode String {
     "Nushell >= 0.110.0 is available",
   ],
+
+  guards = [
+    {
+      id = "ontology-exists",
+      cmd = "test -f .ontology/core.ncl",
+      reason = "No .ontology/core.ncl found — this project has no ontology to sync. Run adopt_ontoref first.",
+      severity = 'Block,
+    },
+    {
+      id = "nickel-available",
+      cmd = "command -v nickel >/dev/null 2>&1",
+      reason = "nickel binary not on PATH — cannot export NCL schemas. Install via: cargo install nickel-lang-cli",
+      severity = 'Block,
+    },
+    {
+      id = "manifest-capabilities",
+      cmd = "ONTOREF_ROOT=\"$(pwd)\" ONTOREF_PROJECT_ROOT=\"$(pwd)\" nu --no-config-file -c 'use ./reflection/modules/sync.nu *; sync manifest-check'",
+      reason = "Manifest capabilities incomplete — sync may report drift caused by undeclared capabilities rather than real code divergence. Fix manifest first.",
+      severity = 'Warn,
+    },
+  ],
+
+  converge = {
+    condition = "ONTOREF_ROOT=\"$(pwd)\" ONTOREF_PROJECT_ROOT=\"$(pwd)\" nu --no-config-file -c 'use ./reflection/modules/sync.nu *; let d = (sync diff --quick); let issues = ($d | where { |r| $r.status != \"OK\" }); if ($issues | is-empty) { exit 0 } else { exit 1 }'",
+    max_iterations = 2,
+    strategy = 'RetryFailed,
+  },

   steps = [
     {
       id = "scan",
@@ -27,11 +54,20 @@ d.make_mode String {
       on_error = { strategy = 'Stop },
     },
     {
-      id = "propose",
-      action = "Generate Nickel code for new nodes (MISSING), mark stale nodes for removal, generate updated nodes for DRIFT, mark broken edges for deletion.",
-      cmd = "./ontoref sync propose",
+      id = "doc-drift",
+      action = "Compare crate-level //! doc comments against ontology node descriptions. Reports DRIFT for nodes whose description is empty or diverges significantly (Jaccard < 25%) from what the crate itself documents.",
+      cmd = "ontoref sync diff --docs",
       actor = 'Both,
       depends_on = [{ step = "diff" }],
+      on_error = { strategy = 'Continue },
+      note = "Skipped silently if no crates have //! doc comments. Requires src/lib.rs or src/main.rs with //! lines.",
+    },
+    {
+      id = "propose",
+      action = "Generate Nickel code for new nodes (MISSING), mark stale nodes for removal, generate updated nodes for DRIFT, mark broken edges for deletion.",
+      cmd = "ontoref sync propose",
+      actor = 'Both,
+      depends_on = [{ step = "doc-drift" }],
       on_error = { strategy = 'Stop },
     },
     {
reflection/modes/validate-project.ncl
@@ -1,14 +1,15 @@
 let d = import "../defaults.ncl" in

 # Comprehensive project validation mode.
-# Runs 5 independent validation categories in parallel, then aggregates results.
+# Runs 6 independent validation categories in parallel, then aggregates results.
 #
 # DAG structure:
 #   adr-checks     ─┐
-#   content-verify ─┤
-#   conn-health    ─┼─► aggregate
-#   practice-cov   ─┤
-#   gate-align     ─┘
+#   content-verify ─┤
+#   conn-health    ─┤
+#   practice-cov   ─┼─► aggregate
+#   gate-align     ─┤
+#   manifest-cov   ─┘
 #
 # Exit: non-zero if any Hard constraint fails (via validate check-all).
 # All parallel steps use on_error = 'Continue so the aggregate always runs
@@ -16,7 +17,7 @@ let d = import "../defaults.ncl" in

 d.make_mode String {
   id = "validate-project",
-  trigger = "Run all 5 validation categories (ADR constraints, content assets, connection health, practice coverage, gate consistency) and produce a unified compliance report.",
+  trigger = "Run all 6 validation categories (ADR constraints, content assets, connection health, practice coverage, gate consistency, manifest capability completeness) and produce a unified compliance report.",

   preconditions = [
     "ONTOREF_PROJECT_ROOT is set and points to a project with .ontology/ and adrs/ directories",
@@ -71,10 +72,19 @@ d.make_mode String {
       on_error = { strategy = 'Continue },
     },

+    # ── Category 6: manifest capability completeness ───────────────────────
+    {
+      id = "manifest-cov",
+      action = "Cross-reference manifest capabilities against Practice nodes, reflection modes, and daemon UI pages. Detects undeclared functionality that agents will never discover via describe capabilities.",
+      cmd = "nu --no-config-file -c 'use reflection/modules/sync.nu; let root = ($env.ONTOREF_PROJECT_ROOT? | default $env.ONTOREF_ROOT? | default \".\"); let results = (sync audit --quick | get manifest_coverage? | default []); if ($results | is-empty) { print \"manifest-cov: no audit data\" } else { let warns = ($results | where status != \"PASS\"); if ($warns | is-empty) { print \"manifest-cov: ok\" } else { for w in $warns { print $\"  ($w.status) ($w.check): ($w.detail)\" }; if ($warns | any { |w| ($w.severity? | default \"Soft\") == \"Hard\" }) { exit 1 } } }'",
+      actor = 'Both,
+      on_error = { strategy = 'Continue },
+    },
+
     # ── Aggregate: collect results from all categories ──────────────────────
     {
       id = "aggregate",
-      action = "Collect results from all 5 validation categories and produce a unified compliance report. Exits non-zero if any Hard ADR constraint failed.",
+      action = "Collect results from all 6 validation categories and produce a unified compliance report. Exits non-zero if any Hard constraint failed.",
       cmd = "nu --no-config-file -c 'use reflection/modules/validate.nu *; let summary = (validate summary); print ($summary | to json); if $summary.hard_passing < $summary.hard_total { exit 1 }'",
       actor = 'Both,
       depends_on = [
@@ -83,6 +93,7 @@ d.make_mode String {
         { step = "conn-health" },
         { step = "practice-cov" },
         { step = "gate-align" },
+        { step = "manifest-cov" },
       ],
       on_error = { strategy = 'Stop },
     },
@@ -93,6 +104,7 @@ d.make_mode String {
     "All declared content_assets have existing source_path files",
     "Gate/dimension state alignment is consistent",
     "Practice coverage report is available in output",
+    "Manifest capability coverage has no Hard failures (all functionality declared)",
     "Unified compliance JSON is printed to stdout",
   ],
 }
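The aggregate step's exit logic reduces to one comparison. A minimal Python sketch, assuming only that the summary record carries `hard_passing` and `hard_total` as the cmd above prints:

```python
def aggregate_exit_code(summary: dict) -> int:
    """0 when every Hard constraint passes, non-zero otherwise.

    Soft findings are reported by the category steps but never block,
    matching the on_error = 'Continue strategy on the parallel steps.
    """
    return 0 if summary["hard_passing"] >= summary["hard_total"] else 1
```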
@@ -655,3 +655,123 @@ export def "coder graduate" [

   $results
 }
+
+# Check pending entries for novelty against published entries and QA store.
+# Flags entries whose content overlaps significantly (Jaccard > threshold)
+# with existing knowledge — potential slop that adds no new information.
+export def "coder novelty-check" [
+  author: string
+  --threshold: float = 0.60     # Jaccard overlap above this = flagged as redundant
+  --category (-c): string = ""  # Check specific category, default all
+]: nothing -> table {
+  let root = (coder-root)
+  let author_dir = $"($root)/($author)"
+  if not ($author_dir | path exists) {
+    error make { msg: $"author '($author)' not initialized" }
+  }
+
+  # Collect pending entries from author workspace
+  let pending_pattern = if ($category | is-not-empty) {
+    $"($author_dir)/($category)/entries.jsonl"
+  } else {
+    $"($author_dir)/**/entries.jsonl"
+  }
+  let pending_files = (glob $pending_pattern)
+  mut pending_entries = []
+  for file in $pending_files {
+    let lines = (open $file | lines | where { $in | str trim | is-not-empty })
+    for line in $lines {
+      let trimmed = ($line | str trim)
+      if ($trimmed | str starts-with "{") and ($trimmed | str ends-with "}") {
+        let parsed = ($trimmed | from json)
+        if ($parsed | describe | str starts-with "record") {
+          $pending_entries = ($pending_entries | append $parsed)
+        }
+      }
+    }
+  }
+
+  if ($pending_entries | is-empty) { return [] }
+
+  # Collect existing published entries
+  let published_pattern = $"($root)/general/**/entries.jsonl"
+  let pub_files = (glob $published_pattern)
+  mut published = []
+  for file in $pub_files {
+    let lines = (open $file | lines | where { $in | str trim | is-not-empty })
+    for line in $lines {
+      let trimmed = ($line | str trim)
+      if ($trimmed | str starts-with "{") and ($trimmed | str ends-with "}") {
+        let parsed = ($trimmed | from json)
+        if ($parsed | describe | str starts-with "record") {
+          $published = ($published | append $parsed)
+        }
+      }
+    }
+  }
+
+  # Collect QA entries
+  let project_root = ($env.ONTOREF_PROJECT_ROOT? | default ($env.ONTOREF_ROOT? | default "."))
+  let qa_file = $"($project_root)/reflection/qa.ncl"
+  mut qa_entries = []
+  if ($qa_file | path exists) {
+    let qa_data = (do { nickel export --format json $qa_file } | complete)
+    if $qa_data.exit_code == 0 {
+      let parsed = ($qa_data.stdout | from json)
+      $qa_entries = ($parsed.entries? | default [] | each { |e|
+        { title: ($e.question? | default ""), content: ($e.answer? | default "") }
+      })
+    }
+  }
+
+  let existing = ($published | append $qa_entries)
+  if ($existing | is-empty) { return [] }
+
+  # Build existing content corpus
+  let existing_texts = ($existing | each { |e|
+    $"($e.title? | default '') ($e.content? | default '')"
+  })
+
+  let stop_words = ["that", "this", "with", "from", "have", "will", "been", "when", "they", "each", "does", "also", "into", "than"]
+
+  mut results = []
+  for entry in $pending_entries {
+    let entry_text = $"($entry.title? | default '') ($entry.content? | default '')"
+    let entry_words = ($entry_text | str downcase | split row --regex '\W+'
+      | where { |w| ($w | str length) > 3 and not ($w in $stop_words) }
+      | sort | uniq)
+
+    if ($entry_words | is-empty) { continue }
+
+    mut max_overlap = 0.0
+    mut most_similar = ""
+
+    for existing_text in $existing_texts {
+      let ex_words = ($existing_text | str downcase | split row --regex '\W+'
+        | where { |w| ($w | str length) > 3 and not ($w in $stop_words) }
+        | sort | uniq)
+
+      if ($ex_words | is-empty) { continue }
+
+      let intersection = ($entry_words | where { |w| $w in $ex_words } | length)
+      let union_size = ($entry_words | append $ex_words | sort | uniq | length)
+      let jaccard = (($intersection | into float) / ($union_size | into float))
+
+      if $jaccard > $max_overlap {
+        $max_overlap = $jaccard
+        $most_similar = ($existing_text | str substring 0..80)
+      }
+    }
+
+    let status = if $max_overlap > $threshold { "REDUNDANT" } else if $max_overlap > ($threshold * 0.7) { "SIMILAR" } else { "NOVEL" }
+
+    $results = ($results | append {
+      title: ($entry.title? | default "?"),
+      status: $status,
+      overlap: ($max_overlap * 100.0 | math round),
+      similar_to: $most_similar,
+    })
+  }
+
+  $results
+}
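The overlap classification above can be sketched compactly in Python, using the same tokenization (lowercase words longer than 3 chars, stop words removed) and the same thresholds (REDUNDANT above 0.60, SIMILAR above 0.7 × threshold = 0.42). Function names here are illustrative, not part of the module:

```python
import re

# Stop words mirror the Nushell stop_words list above.
STOP = {"that", "this", "with", "from", "have", "will", "been", "when",
        "they", "each", "does", "also", "into", "than"}

def words(text: str) -> set[str]:
    """Lowercase tokens longer than 3 chars, stop words removed."""
    return {w for w in re.split(r"\W+", text.lower())
            if len(w) > 3 and w not in STOP}

def classify(entry: str, existing: list[str], threshold: float = 0.60) -> str:
    """Classify an entry by its best Jaccard overlap against existing texts."""
    ew = words(entry)
    best = 0.0
    for other in existing:
        ow = words(other)
        if not ew or not ow:
            continue
        best = max(best, len(ew & ow) / len(ew | ow))
    if best > threshold:
        return "REDUNDANT"
    if best > threshold * 0.7:
        return "SIMILAR"
    return "NOVEL"
```

Identical wording scores Jaccard 1.0 and is flagged REDUNDANT; disjoint vocabularies score 0.0 and pass as NOVEL.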
|||||||
@ -104,12 +104,16 @@ export def "describe capabilities" [
|
|||||||
let just_recipes = (scan-just-recipes $root)
|
let just_recipes = (scan-just-recipes $root)
|
||||||
let ontoref_commands = (scan-ontoref-commands)
|
let ontoref_commands = (scan-ontoref-commands)
|
||||||
let modes = (scan-reflection-modes $root)
|
let modes = (scan-reflection-modes $root)
|
||||||
|
let mode_dags = (scan-reflection-mode-dags $root)
|
||||||
let claude = (scan-claude-capabilities $root)
|
let claude = (scan-claude-capabilities $root)
|
||||||
let ci_tools = (scan-ci-tools $root)
|
let ci_tools = (scan-ci-tools $root)
|
||||||
let manifest_modes = (scan-manifest-modes $root)
|
let manifest_modes = (scan-manifest-modes $root)
|
||||||
let manifest = (load-manifest-safe $root)
|
let manifest = (load-manifest-safe $root)
|
||||||
let manifest_capabilities = ($manifest.capabilities? | default [])
|
let manifest_capabilities = ($manifest.capabilities? | default [])
|
||||||
let backlog_pending = (count-backlog-pending $root)
|
let backlog = (load-backlog-items $root)
|
||||||
|
let adrs = (collect-adr-summary $root)
|
||||||
|
let api_routes = (load-api-catalog-static $root $a)
|
||||||
|
let feature_flags = (collect-cargo-features $root)
|
||||||
|
|
||||||
let data = {
|
let data = {
|
||||||
project_flags: $project_flags,
|
project_flags: $project_flags,
|
||||||
@ -117,11 +121,15 @@ export def "describe capabilities" [
|
|||||||
just_recipes: $just_recipes,
|
just_recipes: $just_recipes,
|
||||||
ontoref_commands: $ontoref_commands,
|
ontoref_commands: $ontoref_commands,
|
||||||
reflection_modes: $modes,
|
reflection_modes: $modes,
|
||||||
|
mode_dags: $mode_dags,
|
||||||
claude_capabilities: $claude,
|
claude_capabilities: $claude,
|
||||||
ci_tools: $ci_tools,
|
ci_tools: $ci_tools,
|
||||||
manifest_modes: $manifest_modes,
|
manifest_modes: $manifest_modes,
|
||||||
manifest_capabilities: $manifest_capabilities,
|
manifest_capabilities: $manifest_capabilities,
|
||||||
backlog_pending: $backlog_pending,
|
backlog: $backlog,
|
||||||
|
adrs: $adrs,
|
||||||
|
api_routes: $api_routes,
|
||||||
|
feature_flags: $feature_flags,
|
||||||
}
|
}
|
||||||
|
|
||||||
emit-output $data $f { || render-capabilities-text $data $a $root }
|
emit-output $data $f { || render-capabilities-text $data $a $root }
|
||||||
@ -191,8 +199,10 @@ export def "describe mode" [
|
|||||||
id: ($mode.id? | default $name),
|
id: ($mode.id? | default $name),
|
||||||
trigger: ($mode.trigger? | default ""),
|
trigger: ($mode.trigger? | default ""),
|
||||||
preconditions: ($mode.preconditions? | default []),
|
preconditions: ($mode.preconditions? | default []),
|
||||||
|
guards: ($mode.guards? | default []),
|
||||||
steps: $steps,
|
steps: $steps,
|
||||||
postconditions: ($mode.postconditions? | default []),
|
postconditions: ($mode.postconditions? | default []),
|
||||||
|
converge: ($mode.converge? | default null),
|
||||||
source: (if ($project_file | path exists) { "project" } else { "ontoref" }),
|
source: (if ($project_file | path exists) { "project" } else { "ontoref" }),
|
||||||
file: $mode_file,
|
file: $mode_file,
|
||||||
}
|
}
|
||||||
@@ -211,6 +221,16 @@ export def "describe mode" [
         for p in $data.preconditions { print $" · ($p)" }
     }
 
+    if ($data.guards | is-not-empty) {
+        print ""
+        print " GUARDS (pre-flight checks)"
+        for g in $data.guards {
+            let sev = ($g.severity? | default "Block")
+            let marker = if $sev == "Block" { "⊘" } else { "⚠" }
+            print $" ($marker) ($g.id) [($sev)]: ($g.reason)"
+        }
+    }
+
     print ""
     print " STEPS"
     print " ──────────────────────────────────────────────────────────────"
@@ -238,6 +258,17 @@ export def "describe mode" [
         }
     }
 
+    if ($data.converge != null) {
+        print ""
+        print " CONVERGENCE"
+        let max = ($data.converge.max_iterations? | default 3)
+        let strat = ($data.converge.strategy? | default "RetryFailed")
+        print $" Iterate until condition met — max ($max) iterations, strategy: ($strat)"
+        if ($data.converge.condition? | default "" | is-not-empty) {
+            print $" $ ($data.converge.condition | str substring 0..120)..."
+        }
+    }
+
     if ($data.postconditions | is-not-empty) {
         print ""
         print " POSTCONDITIONS"
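The renderer above only describes the loop; the executor is what actually iterates. A minimal sketch of the convergence semantics (condition checked each pass, at most `max_iterations` passes, `RetryFailed` re-running only failed work); function and parameter names are hypothetical:

```rust
// Illustrative convergence loop; only the semantics (condition,
// max_iterations, RetryFailed) come from the surrounding diff.
fn converge<F, C>(max_iterations: u32, mut run_failed_steps: F, mut condition_met: C) -> bool
where
    F: FnMut(),
    C: FnMut() -> bool,
{
    for _ in 0..max_iterations {
        if condition_met() {
            return true; // converged: stop iterating
        }
        run_failed_steps(); // RetryFailed: re-run only what failed
    }
    condition_met() // final check after the last iteration
}
```

With `max_iterations: 2` and a zero-drift condition this matches the shape described for `sync-ontology.ncl` in this change.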
@@ -1246,6 +1277,9 @@ export def "describe features" [
     # Cargo deps for the crate if feature maps to one
     let crate_deps = (collect-crate-deps $root $id)
 
+    # //! crate doc — present only when this node's primary artifact is a crate directory.
+    let crate_doc = (find-crate-doc-for-node $root ($node.artifact_paths? | default []))
+
     let data = {
         id: $node.id,
         name: ($node.name? | default $node.id),
@@ -1259,6 +1293,7 @@ export def "describe features" [
         dimensions: $related_dims,
         constraints: $related_constraints,
         crate_deps: $crate_deps,
+        crate_doc: $crate_doc,
     }
 
     emit-output $data $f { || render-feature-detail-text $data $root }
@@ -1302,10 +1337,11 @@ export def "describe guides" [
     # Fetch API surface from daemon; empty list if daemon is not reachable.
     let daemon_url = ($env.ONTOREF_DAEMON_URL? | default "http://127.0.0.1:7891")
     let api_surface = do {
-        let r = (do { http get $"($daemon_url)/api/catalog" } | complete)
+        let r = (do { ^curl -sf --max-time 2 $"($daemon_url)/api/catalog" } | complete)
         if $r.exit_code == 0 {
-            let resp = ($r.stdout | from json)
-            let all = ($resp.routes? | default [])
+            let resp = (do { $r.stdout | from json } | complete)
+            if $resp.exit_code == 0 {
+                let all = ($resp.stdout.routes? | default [])
             if ($a | is-not-empty) {
                 $all | where { |route| $route.actors | any { |act| $act == $a } }
             } else {
@@ -1314,6 +1350,26 @@ export def "describe guides" [
         } else {
             []
         }
+            } else {
+                []
+            }
+        }
+
+    let authoring_conventions = {
+        migration_rule: "Any change to templates/, reflection/schemas/*.ncl, .claude/CLAUDE.md, or consumer-facing reflection/modes/ that consumers need to adopt MUST be accompanied by a new migration in reflection/migrations/. Migrations are the sole propagation mechanism — without one the change never reaches consumer projects.",
+        migration_triggers: [
+            "templates/ — always, templates are installed verbatim into consumer projects",
+            "reflection/schemas/*.ncl — if adds required fields consumers must populate",
+            ".claude/CLAUDE.md — if the section should exist in all consumer CLAUDE.md files",
+            "reflection/modes/*.ncl — if the mode is part of the adoption surface",
+            "adrs/ — only if the ADR introduces a constraint consumers must satisfy",
+        ],
+        migration_not_needed: [
+            "crates/ — Rust implementation, not copied to consumers",
+            "reflection/modules/ — runtime behavior delivered via installed binary",
+            "internal refactors that do not affect the consumer-visible protocol surface",
+        ],
+        migration_authoring: "ontoref migrate list | last — check last id, increment. Schema: { id, slug, description, check: {tag, ...}, instructions }. check.tag: FileExists | Grep | NuCmd. NuCmd must be valid Nushell — no bash operators.",
     }
 
     let data = {
@@ -1334,6 +1390,7 @@ export def "describe guides" [
         capabilities: $manifest_capabilities,
         requirements: $manifest_requirements,
         critical_deps: $manifest_critical_deps,
+        authoring_conventions: $authoring_conventions,
     }
 
     emit-output $data $f {||
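The `migration_authoring` string above fixes the migration record shape: `{ id, slug, description, check, instructions }` with `check.tag` one of `FileExists | Grep | NuCmd`. A hypothetical Rust model of that schema, for illustration only (the authoritative contract is the Nickel schema in the repo):

```rust
// Hypothetical model of the migration check schema described above;
// payload field names beyond `tag` are assumptions.
#[derive(Debug)]
pub enum Check {
    /// Passes when the given path exists in the consumer project.
    FileExists { path: String },
    /// Passes when the pattern matches somewhere in the given file.
    Grep { path: String, pattern: String },
    /// Passes when the Nushell command exits 0 (no bash operators allowed).
    NuCmd { cmd: String },
}

#[derive(Debug)]
pub struct Migration {
    pub id: u32,
    pub slug: String,
    pub description: String,
    pub check: Check,
    pub instructions: String,
}

impl Check {
    /// The `check.tag` discriminator as it appears in the Nickel schema.
    pub fn tag(&self) -> &'static str {
        match self {
            Check::FileExists { .. } => "FileExists",
            Check::Grep { .. } => "Grep",
            Check::NuCmd { .. } => "NuCmd",
        }
    }
}
```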
@@ -1526,6 +1583,62 @@ export def "describe state" [
     }
 }
 
+# Scan src/**/*.rs for `pub` items and check for preceding `///` doc comments.
+# Returns { total: int, documented: int, percent: int }.
+def crate-doc-coverage [crate_full_path: string]: nothing -> record {
+    let src_dir = $"($crate_full_path)/src"
+    if not ($src_dir | path exists) { return { total: 0, documented: 0, percent: 0 } }
+
+    let rs_files = (glob $"($src_dir)/**/*.rs")
+    if ($rs_files | is-empty) { return { total: 0, documented: 0, percent: 0 } }
+
+    mut total = 0
+    mut documented = 0
+
+    for rs in $rs_files {
+        let lines = (open $rs | lines)
+        let indexed = ($lines | enumerate)
+        for row in $indexed {
+            let trimmed = ($row.item | str trim)
+            # Match `pub fn`, `pub struct`, `pub enum`, `pub trait`, `pub type`, `pub const`, `pub mod`.
+            let is_pub = (
+                ($trimmed | str starts-with "pub fn ") or
+                ($trimmed | str starts-with "pub struct ") or
+                ($trimmed | str starts-with "pub enum ") or
+                ($trimmed | str starts-with "pub trait ") or
+                ($trimmed | str starts-with "pub type ") or
+                ($trimmed | str starts-with "pub const ") or
+                ($trimmed | str starts-with "pub mod ") or
+                ($trimmed | str starts-with "pub async fn ") or
+                ($trimmed =~ '^\s*pub\s*\(crate\)')
+            )
+            if not $is_pub { continue }
+            $total = ($total + 1)
+            # Look at the previous non-empty line.
+            if $row.index > 0 {
+                let prev = ($indexed | where { |r| $r.index < $row.index } | reverse | where { |r| ($r.item | str trim | is-not-empty) } | if ($in | is-not-empty) { first } else { null })
+                if ($prev != null) {
+                    let prev_trim = ($prev.item | str trim)
+                    if ($prev_trim | str starts-with "///") or ($prev_trim | str starts-with "#[") {
+                        # Check further back for /// if immediately preceded by an attribute
+                        if ($prev_trim | str starts-with "#[") {
+                            let before_attr = ($indexed | where { |r| $r.index < $prev.index } | reverse | where { |r| ($r.item | str trim | is-not-empty) } | if ($in | is-not-empty) { first } else { null })
+                            if ($before_attr != null) and (($before_attr.item | str trim) | str starts-with "///") {
+                                $documented = ($documented + 1)
+                            }
+                        } else {
+                            $documented = ($documented + 1)
+                        }
+                    }
+                }
+            }
+        }
+    }
+
+    let pct = if $total > 0 { ($documented * 100 / $total | math round) } else { 100 }
+    { total: $total, documented: $documented, percent: $pct }
+}
+
 # ── describe workspace ───────────────────────────────────────────────────────────
 # "What crates are in this workspace and how do they depend on each other?"
 # Reads workspace Cargo.toml + member manifests. Shows only intra-workspace deps —
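`crate-doc-coverage` counts a pub item as documented when a `///` line sits directly above it, or directly above a single attribute line preceding it. In Rust source that convention looks like this (toy items, content invented for illustration):

```rust
// Toy module in the shape crate-doc-coverage counts as "documented".

/// Counted: the doc comment sits directly above the pub item.
pub fn documented_fn() -> u32 {
    41 + 1
}

/// Counted: the scanner skips the one attribute line and finds the ///.
#[derive(Debug, Clone)]
pub struct DocumentedStruct {
    pub field: u32,
}

// NOT counted: a plain // comment is not a doc comment.
pub fn undocumented_fn() -> u32 {
    0
}
```

Note the scanner only looks one non-empty line above an attribute, so items carrying several attribute lines under their `///` would be undercounted by this heuristic.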
@@ -1561,7 +1674,17 @@ export def "describe workspace" [
         let name = ($c | get -o package.name | default ($ct | path dirname | path basename))
         let features = ($c | get -o features | default {} | columns)
         let all_deps = ($c | get -o dependencies | default {})
-        $crates = ($crates | append [{ name: $name, features: $features, all_deps: $all_deps, path: ($ct | path dirname | path relative-to $root) }])
+        let crate_path = ($ct | path dirname | path relative-to $root)
+        let crate_doc = (find-crate-doc-for-node $root [$crate_path])
+        let doc_coverage = (crate-doc-coverage $"($root)/($crate_path)")
+        $crates = ($crates | append [{
+            name: $name,
+            features: $features,
+            all_deps: $all_deps,
+            path: $crate_path,
+            crate_doc: $crate_doc,
+            doc_coverage: $doc_coverage,
+        }])
     }
 }
 
@@ -1571,7 +1694,14 @@ export def "describe workspace" [
     let crates_with_ws_deps = ($crates | each { |cr|
         let dep_names = (try { $cr.all_deps | columns } catch { [] })
         let ws_deps = ($dep_names | where { |d| $d in $crate_names })
-        { name: $cr.name, features: $cr.features, path: $cr.path, depends_on: $ws_deps }
+        {
+            name: $cr.name,
+            features: $cr.features,
+            path: $cr.path,
+            depends_on: $ws_deps,
+            crate_doc: $cr.crate_doc,
+            doc_coverage: $cr.doc_coverage,
+        }
     })
 
     let data = { crates: $crates_with_ws_deps }
@@ -1580,7 +1710,12 @@ export def "describe workspace" [
     print $"(ansi white_bold)Workspace Crates(ansi reset) ($crates_with_ws_deps | length) members"
     print ""
     for cr in $crates_with_ws_deps {
-        print $" (ansi white_bold)($cr.name)(ansi reset) (ansi dark_gray)($cr.path)(ansi reset)"
+        let cov = $cr.doc_coverage
+        let pct = $cov.percent
+        let cov_color = if $pct >= 80 { ansi green } else if $pct >= 50 { ansi yellow } else { ansi red }
+        let doc_tag = if ($cr.crate_doc | is-not-empty) { " ✓ //! doc" } else { " ✗ no //! doc" }
+        print $" (ansi white_bold)($cr.name)(ansi reset) (ansi dark_gray)($cr.path)(ansi reset)(ansi cyan)($doc_tag)(ansi reset)"
+        print $" doc coverage : ($cov_color)($pct)%(ansi reset) ($cov.documented)/($cov.total) public items"
         if ($cr.features | is-not-empty) {
             print $" features : ($cr.features | str join ', ')"
         }
@@ -2189,6 +2324,85 @@ def count-backlog-pending [root: string]: nothing -> int {
     | length
 }
 
+def load-backlog-items [root: string]: nothing -> record {
+    let file = $"($root)/reflection/backlog.ncl"
+    if not ($file | path exists) { return { pending: 0, items: [] } }
+    let ip = (nickel-import-path $root)
+    let backlog = (daemon-export-safe $file --import-path $ip)
+    if $backlog == null { return { pending: 0, items: [] } }
+    let pending = (($backlog.items? | default [])
+        | where { |i| ($i.status? | default "open") not-in ["done", "graduated"] })
+    {
+        pending: ($pending | length),
+        items: ($pending | each { |i| {
+            id: ($i.id? | default ""),
+            title: ($i.title? | default ""),
+            kind: ($i.kind? | default ""),
+            priority: ($i.priority? | default ""),
+            status: ($i.status? | default "open"),
+        }}),
+    }
+}
+
+def load-api-catalog-static [root: string, actor: string]: nothing -> list<record> {
+    let catalog = $"($root)/artifacts/api-catalog-ontoref-daemon.ncl"
+    if not ($catalog | path exists) { return [] }
+    let ip = (nickel-import-path $root)
+    let data = (daemon-export-safe $catalog --import-path $ip)
+    if $data == null { return [] }
+    let all = ($data.routes? | default [])
+    let actor_cap = ($actor | str capitalize)
+    if ($actor | is-not-empty) {
+        $all | where { |r| $r.actors | any { |a| $a == $actor_cap } }
+    } else {
+        $all
+    }
+}
+
+def scan-reflection-mode-dags [root: string]: nothing -> list<record> {
+    let ontoref_modes = (glob $"($env.ONTOREF_ROOT)/reflection/modes/*.ncl")
+    let project_modes = if $root != $env.ONTOREF_ROOT {
+        glob $"($root)/reflection/modes/*.ncl"
+    } else { [] }
+    let all_modes = ($ontoref_modes | append $project_modes | uniq)
+
+    $all_modes | each { |f|
+        let mode_root = if ($f | str starts-with $root) and ($root != $env.ONTOREF_ROOT) { $root } else { $env.ONTOREF_ROOT }
+        let ip = (nickel-import-path $mode_root)
+        let m = (daemon-export-safe $f --import-path $ip)
+        if $m != null {
+            {
+                id: ($m.id? | default ""),
+                trigger: ($m.trigger? | default ""),
+                source: (if ($f | str starts-with $root) and ($root != $env.ONTOREF_ROOT) { "project" } else { "ontoref" }),
+                preconditions: ($m.preconditions? | default []),
+                guards: ($m.guards? | default [] | each { |g| {
+                    id: ($g.id? | default ""),
+                    reason: ($g.reason? | default ""),
+                    severity: ($g.severity? | default "Block"),
+                }}),
+                steps: ($m.steps? | default [] | each { |s| {
+                    id: ($s.id? | default ""),
+                    actor: ($s.actor? | default ""),
+                    action: ($s.action? | default ""),
+                    cmd: ($s.cmd? | default ""),
+                    on_error: ($s.on_error?.strategy? | default "Stop"),
+                    verify: ($s.verify? | default ""),
+                    note: ($s.note? | default ""),
+                    depends_on: ($s.depends_on? | default [] | each { |d| $d.step? | default "" }),
+                }}),
+                postconditions: ($m.postconditions? | default []),
+                converge: (if ($m.converge? | default null) != null {
+                    {
+                        max_iterations: ($m.converge.max_iterations? | default 3),
+                        strategy: ($m.converge.strategy? | default "RetryFailed"),
+                    }
+                } else { null }),
+            }
+        } else { null }
+    } | compact
+}
+
 # ── Feature collectors ────────────────────────────────────────────────────────
 
 def collect-cargo-features [root: string]: nothing -> list<record> {
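`scan-reflection-mode-dags` flattens each step to `{ id, actor, action, cmd, on_error, verify, note, depends_on }`. A hypothetical Rust rendering of that shape, plus the dependency filter a DAG scheduler would apply; these are not the real `mode.rs` types, just a sketch of the data the scan exposes:

```rust
// Hypothetical step shape; field names mirror the Nushell flattening above.
#[derive(Debug, Default)]
pub struct Step {
    pub id: String,
    pub actor: String,
    pub action: String,
    pub cmd: String,
    pub on_error: String,        // error strategy; "Stop" when unspecified
    pub verify: String,          // command that checks the step's effect
    pub note: String,
    pub depends_on: Vec<String>, // ids of steps that must complete first
}

/// Steps whose dependencies are all satisfied are runnable in the current wave.
pub fn runnable<'a>(steps: &'a [Step], done: &[String]) -> Vec<&'a Step> {
    steps
        .iter()
        .filter(|s| s.depends_on.iter().all(|d| done.contains(d)))
        .collect()
}
```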
@@ -2494,8 +2708,9 @@ def render-capabilities-text [data: record, actor: string, root: string]: nothing -> nothing {
     if ($crates | is-not-empty) {
         print $" Crates: ($crates | str join ', ')"
     }
-    if ($data.backlog_pending? | default 0) > 0 {
-        print $" Backlog pending: ($data.backlog_pending)"
+    let bp = ($data.backlog?.pending? | default 0)
+    if $bp > 0 {
+        print $" Backlog pending: ($bp)"
     }
     let open_prs = ($flags.open_prs? | default 0)
     if $open_prs > 0 {
@@ -2611,6 +2826,49 @@ def render-capabilities-text [data: record, actor: string, root: string]: nothing -> nothing {
         }
     }
 
+    if ($data.adrs? | default [] | is-not-empty) {
+        print ""
+        print "ARCHITECTURAL DECISIONS (ADRs)"
+        print "──────────────────────────────────────────────────────────────────"
+        for adr in ($data.adrs? | default []) {
+            let marker = match ($adr.status? | default "") { "Accepted" => "✓", "Proposed" => "?", _ => "×" }
+            let cc = ($adr.constraint_count? | default 0)
+            print $" ($marker) ($adr.id): ($adr.title) [($cc) constraints]"
+        }
+    }
+
+    if ($data.backlog?.items? | default [] | is-not-empty) {
+        print ""
+        print "BACKLOG (open items)"
+        print "──────────────────────────────────────────────────────────────────"
+        for item in ($data.backlog.items | sort-by priority) {
+            let kind_marker = if ($item.kind? | default "") == "Bug" { "⚠" } else { "·" }
+            print $" ($kind_marker) [($item.priority? | default '?')] ($item.id): ($item.title)"
+        }
+    }
+
+    if ($data.feature_flags? | default [] | is-not-empty) {
+        print ""
+        print "FEATURE FLAGS (Cargo)"
+        print "──────────────────────────────────────────────────────────────────"
+        for ff in ($data.feature_flags? | default []) {
+            print $" ($ff.crate)/($ff.feature) → ($ff.enables)"
+        }
+    }
+
+    if ($data.api_routes? | default [] | is-not-empty) {
+        print ""
+        print $"API ROUTES \(actor: ($actor)\)"
+        print "──────────────────────────────────────────────────────────────────"
+        for r in ($data.api_routes? | default []) {
+            let auth = ($r.auth? | default "")
+            print $" ($r.method) ($r.path) [auth: ($auth)]"
+            if ($r.description? | default "" | is-not-empty) {
+                print $" ($r.description)"
+            }
+        }
+    }
+
     print ""
 }
 
@@ -2808,6 +3066,42 @@ def render-why-text [data: record, id: string]: nothing -> nothing {
     }
 }
 
+# Returns the //! crate-level doc for the first artifact_path that is a crate directory.
+# Empty string if no such path exists or no //! lines are present.
+def find-crate-doc-for-node [root: string, paths: list<string>]: nothing -> string {
+    for p in $paths {
+        let p_norm = ($p | str replace --regex '/$' "")
+        let full = $"($root)/($p_norm)"
+        if not ($"($full)/Cargo.toml" | path exists) { continue }
+        # It is a crate directory. Extract //! lines from src/lib.rs or src/main.rs.
+        let entry = (
+            [$"($full)/src/lib.rs", $"($full)/src/main.rs"]
+            | where { |f| $f | path exists }
+        )
+        if ($entry | is-empty) { return "" }
+        let lines = (open ($entry | first) | lines)
+        # Skip #![...] inner attributes and blank lines, then collect //! block.
+        let past_attrs = (
+            $lines
+            | skip while { |l|
+                let t = ($l | str trim)
+                ($t | is-empty) or ($t | str starts-with "#!")
+            }
+        )
+        let doc_lines = (
+            $past_attrs
+            | take while { |l|
+                let t = ($l | str trim)
+                ($t | is-empty) or ($t | str starts-with "//!")
+            }
+            | where { |l| $l | str trim | str starts-with "//!" }
+            | each { |l| $l | str trim | str replace --regex '^//! ?' "" }
+        )
+        return ($doc_lines | str join "\n")
+    }
+    ""
+}
+
 # ── Feature renderers ────────────────────────────────────────────────────────
 
 def render-features-list-text [data: record, root: string]: nothing -> nothing {
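`find-crate-doc-for-node` (and `read-crate-module-doc` in the generator) expect the standard lib.rs layout: optional inner `#![...]` attributes and blank lines first, then one contiguous `//!` block, ended by the first item. An illustrative file in that shape (all content invented):

```rust
// Illustrative lib.rs layout in the shape the extractors parse: leading
// #![...] inner attributes are skipped, then the contiguous //! block is
// collected until the first non-doc line.
#![allow(dead_code)]

//! Example crate.
//!
//! This paragraph would become the crate's page in the generated mdBook
//! crates/ chapter and its IMPLEMENTATION VIEW in `describe features`.

/// The first item after the doc block ends the //! scan.
pub fn entry() -> &'static str {
    "ok"
}
```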
@@ -2937,6 +3231,16 @@ def render-feature-detail-text [data: record, root: string]: nothing -> nothing {
         }
         print ""
     }
+
+    let crate_doc = ($data.crate_doc? | default "")
+    if ($crate_doc | is-not-empty) {
+        print "IMPLEMENTATION VIEW \(crate //! doc\)"
+        print "──────────────────────────────────────────────────────────────────"
+        for line in ($crate_doc | lines) {
+            print $" ($line)"
+        }
+        print ""
+    }
 }
 
 # ── describe connections ──────────────────────────────────────────────────────
@@ -351,6 +351,73 @@ def render-md [data: record]: nothing -> string {
     $lines | str join "\n"
 }
 
+# ── Crate doc helpers ────────────────────────────────────────────────────────
+
+def read-crate-module-doc [crate_full_path: string]: nothing -> string {
+    for entry in [$"($crate_full_path)/src/lib.rs", $"($crate_full_path)/src/main.rs"] {
+        if not ($entry | path exists) { continue }
+        let lines = (open $entry | lines)
+        let past_attrs = (
+            $lines
+            | skip while { |l|
+                let t = ($l | str trim)
+                ($t | is-empty) or ($t | str starts-with "#!")
+            }
+        )
+        let doc_lines = (
+            $past_attrs
+            | take while { |l|
+                let t = ($l | str trim)
+                ($t | is-empty) or ($t | str starts-with "//!")
+            }
+            | where { |l| $l | str trim | str starts-with "//!" }
+            | each { |l| $l | str trim | str replace --regex '^//! ?' "" }
+        )
+        return ($doc_lines | str join "\n")
+    }
+    ""
+}
+
+def count-pub-coverage [crate_full_path: string]: nothing -> record {
+    let src_dir = $"($crate_full_path)/src"
+    if not ($src_dir | path exists) { return { total: 0, documented: 0, percent: 0 } }
+    let rs_files = (glob $"($src_dir)/**/*.rs")
+    if ($rs_files | is-empty) { return { total: 0, documented: 0, percent: 0 } }
+    mut total = 0
+    mut documented = 0
+    for rs in $rs_files {
+        let lines = (open $rs | lines)
+        let indexed = ($lines | enumerate)
+        for row in $indexed {
+            let t = ($row.item | str trim)
+            let is_pub = (
+                ($t | str starts-with "pub fn ") or ($t | str starts-with "pub struct ") or
+                ($t | str starts-with "pub enum ") or ($t | str starts-with "pub trait ") or
+                ($t | str starts-with "pub type ") or ($t | str starts-with "pub const ") or
+                ($t | str starts-with "pub mod ") or ($t | str starts-with "pub async fn ")
+            )
+            if not $is_pub { continue }
+            $total = ($total + 1)
+            if $row.index > 0 {
+                let prev = ($indexed | where { |r| $r.index < $row.index } | reverse | where { |r| ($r.item | str trim | is-not-empty) } | if ($in | is-not-empty) { first } else { null })
+                if ($prev != null) {
+                    let pt = ($prev.item | str trim)
+                    if ($pt | str starts-with "///") {
+                        $documented = ($documented + 1)
+                    } else if ($pt | str starts-with "#[") {
+                        let before = ($indexed | where { |r| $r.index < $prev.index } | reverse | where { |r| ($r.item | str trim | is-not-empty) } | if ($in | is-not-empty) { first } else { null })
+                        if ($before != null) and (($before.item | str trim) | str starts-with "///") {
+                            $documented = ($documented + 1)
+                        }
+                    }
+                }
+            }
+        }
+    }
+    let pct = if $total > 0 { ($documented * 100 / $total | math round) } else { 100 }
+    { total: $total, documented: $documented, percent: $pct }
+}
+
 # ── Format: mdBook ───────────────────────────────────────────────────────────
 
 def render-mdbook [data: record, root: string] {
@@ -430,6 +497,76 @@ def render-mdbook [data: record, root: string] {
     ))
     ($modes_lines | str join "\n") | save -f $"($docs_src)/modes/README.md"
 
+    # Crates chapter — one page per workspace member, sourced from //! docs
+    mut crate_pages = []
+    if ($data.identity.crates | is-not-empty) {
+        mkdir $"($docs_src)/crates"
+        for c in $data.identity.crates {
+            let full_path = $"($root)/($c.path)"
+            let module_doc = (read-crate-module-doc $full_path)
+            let coverage = (count-pub-coverage $full_path)
+
+            # Which practice nodes list this crate as first artifact_path?
+            let c_norm = ($c.path | str replace --regex '/$' "")
+            let implementing = (
+                $data.architecture.practices
+                | where { |p|
+                    let first = ($p.artifact_paths? | default [] | if ($in | is-not-empty) { first } else { "" } | str replace --regex '/$' "")
+                    $first == $c_norm
+                }
+            )
+
+            # Cargo feature flags from Cargo.toml
+            let cargo_file = $"($full_path)/Cargo.toml"
+            let features = if ($cargo_file | path exists) {
+                let cargo_data = (open $cargo_file)
+                let feat_map = ($cargo_data | get -o features | default {})
+                $feat_map | transpose key value | where { |r| $r.key != "default" } | each { |r|
+                    let deps = ($r.value | each { |d| $" - `($d)`" } | str join "\n")
+                    if ($deps | is-not-empty) {
+                        $"- `($r.key)` — enables:\n($deps)"
+                    } else {
+                        $"- `($r.key)`"
+                    }
+                }
+            } else { [] }
+
+            let coverage_badge = if $coverage.percent >= 80 {
+                $"✅ ($coverage.percent)% \(($coverage.documented)/($coverage.total) pub items\)"
+            } else if $coverage.percent >= 50 {
+                $"⚠️ ($coverage.percent)% \(($coverage.documented)/($coverage.total) pub items\)"
+            } else {
+                $"❌ ($coverage.percent)% \(($coverage.documented)/($coverage.total) pub items\)"
+            }
+
+            mut page_lines = [$"# ($c.name)" ""]
+            if ($module_doc | is-not-empty) {
+                $page_lines = ($page_lines | append [$module_doc ""])
+            } else {
+                $page_lines = ($page_lines | append [$"> ⚠️ No `//!` module doc found in `src/lib.rs`." ""])
+            }
+            $page_lines = ($page_lines | append [$"**Doc coverage:** ($coverage_badge)" ""])
+            if ($features | is-not-empty) {
+                $page_lines = ($page_lines | append (["## Feature Flags" ""] | append $features | append ""))
+            }
+            if ($implementing | is-not-empty) {
+                let practice_links = ($implementing | each { |p| $"- **($p.id)** — ($p.name)" })
+                $page_lines = ($page_lines | append (["## Implements" ""] | append $practice_links | append ""))
+            }
+
+            let slug = ($c.name | str replace "--" "-")
+            let page_path = $"($docs_src)/crates/($slug).md"
+            ($page_lines | str join "\n") | save -f $page_path
+            $crate_pages = ($crate_pages | append { name: $c.name, slug: $slug })
+        }
+
+        # Crates index
+        let crates_index = (["# Crates" ""] | append (
+            $crate_pages | each { |cp| $"- [($cp.name)](($cp.slug).md)" }
+        ))
+        ($crates_index | str join "\n") | save -f $"($docs_src)/crates/README.md"
+    }
+
     # SUMMARY.md
     mut summary = [
         "# Summary"
@@ -447,6 +584,12 @@ def render-mdbook [data: record, root: string] {
         $summary = ($summary | append $"- [($adr.id)](decisions/($adr.id).md)")
     }
     $summary = ($summary | append ["" "# Operations" "" "- [Modes](modes/README.md)"])
+    if ($crate_pages | is-not-empty) {
+        $summary = ($summary | append ["" "# Crates" "" "- [Overview](crates/README.md)"])
+        for cp in $crate_pages {
+            $summary = ($summary | append $" - [($cp.name)](crates/($cp.slug).md)")
+        }
+    }
     ($summary | str join "\n") | save -f $"($docs_src)/SUMMARY.md"
 
     # book.toml
@@ -472,6 +615,10 @@ preferred-dark-theme = \"navy\"
     let adr_count = ($data.decisions_all | length)
     print $" (ansi green)Generated:(ansi reset) ($adr_count) ADR pages in ($docs_src)/decisions/"
     print $" (ansi green)Generated:(ansi reset) ($docs_src)/modes/README.md"
+    if ($crate_pages | is-not-empty) {
+        let crate_count = ($crate_pages | length)
+        print $" (ansi green)Generated:(ansi reset) ($crate_count) crate pages in ($docs_src)/crates/"
+    }
 
     # Build if mdbook is available
     let has_mdbook = (do { ^which mdbook } | complete | get exit_code) == 0
@@ -153,6 +153,19 @@ export def "step report" [
 
     # Validate step exists in mode DAG
     let mode_data = (load-mode-dag $mode)
+
+    # Validate guards — warn if any Block guard would fail (informational in step report context)
+    let guards = ($mode_data.guards? | default [])
+    let blocking_guards = ($guards | where { |g| ($g.severity? | default "Block") == "Block" })
+    for g in $blocking_guards {
+        let result = (do { bash -c $g.cmd } | complete)
+        if $result.exit_code != 0 {
+            if $fmt != "json" {
+                print $"(ansi yellow)WARN(ansi reset): guard '($g.id)' would block mode execution: ($g.reason)"
+            }
+        }
+    }
+
     let matching_steps = ($mode_data.steps? | default [] | where { |s| $s.id == $step_id })
     let step_def = if ($matching_steps | is-not-empty) { $matching_steps | first } else { null }
     if ($step_def | is-empty) {
@@ -50,6 +50,8 @@ export def "sync scan" [
 export def "sync diff" [
     --quick,              # Skip nickel exports; parse NCL text directly for speed
     --level: string = "", # Extra checks: "full" adds ADR violations, content assets, connection health
+    --docs,               # Check for drift between crate //! doc comments and ontology node descriptions
+    --fail-on-drift,      # Exit 1 if any DRIFT items are found (for pre-commit use)
 ]: nothing -> table {
     let root = (project-root)
     let scan = (sync scan --level structural)
@@ -201,7 +203,27 @@ export def "sync diff" [
         }
     }

-    $report | sort-by status id
+    if $docs {
+        let doc_scan = (sync scan --level structural)
+        let doc_drifts = (check-crate-doc-drift $root $nodes $doc_scan)
+        $report = ($report | append $doc_drifts)
+    }
+
+    let sorted = ($report | sort-by status id)
+
+    if $fail_on_drift {
+        let drifts = ($sorted | where status == "DRIFT")
+        if ($drifts | is-not-empty) {
+            print $"(ansi red)✗ ($drifts | length) crate\(s\) have doc drift:(ansi reset)"
+            for d in $drifts {
+                print $" ($d.id) ($d.artifact_path)"
+                print $" ($d.detail)"
+            }
+            exit 1
+        }
+    }
+
+    $sorted
 }

 # ── propose ───────────────────────────────────────────────────────────────────
@@ -515,6 +537,7 @@ export def "sync audit" [
     let justfile_results = if $quick { [] } else { audit-justfiles ($scan.justfiles? | default { exists: false }) $manifest }
     let claude_results = if $quick { [] } else { audit-claude ($scan.claude? | default { exists: false }) $manifest }
     let tools_results = if $quick { [] } else { audit-tools $manifest }
+    let manifest_cov_results = if $quick { [] } else { audit-manifest-coverage $root }

     # Health score (0-100) — pooled weighted model
     #
@@ -531,7 +554,7 @@ export def "sync audit" [

     let adr_pass = ($adr_results | where status == "PASS" | length)
     let adr_total = ($adr_results | length)
-    let all_infra = ($justfile_results | append $claude_results | append $tools_results)
+    let all_infra = ($justfile_results | append $claude_results | append $tools_results | append $manifest_cov_results)
     let jc_pass = ($all_infra | where status == "PASS" | length)
     let jc_total = ($all_infra | length)

@@ -645,6 +668,16 @@ export def "sync audit" [
         }
         print ""
     }
+
+    # Show manifest capability coverage issues
+    let manifest_issues = ($manifest_cov_results | where { |r| $r.status != "PASS" })
+    if ($manifest_issues | is-not-empty) {
+        print " Manifest capability gaps:"
+        for p in $manifest_issues {
+            print $" [($p.status)] ($p.check): ($p.detail)"
+        }
+        print ""
+    }
 }

 if $strict and (($missing_count + $stale_count + $broken_count) > 0) {
@@ -654,6 +687,41 @@ export def "sync audit" [
     $report
 }

+# ── manifest-check ───────────────────────────────────────────────────────────
+# Quick manifest capability coverage check — used by pre-commit hooks and CI.
+# Exits 0 if no Hard failures. Prints warnings but does not block on Soft issues.
+export def "sync manifest-check" [
+    --strict, # Also fail on WARN (Soft) issues
+]: nothing -> nothing {
+    let root = (project-root)
+    let results = (audit-manifest-coverage $root)
+    let fails = ($results | where status == "FAIL")
+    let warns = ($results | where status == "WARN")
+
+    if ($fails | is-not-empty) {
+        for f in $fails {
+            print $"(ansi red)✗ ($f.detail)(ansi reset)"
+        }
+        error make { msg: $"Manifest coverage: ($fails | length) Hard failure\(s\)" }
+    }
+
+    if $strict and ($warns | is-not-empty) {
+        for w in $warns {
+            print $"(ansi yellow)⚠ ($w.check): ($w.detail)(ansi reset)"
+        }
+        error make { msg: $"Manifest coverage: ($warns | length) warnings \(strict mode\)" }
+    }
+
+    if ($warns | is-not-empty) {
+        print $"(ansi yellow)⚠ ($warns | length) manifest coverage warnings — run `ontoref sync audit` for details(ansi reset)"
+    }
+
+    let pass = ($results | where status == "PASS")
+    if ($pass | is-not-empty) {
+        print $"(ansi green)✓ ($pass | first | get detail)(ansi reset)"
+    }
+}
+
 # ── watch ─────────────────────────────────────────────────────────────────────

 # Launch bacon in headless mode with ontology-watch job, monitor drift file.
@@ -741,6 +809,8 @@ def parse-single-crate [crate_dir: string, level: string, root: string]: nothing
         extract-pub-api $crate_dir $name
     } else { [] }

+    let crate_doc = (extract-crate-doc $crate_dir)
+
     {
         name: $name,
         path: ($crate_dir | path relative-to $root),
@@ -749,6 +819,7 @@ def parse-single-crate [crate_dir: string, level: string, root: string]: nothing
         src_modules: $src_modules,
         test_count: $test_count,
         pub_api: $pub_api,
+        crate_doc: $crate_doc,
     }
 }

@@ -783,6 +854,94 @@ def extract-pub-api [crate_dir: string, crate_name: string]: nothing -> list {
     }
 }

+def extract-crate-doc [crate_dir: string]: nothing -> string {
+    let candidates = [$"($crate_dir)/src/lib.rs", $"($crate_dir)/src/main.rs"]
+    let existing = ($candidates | where { |f| $f | path exists })
+    if ($existing | is-empty) { return "" }
+
+    open --raw ($existing | first)
+    | lines
+    | skip while { |l|
+        let t = ($l | str trim)
+        ($t | is-empty) or ($t | str starts-with "#![")
+    }
+    | take while { |l|
+        let t = ($l | str trim)
+        ($t | is-empty) or ($t | str starts-with "//!")
+    }
+    | where { |l| $l | str starts-with "//!" }
+    | each { |l| $l | str replace --regex '^//!\s?' "" }
+    | str join "\n"
+    | str trim
+}
+
+def word-overlap [a: string, b: string]: nothing -> float {
+    let stop = ["that", "this", "with", "from", "have", "will", "been", "when", "they", "each"]
+    let words_a = ($a | str downcase | split row --regex '\W+'
+        | where { |w| ($w | str length) > 3 and not ($w in $stop) } | sort | uniq)
+    let words_b = ($b | str downcase | split row --regex '\W+'
+        | where { |w| ($w | str length) > 3 and not ($w in $stop) } | sort | uniq)
+    if ($words_a | is-empty) or ($words_b | is-empty) { return 0.0 }
+    let intersection = ($words_a | where { |w| $w in $words_b } | length)
+    let union_size = ($words_a | append $words_b | sort | uniq | length)
+    ($intersection | into float) / ($union_size | into float)
+}
+
+def check-crate-doc-drift [root: string, nodes: list<record>, scan: record]: nothing -> list<record> {
+    let crates = ($scan.crates? | default [])
+    if ($crates | is-empty) { return [] }
+
+    mut drifts = []
+
+    for node in $nodes {
+        let paths = ($node.artifact_paths? | default [])
+        if ($paths | is-empty) { continue }
+
+        let node_desc = ($node.description? | default "" | str trim)
+        # First sentence of description for summary-level comparison.
+        # Split on ". " (period + space) to avoid breaking on path fragments like ".ontology/".
+        let node_summary = ($node_desc | split row ". " | where { |s| not ($s | str trim | is-empty) } | if ($in | is-not-empty) { first | str trim } else { "" })
+
+        # Only check the FIRST artifact_path for crate-doc drift.
+        # A crate appearing mid-list means "this crate implements the concept" —
+        # different abstraction level from the concept description itself.
+        let first_path = ($paths | if ($in | is-not-empty) { first } else { "" })
+        let p_norm = ($first_path | str replace --regex '/$' "")
+        let matched = ($crates | where { |c| $c.path == $p_norm })
+
+        if ($matched | is-not-empty) {
+            let cr = ($matched | first)
+            let crate_doc = ($cr.crate_doc? | default "" | str trim)
+
+            if ($crate_doc | is-not-empty) {
+                let crate_first = ($crate_doc | lines | where { |l| $l | is-not-empty } | if ($in | is-not-empty) { first } else { "" })
+
+                if ($node_summary | is-empty) {
+                    $drifts = ($drifts | append {
+                        status: "DRIFT",
+                        id: $node.id,
+                        artifact_path: $cr.path,
+                        detail: $"Node has no description. Crate //! doc: \"($crate_first | str substring 0..120)\"",
+                    })
+                } else {
+                    # Compare first sentences: both are "summary" granularity regardless of total length.
+                    let overlap = (word-overlap $node_summary $crate_first)
+                    if $overlap < 0.20 {
+                        $drifts = ($drifts | append {
+                            status: "DRIFT",
+                            id: $node.id,
+                            artifact_path: $cr.path,
+                            detail: $"Doc mismatch \(overlap: ($overlap * 100 | math round)%\). Node: \"($node_summary | str substring 0..80)\" | Crate: \"($crate_first | str substring 0..80)\"",
+                        })
+                    }
+                }
+            }
+        }
+    }
+
+    $drifts
+}
+
 def scan-scenarios [root: string]: nothing -> list {
     let scenarios_dir = $"($root)/reflection/scenarios"
     if not ($scenarios_dir | path exists) {
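The drift detector added above reduces to a plain Jaccard similarity over filtered word sets: lowercase, split on non-word characters, drop words of four characters or fewer and a small stopword list, then divide intersection by union. A stand-alone Python sketch of the same metric — an illustrative port, not part of the codebase:

```python
import re

# Stopword list mirrors the one in word-overlap above.
STOP = {"that", "this", "with", "from", "have", "will", "been", "when", "they", "each"}

def tokenize(s: str) -> set[str]:
    # Lowercase, split on non-word chars, keep words longer than 3 chars, drop stopwords
    return {w for w in re.split(r"\W+", s.lower()) if len(w) > 3 and w not in STOP}

def word_overlap(a: str, b: str) -> float:
    # Jaccard similarity |A ∩ B| / |A ∪ B|; 0.0 when either side tokenizes to nothing
    words_a, words_b = tokenize(a), tokenize(b)
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)
```

With the 0.20 threshold used by `check-crate-doc-drift`, two summaries sharing no substantive words score 0.0 and flag as drift, while paraphrases of the same sentence comfortably clear it.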
@@ -985,8 +1144,11 @@ def scan-claude [root: string]: nothing -> record {
     # Layout conventions
     let has_layout = ($"($claude_dir)/layout_conventions.md" | path exists)

-    # Session hook
-    let has_session_hook = ($"($claude_dir)/ontoref-session-start.sh" | path exists)
+    # Session hook — check both legacy and current locations
+    let has_session_hook = (
+        ($"($claude_dir)/hooks/session-context.sh" | path exists)
+        or ($"($claude_dir)/ontoref-session-start.sh" | path exists)
+    )

     {
         exists: true,
@@ -1190,7 +1352,10 @@ def find-unclaimed-artifacts [scan: record, root: string, node_paths: list<strin
     }

     for f in ($scan.forms? | default []) {
-        let form_rel = $f.path
+        # form.path may be absolute — normalize to relative for comparison
+        let form_rel = if ($f.path | str starts-with $root) {
+            $f.path | path relative-to $root
+        } else { $f.path }
         let claimed = ($node_paths | any { |np| $form_rel starts-with $np or $np starts-with $form_rel })
         if not $claimed {
             $unclaimed = ($unclaimed | append { id: $"form-($f.name)", path: $form_rel, kind: "form" })
@@ -1198,7 +1363,10 @@ def find-unclaimed-artifacts [scan: record, root: string, node_paths: list<strin
     }

     for m in ($scan.modes? | default []) {
-        let mode_rel = $m.path
+        # mode.path may be absolute — normalize to relative for comparison
+        let mode_rel = if ($m.path | str starts-with $root) {
+            $m.path | path relative-to $root
+        } else { $m.path }
         let claimed = ($node_paths | any { |np| $mode_rel starts-with $np or $np starts-with $mode_rel })
         if not $claimed {
             $unclaimed = ($unclaimed | append { id: $"mode-($m.name)", path: $mode_rel, kind: "mode" })
@@ -1381,7 +1549,7 @@ def audit-claude [claude: record, manifest: record]: nothing -> list {
     let checks = [
         ["claude-md" ($claude.has_claude_md? | default false) "CLAUDE.md"]
         ["layout-conventions" ($claude.has_layout_conventions? | default false) "layout_conventions.md"]
-        ["session-hook" ($claude.has_session_hook? | default false) "ontoref-session-start.sh"]
+        ["session-hook" ($claude.has_session_hook? | default false) "hooks/session-context.sh"]
     ]

     for c in $checks {
@@ -1402,6 +1570,102 @@ def audit-claude [claude: record, manifest: record]: nothing -> list {
     $results
 }

+# Audit manifest capability completeness.
+# Cross-references Practice nodes, reflection modes, and daemon UI pages
+# against declared capabilities[].artifacts and capabilities[].nodes to find
+# undeclared functionality that agents and sessions will never see.
+def audit-manifest-coverage [root: string]: nothing -> list {
+    let manifest = (load-manifest-safe $root)
+    let caps = ($manifest.capabilities? | default [])
+
+    if ($caps | is-empty) {
+        return [{
+            check: "manifest-capabilities",
+            status: "FAIL",
+            detail: "No capabilities declared in manifest.ncl — project functionality invisible to agents",
+            severity: "Hard",
+        }]
+    }
+
+    let all_cap_nodes = ($caps | each { |c| $c.nodes? | default [] } | flatten | uniq)
+    let all_cap_artifacts = ($caps | each { |c| $c.artifacts? | default [] } | flatten)
+
+    mut results = []
+
+    # Check 1: Practice nodes without any capability referencing them
+    let core_ncl = $"($root)/.ontology/core.ncl"
+    if ($core_ncl | path exists) {
+        let core = (daemon-export-safe $core_ncl)
+        if $core != null {
+            let practices = ($core.nodes? | default [] | where { |n| ($n.level? | default "") == "Practice" })
+            let uncovered = ($practices | where { |p|
+                not ($all_cap_nodes | any { |n| $n == $p.id })
+            })
+            for p in $uncovered {
+                $results = ($results | append {
+                    check: "practice-node-uncovered",
+                    status: "WARN",
+                    detail: $"Practice node '($p.id)' \(($p.name)\) has no capability referencing it in nodes[]",
+                    severity: "Soft",
+                })
+            }
+        }
+    }
+
+    # Check 2: Reflection modes not mentioned in any capability's artifacts
+    let modes_dir = $"($root)/reflection/modes"
+    if ($modes_dir | path exists) {
+        let mode_files = (glob $"($modes_dir)/*.ncl" | each { |f| $f | path basename | str replace ".ncl" "" })
+        for mode_id in $mode_files {
+            let mode_path = $"reflection/modes/($mode_id).ncl"
+            let referenced = ($all_cap_artifacts | any { |a|
+                ($a == $mode_path) or ($a | str starts-with "reflection/modes/")
+            })
+            if not $referenced {
+                $results = ($results | append {
+                    check: "mode-uncovered",
+                    status: "WARN",
+                    detail: $"Reflection mode '($mode_id)' not referenced by any capability — invisible to agents",
+                    severity: "Soft",
+                })
+            }
+        }
+    }
+
+    # Check 3: Capability quality — summaries too thin or missing artifacts
+    for c in $caps {
+        let summary = ($c.summary? | default "")
+        if ($summary | str length) < 30 {
+            $results = ($results | append {
+                check: "capability-thin-summary",
+                status: "WARN",
+                detail: $"Capability '($c.id)' summary under 30 chars — too thin for agent comprehension",
+                severity: "Soft",
+            })
+        }
+        let arts = ($c.artifacts? | default [])
+        if ($arts | is-empty) {
+            $results = ($results | append {
+                check: "capability-no-artifacts",
+                status: "WARN",
+                detail: $"Capability '($c.id)' declares no artifacts — cannot verify existence",
+                severity: "Soft",
+            })
+        }
+    }
+
+    if ($results | is-empty) {
+        [{
+            check: "manifest-coverage",
+            status: "PASS",
+            detail: $"All ($caps | length) capabilities verified: practice nodes covered, modes referenced, summaries adequate",
+            severity: "Hard",
+        }]
+    } else {
+        $results
+    }
+}
+
 def audit-gates [root: string, diff: list]: nothing -> list {
     let gate_ncl = $"($root)/.ontology/gate.ncl"
     if not ($gate_ncl | path exists) { return [] }
@@ -261,6 +261,32 @@ export def run-mode [id: string, --dry-run, --yes] {
         print ""
     }

+    # ── Guards (Active Partner) ──
+    let guards = ($m.guards? | default [])
+    if ($guards | is-not-empty) {
+        print $" (ansi white_bold)Guards(ansi reset)"
+        mut blocked = false
+        for g in $guards {
+            let result = (do { bash -c $g.cmd } | complete)
+            let severity = ($g.severity? | default "Block")
+            if $result.exit_code != 0 {
+                if $severity == "Block" {
+                    print $" (ansi red_bold)✗ BLOCKED(ansi reset) ($g.id): ($g.reason)"
+                    $blocked = true
+                } else {
+                    print $" (ansi yellow)⚠ WARN(ansi reset) ($g.id): ($g.reason)"
+                }
+            } else {
+                print $" (ansi green)✓ PASS(ansi reset) ($g.id)"
+            }
+        }
+        print ""
+        if $blocked {
+            print $" (ansi red)Execution blocked by guard failure. Fix the issue and retry.(ansi reset)"
+            return
+        }
+    }
+
     if $dry_run {
         print $" (ansi yellow_bold)DRY RUN(ansi reset) — showing steps without executing"
         print ""
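The guard loop added above comes down to a small decision table: run each guard's command, treat a non-zero exit as failure, and abort only when a failing guard's severity is Block; Warn failures print and continue. A minimal Python sketch of just that decision logic, with command execution replaced by a recorded exit code (names are illustrative, not from the codebase):

```python
from dataclasses import dataclass

@dataclass
class Guard:
    id: str
    exit_code: int           # outcome of running the guard's cmd (0 = pass)
    severity: str = "Block"  # "Block" aborts execution, "Warn" only prints

def evaluate_guards(guards: list[Guard]) -> tuple[bool, list[str]]:
    """Return (blocked, messages); blocked is True iff any Block-severity guard failed."""
    blocked, messages = False, []
    for g in guards:
        if g.exit_code != 0:
            if g.severity == "Block":
                messages.append(f"BLOCKED {g.id}")
                blocked = True
            else:
                messages.append(f"WARN {g.id}")
        else:
            messages.append(f"PASS {g.id}")
    return blocked, messages
```

Note that all guards are evaluated even after one blocks, so the operator sees every failing precondition in one pass rather than fixing them one at a time.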
@@ -356,6 +382,72 @@ export def run-mode [id: string, --dry-run, --yes] {
         print ""
     }

+    # ── Convergence check (Refinement Loop) ──
+    let converge = ($m.converge? | default null)
+    if $converge != null {
+        let max_iter = ($converge.max_iterations? | default 3)
+        let conv_strategy = ($converge.strategy? | default "RetryFailed")
+        let conv_cmd = ($converge.condition? | default "")
+
+        if ($conv_cmd | is-not-empty) {
+            mut iteration = 1
+            mut converged = false
+
+            # Check initial convergence after first execution
+            let check_result = (do { bash -c $conv_cmd } | complete)
+            if $check_result.exit_code == 0 {
+                $converged = true
+            }
+
+            while (not $converged) and ($iteration <= $max_iter) {
+                print $" (ansi cyan_bold)CONVERGE(ansi reset) iteration ($iteration)/($max_iter) — condition not met, re-executing ($conv_strategy)"
+                print ""
+
+                # Re-execute steps based on strategy
+                let retry_steps = if $conv_strategy == "RetryFailed" {
+                    $steps | where { |s| $failed_steps | any { |f| $f == $s.id } }
+                } else {
+                    $steps
+                }
+
+                $failed_steps = []
+                for step in ($retry_steps | enumerate) {
+                    let s = $step.item
+                    if not (actor-can-run-step ($s.actor? | default "Both")) { continue }
+                    if ($s.cmd? | is-empty) or ($s.cmd == "") { continue }
+
+                    print-step-header $s $step.index ($retry_steps | length)
+                    let result = (exec-step-cmd $s.cmd)
+                    if $result.success {
+                        print $" (ansi green)OK(ansi reset)"
+                    } else {
+                        let strategy = ($s.on_error? | default {} | get -o strategy | default "Stop")
+                        $failed_steps = ($failed_steps | append $s.id)
+                        if $strategy == "Stop" {
+                            print $" (ansi red_bold)FAILED(ansi reset) — aborting convergence"
+                            break
+                        } else {
+                            print $" (ansi yellow)FAILED(ansi reset) — continuing"
+                        }
+                    }
+                    print ""
+                }
+
+                let check_result = (do { bash -c $conv_cmd } | complete)
+                if $check_result.exit_code == 0 {
+                    $converged = true
+                    print $" (ansi green_bold)CONVERGED(ansi reset) condition met after ($iteration) iteration\(s\)"
+                }
+                $iteration = ($iteration + 1)
+            }
+
+            if not $converged {
+                print $" (ansi yellow_bold)NOT CONVERGED(ansi reset) condition not met after ($max_iter) iterations"
+            }
+            print ""
+        }
+    }
+
     # ── Postconditions ──
     if ($postconditions | is-not-empty) {
         print $" (ansi white_bold)Postconditions(ansi reset)"
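The convergence block added above follows one control-flow pattern: check the condition once after the first execution, then re-run either the failed steps (RetryFailed) or the whole DAG (RetryAll) until the condition command exits 0 or max_iterations is exhausted. A Python sketch of that control flow with step execution stubbed out — illustrative only, not the executor's actual code:

```python
def converge(check, run_steps, failed, steps, max_iterations=3, strategy="RetryFailed"):
    """check() -> bool (condition met); run_steps(subset) -> list of failed step ids.

    Returns (converged, iterations_executed).
    """
    iteration = 1
    converged = check()  # initial check after the first full execution
    while not converged and iteration <= max_iterations:
        # RetryFailed re-runs only previously failed steps; RetryAll re-runs everything
        retry = [s for s in steps if s in failed] if strategy == "RetryFailed" else list(steps)
        failed = run_steps(retry)
        converged = check()
        iteration += 1
    return converged, iteration - 1
```

The `max_iterations` bound is what keeps a never-satisfied condition from looping forever — matching the zero-drift condition with `max 2 iter` that `sync-ontology.ncl` declares.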
@@ -12,6 +12,45 @@ let _OnError = {
   backoff_s | Number | default = 5,
 } in

+# ── Guard ────────────────────────────────────────────────────────────────────
+# Executable pre-flight check that runs BEFORE any step in the mode.
+# If a guard fails, the mode prints the reason and aborts — preventing agents
+# and humans from executing procedures that violate active constraints.
+# Guards turn silent constraint violations into loud, early blocks.
+#
+# Pattern: Active Partner (#1 from Augmented Coding Patterns)
+# "Explicitly grant permission and encourage AI to push back."
+# Guards are the mechanism by which the protocol pushes back.
+#
+# cmd:      shell command that exits 0 = pass, non-zero = blocked
+# reason:   human-readable explanation shown when the guard blocks
+# severity: 'Block aborts execution, 'Warn prints warning but continues
+let _Guard = {
+  id | String,
+  cmd | String,
+  reason | String,
+  severity | [| 'Block, 'Warn |] | default = 'Block,
+} in
+
+# ── Converge ─────────────────────────────────────────────────────────────────
+# Post-execution convergence check for iterative modes.
+# After all steps complete, the executor evaluates the condition command.
+# If it exits non-zero, the mode re-executes (failed steps or all steps)
+# up to max_iterations times.
+#
+# Pattern: Refinement Loop (#36 from Augmented Coding Patterns)
+# "Each iteration removes a layer of noise, making the next layer visible."
+#
+# condition:      shell command — exit 0 = converged, non-zero = iterate again
+# max_iterations: upper bound on re-execution cycles (prevents infinite loops)
+# strategy:       'RetryFailed re-runs only steps that failed or were blocked;
+#                 'RetryAll re-runs the entire DAG from scratch
+let _Converge = {
+  condition | String,
+  max_iterations | Number | default = 3,
+  strategy | [| 'RetryFailed, 'RetryAll |] | default = 'RetryFailed,
+} in
+
 let _ActionStep = fun ActionContract => {
   id | String,
   action | ActionContract,
@@ -27,8 +66,10 @@ let _ModeBase = fun ActionContract => {
   id | String,
   trigger | String,
   preconditions | Array String | default = [],
+  guards | Array _Guard | default = [],
   steps | Array (_ActionStep ActionContract),
   postconditions | Array String | default = [],
+  converge | _Converge | optional,
 } in

 # DAG-validated Mode contract:
@@ -73,6 +114,8 @@ in
 {
   Dependency = _Dependency,
   OnError = _OnError,
+  Guard = _Guard,
+  Converge = _Converge,
   ActionStep = _ActionStep,
   Mode = _Mode,
 }