# Ontoref Full Adoption Prompt

**Purpose:** Complete onboarding of `{project_name}` into the ontoref protocol — or bring
an existing adoption up to the current version. Covers all adoption layers in dependency
order: protocol infrastructure → ontology enrichment → config surface → API surface →
manifest self-interrogation.

**Substitutions required before use:**
- `{project_name}` — kebab-case project identifier
- `{project_dir}` — absolute path to project root
- `{ontoref_source_dir}` — absolute path to the ontoref source checkout (only needed for
  Cargo path dependencies in Phases 3c and 4a; not needed if ontoref crates are not used
  as Rust dependencies)

**Run as:** `ontoref --actor developer` from `{project_dir}` (requires `ontoref` installed
globally via `just install-daemon` from the ontoref repo).

---

## Bootstrap — source ontoref env vars

Before running any direct `nickel export` command, source the ontoref env into the current
shell. This sets `NICKEL_IMPORT_PATH` and `ONTOREF_ROOT` without launching a full command:

```sh
cd {project_dir}
. $(which ontoref) --env-only
# NICKEL_IMPORT_PATH and ONTOREF_ROOT are now available in this shell session
```

All `nickel export` commands in this prompt assume these vars are set. Re-run the source
line if you open a new terminal.
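
A quick sanity check (a minimal sketch — the only assumptions are the two variable names exported above) confirms the env actually landed before continuing:

```shell
# Report which ontoref env vars are set in this shell; re-source if any are missing.
for v in NICKEL_IMPORT_PATH ONTOREF_ROOT; do
  eval "val=\${$v:-}"
  if [ -n "$val" ]; then
    echo "ok: $v=$val"
  else
    echo "MISSING: $v — re-run '. \$(which ontoref) --env-only'"
  fi
done
```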

---

## Phase 0 — Read the project first

Do not write anything until you have read and understood the project. This phase is not
optional — subsequent phases require accurate knowledge of what the project actually does.

```sh
cd {project_dir}

# Purpose, architecture, stack
cat README.md
cat .claude/CLAUDE.md 2>/dev/null || true
cat CLAUDE.md 2>/dev/null || true

# Existing ontology state
test -f .ontology/core.ncl && \
  nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/core.ncl \
  | jq '{nodes: [.nodes[] | {id, name, level}], edge_count: (.edges | length)}'

# Manifest if present
test -f .ontology/manifest.ncl && \
  nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/manifest.ncl \
  | jq '{repo_kind, description}'

# Rust crates and their purposes
cat Cargo.toml 2>/dev/null | grep -A2 '\[workspace\]' || cat Cargo.toml 2>/dev/null | head -20
ls crates/ 2>/dev/null || true

# Config loading pattern: does the project use nickel export + serde?
grep -rl 'nickel export\|DaemonNclConfig\|ConfigLoader\|config_from_ncl' \
  crates/ src/ 2>/dev/null | head -10

# HTTP handlers: does the project expose an HTTP API?
grep -rl '#\[get\|#\[post\|#\[put\|Router::new\|axum\|actix' \
  crates/ src/ 2>/dev/null | head -10

# External services: what does it connect to?
grep -rl 'SurrealDb\|nats\|postgres\|redis\|reqwest\|http::Client' \
  crates/ src/ 2>/dev/null | head -10
```

Identify:
- What the project does for each audience (developer, agent, CI, end user)
- Whether it uses NCL schemas for configuration (Nickel-validated-overrides applies)
- Whether it exposes an HTTP API (`#[onto_api]` annotation applies)
- What external services it depends on (critical_deps candidates)
- What the existing ontology covers vs what is missing

---

## Phase 1 — Protocol infrastructure

Add missing v2 files. All steps are additive — nothing is overwritten.

```sh
cd {project_dir}

# Detect missing files
test -f .ontology/manifest.ncl && echo "manifest: present" || echo "manifest: MISSING"
test -f .ontology/connections.ncl && echo "connections: present" || echo "connections: MISSING"

# Add manifest.ncl if missing (template is installed with ontoref)
test -f .ontology/manifest.ncl || \
  sed "s/{{ project_name }}/{project_name}/g" \
    "$ONTOREF_ROOT/templates/ontology/manifest.ncl" > .ontology/manifest.ncl

# Add connections.ncl if missing
test -f .ontology/connections.ncl || \
  sed "s/{{ project_name }}/{project_name}/g" \
    "$ONTOREF_ROOT/templates/ontology/connections.ncl" > .ontology/connections.ncl

# Validate both parse
nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/manifest.ncl \
  > /dev/null && echo "manifest: ok"
nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/connections.ncl \
  > /dev/null && echo "connections: ok"
```

Check for deprecated `check_hint` fields in ADRs and migrate to typed `check` variants.
See `$ONTOREF_ROOT/reflection/templates/update-ontology-prompt.md` Phase 7 for the
migration mapping.

---

## Phase 2 — Ontology enrichment (core.ncl, state.ncl)

Follow `$ONTOREF_ROOT/reflection/templates/update-ontology-prompt.md` Phases 2–6 in full.

Key rules:
- Add nodes only for things that actually exist in code today — no aspirational nodes
- Advance dimension states only when transition conditions are demonstrably met
- Update `blocker` and `catalyst` to reflect current reality
- Every edit must pass `nickel export` before continuing to the next node

After completing ontology enrichment, run:
```sh
nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/core.ncl \
  | jq '{nodes: (.nodes|length), edges: (.edges|length)}'
```

---

## Phase 3 — Config surface

**Skip this phase if the project has no structured NCL configuration system.**

This phase has three parts that build on each other: declare the config surface in the
manifest, apply the nickel-validated-overrides pattern to Rust services, and register
struct fields via `#[derive(ConfigFields)]`.

### 3a. Declare config_surface in manifest.ncl

Open `.ontology/manifest.ncl`. Identify:
- Where the project's NCL config files live (`config_root`)
- What the entry-point file is (`entry_point`, usually `config.ncl`)
- What sections exist (each top-level key in the merged export)
- Who reads each section (Rust structs, Nu scripts, CI pipelines)

```nickel
config_surface = m.make_config_surface {
  config_root = "config/",               # adjust to project layout
  entry_point = "config.ncl",
  kind = 'NclMerge,                      # 'NclMerge | 'SingleFile | 'TypeDialog
  contracts_path = "nickel/contracts/",  # where NCL contract files live
  overrides_dir = "",                    # defaults to config_root

  sections = [
    m.make_config_section {
      id = "server",                     # must match top-level NCL key
      file = "config.ncl",
      contract = "contracts.ncl -> ServerConfig",
      description = "...",
      rationale = "...",
      mutable = true,
      consumers = [
        m.make_config_consumer {
          id = "{project_name}-backend",
          kind = 'RustStruct,
          ref = "crates/{crate}/src/config.rs -> ServerConfig",
          fields = [],                   # leave empty once #[derive(ConfigFields)] is in place
        },
      ],
    },
  ],
},
```

Validate:
```sh
nickel export --import-path "$NICKEL_IMPORT_PATH" .ontology/manifest.ncl | jq .config_surface
```

### 3b. Nickel-validated-overrides pattern

**Apply this if the project has Rust services that read NCL config AND accept env var or
CLI argument overrides.** Without this, env overrides bypass all NCL contract validation.

**The core insight:** JSON is valid Nickel syntax. Write env overrides as JSON to a temp
`.ncl` file and pass it as a second argument to `nickel export`. Nickel merges both files
and applies contracts to the merged result before any Rust struct is populated.

```text
OLD: load_from_ncl() → deserialize → apply_env_overrides(&mut self)   ← bypasses Nickel
NEW: collect_env_overrides() → nickel export base.ncl overrides.ncl → deserialize
```
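
The merge can be seen end to end with nothing but a scratch file (paths here are illustrative — substitute this project's real entry point):

```shell
# JSON written verbatim to a .ncl file is already a valid Nickel record:
printf '{"server": {"port": 8090}}\n' > overrides.ncl
cat overrides.ncl
# Passing it as a second file makes Nickel merge it into the base config and
# re-check contracts on the merged result (command shown, not run here):
#   nickel export --format json config/config.ncl overrides.ncl
# An out-of-range override fails at that step, before any Rust code runs.
```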

**Step 1 — Add `export_with_overrides` to the config loader:**

In the crate that calls `nickel export` (typically a platform-config or config-loader crate):

```rust
pub fn export_with_overrides(
    ncl_path: &Path,
    import_path: &str,
    overrides: &serde_json::Value,
) -> Result<String, ConfigError> {
    if overrides.is_null() || overrides == &serde_json::Value::Object(Default::default()) {
        return plain_nickel_export(ncl_path, import_path);
    }
    let tmp = tempfile::NamedTempFile::with_suffix(".ncl")?;
    serde_json::to_writer(&tmp, overrides)?;
    let output = std::process::Command::new("nickel")
        .args(["export", "--format", "json"])
        .arg(ncl_path)
        .arg(tmp.path())
        .arg("--import-path").arg(import_path)
        .output()?;
    // tmp is deleted automatically when it goes out of scope at the end of this function
    if output.status.success() {
        Ok(String::from_utf8(output.stdout)?)
    } else {
        Err(ConfigError::NickelContract(String::from_utf8_lossy(&output.stderr).into()))
    }
}
```

**Step 2 — Replace `apply_env_overrides(&mut self)` with `collect_env_overrides()`:**

```rust
// REMOVE: fn apply_env_overrides(&mut self) { self.port = env::var("PORT")... }
// ADD:
pub fn collect_env_overrides() -> serde_json::Value {
    let mut overrides = serde_json::json!({});
    // JSON shape must match NCL schema nesting exactly
    if let Ok(port) = std::env::var("SERVER_PORT") {
        if let Ok(n) = port.parse::<u16>() {
            overrides["server"]["port"] = n.into();
        }
    }
    // ... one block per env var, following the NCL schema nesting
    overrides
}
```
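
One easy mistake here (illustrative, not taken from any codebase): emitting a flat dotted key instead of nested objects. Nickel merges records structurally, so only the nested shape reaches the `server` contract — a flat key typically either fails the merge against a closed contract or silently adds an unrelated top-level field:

```shell
# Nested — merges into the existing "server" record:
printf '%s\n' '{"server": {"port": 8090}}'
# Flat — a single field literally named "server.port"; the override
# never reaches the ServerConfig contract:
printf '%s\n' '{"server.port": 8090}'
```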

**Step 3 — Wire through the load path:**

```rust
pub fn load() -> Result<Self, ConfigError> {
    let overrides = Self::collect_env_overrides();
    let json = export_with_overrides(&config_path, &import_path, &overrides)?;
    Ok(serde_json::from_str(&json)?)
}
```

**Step 4 — Verify NCL schemas have real constraints, not bare type annotations:**

```sh
grep -n '| Number\b\|| String\b' {project_dir}/nickel/**/*.ncl 2>/dev/null || \
  grep -rn '| Number\b\|| String\b' {project_dir}/config/ 2>/dev/null
```

Bare `| Number` passes any number. Constraints must use `from_validator` or `from_predicate`:

```nickel
# Weak — any number passes
port | Number = 8080

# Strong — contract rejects values outside 1024-65535
port | std.contract.from_validator (fun port =>
  if port >= 1024 && port <= 65535 then 'Ok
  else 'Error { message = "port must be 1024-65535, got %{std.string.from_number port}" }) = 8080
```

Without real constraints in NCL, the overrides pattern has no enforcement teeth.

**Validation tests:**

```sh
# Invalid override must produce a Nickel contract error, NOT start silently
SERVER_PORT=99999 cargo run -- 2>&1 | grep -i "nickel\|contract\|error"

# Valid override must start with the overridden value
SERVER_PORT=8090 cargo run -- 2>&1 | grep "port.*8090"

cargo clippy -- -D warnings
```

### 3c. `#[derive(ConfigFields)]` for compile-time field registration

Annotate every Rust struct that deserialises a config section. This gives the ontoref
daemon accurate field lists without hand-maintaining `fields[]` in the manifest.

**Add dependency** in the crate containing config structs (since `{ontoref_source_dir}`
is an absolute path, the path dependency works from any crate):

```toml
[dependencies]
ontoref-ontology = { path = "{ontoref_source_dir}/crates/ontoref-ontology" }
```

**Annotate config structs:**

```rust
use ontoref_ontology::ConfigFields;

#[derive(Deserialize, ConfigFields)]
#[config_section(id = "server", ncl_file = "config/config.ncl")]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
    // #[serde(rename = "tls_enabled")] is respected — the renamed name is registered
    pub tls: bool,
}
```

Rules:
- `id` must match the section `id` in `manifest.ncl → config_surface.sections[*].id`
- `ncl_file` is relative to the project root
- Only top-level fields of the annotated struct are registered; annotate nested structs
  separately if their fields need to appear in the coherence report

**Verify registration:**

```rust
#[test]
fn config_fields_registered() {
    for entry in inventory::iter::<ontoref_ontology::ConfigFieldsEntry> {
        assert!(!entry.fields.is_empty(), "section {} has no fields", entry.section_id);
        eprintln!("section={} struct={} fields={:?}",
            entry.section_id, entry.struct_name, entry.fields);
    }
}
```

**Coherence integration test:**

```rust
#[test]
fn ncl_rust_coherence() {
    let root = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
        .ancestors().find(|p| p.join(".ontology").exists())
        .expect("project root not found");

    for entry in inventory::iter::<ontoref_ontology::ConfigFieldsEntry> {
        let ncl_path = root.join(entry.ncl_file);
        let out = std::process::Command::new("nickel")
            .args(["export", "--format", "json"])
            .arg(&ncl_path).current_dir(root).output().unwrap();
        assert!(out.status.success(), "nickel export failed: {}", entry.ncl_file);

        let json: serde_json::Value = serde_json::from_slice(&out.stdout).unwrap();
        // Fall back to the file root for files that export the section body directly
        let section = json.get(entry.section_id).or(Some(&json))
            .and_then(|v| v.as_object())
            .unwrap_or_else(|| panic!("section '{}' missing", entry.section_id));

        let ncl_keys: std::collections::BTreeSet<&str> = section.keys().map(String::as_str).collect();
        let rust_fields: std::collections::BTreeSet<&str> = entry.fields.iter().copied().collect();
        let missing: Vec<_> = rust_fields.difference(&ncl_keys).collect();
        assert!(missing.is_empty(),
            "{} declares fields not in NCL: {:?}", entry.struct_name, missing);
    }
}
```

Once `#[derive(ConfigFields)]` is in place, remove the `fields = [...]` lists from the
corresponding `config_consumer` entries in `manifest.ncl` — the derive macro is now
the source of truth.
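
A quick heuristic sweep (a sketch — plain pattern matching, not a parser, so treat hits only as candidates) finds structs that derive `Deserialize` but have not yet picked up `ConfigFields`:

```shell
# List Rust files that derive Deserialize but never mention ConfigFields.
grep -rln 'derive(.*Deserialize' crates/ src/ 2>/dev/null | while read -r f; do
  grep -q 'ConfigFields' "$f" || echo "check: $f"
done
```

Not every `Deserialize` struct is a config section, so review each hit before annotating.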

---

## Phase 4 — API surface

**Skip this phase if the project exposes no HTTP API.**

If the project has an HTTP API served by axum or actix-web, annotate each handler with
`#[onto_api]` so the daemon can surface the full annotated catalog at `GET /api/catalog`.

### 4a. Add ontoref-derive dependency

Since `{ontoref_source_dir}` is an absolute path, these path dependencies work from any
consuming crate:

```toml
[dependencies]
ontoref-ontology = { path = "{ontoref_source_dir}/crates/ontoref-ontology" }
ontoref-derive = { path = "{ontoref_source_dir}/crates/ontoref-derive" }
```

### 4b. Annotate HTTP handlers

```rust
use ontoref_derive::onto_api;

#[onto_api(
    method = "GET",
    path = "/api/things",
    description = "List all things with optional filter.",
    auth = "bearer",
    actors = ["developer", "agent"],
    params = [("filter", "string", false, "optional substring filter")],
    tags = ["things", "read"]
)]
async fn list_things(/* axum extractors */) -> impl IntoResponse { /* ... */ }
```

Fields:
- `method` — HTTP verb in caps: `"GET" | "POST" | "PUT" | "DELETE" | "PATCH"`
- `path` — route path as registered in the router (e.g. `"/projects/{slug}/things"`)
- `description` — one sentence, agent-readable
- `auth` — `"bearer" | "admin" | "none"`
- `actors` — list of actors allowed: `"developer" | "agent" | "ci" | "admin"`
- `params` — array of `(name, type, required, description)` tuples
- `tags` — grouping tags for catalog filtering

**Register inventory collection** in the crate's `lib.rs` or `main.rs`:

```rust
// In ontoref-ontology already: inventory::collect!(ApiRouteEntry);
// In your crate this is automatic — inventory::submit! is emitted by the macro.
```

### 4c. Expose the catalog endpoint

Add the catalog route to the router (if not already provided by ontoref-daemon):

```rust
// If embedding ontoref-daemon routes:
// GET /api/catalog is already registered by ontoref-daemon's router.

// If building a standalone service with its own router, add:
use ontoref_ontology::ApiRouteEntry;

async fn api_catalog() -> axum::Json<serde_json::Value> {
    let routes: Vec<_> = inventory::iter::<ApiRouteEntry>
        .into_iter()
        .map(|r| serde_json::json!({
            "method": r.method, "path": r.path, "description": r.description,
            "auth": r.auth, "actors": r.actors, "params": r.params, "tags": r.tags,
        }))
        .collect();
    axum::Json(serde_json::json!({ "routes": routes }))
}
```

### 4d. Export the catalog for ontoref UI visibility

The ontoref daemon reads `#[onto_api]` entries from its own `inventory` at runtime.
Consumer projects run as separate binaries — their entries are never linked into the
ontoref-daemon process. To make the API surface visible in the ontoref UI, generate a
static `api-catalog.json` in the project root and commit it.

**Add `--dump-api-catalog` to the daemon binary's `Cli` struct** (in `main.rs`):

```rust
/// Print all #[onto_api] registered routes as JSON and exit.
/// Pipe to api-catalog.json so the ontoref UI can display this project's
/// API surface when registered as a non-primary slug.
#[arg(long)]
dump_api_catalog: bool,
```

Add an early-exit handler before the server starts (same pattern as `--hash-password`):

```rust
if cli.dump_api_catalog {
    println!("{}", ontoref_ontology::api::dump_catalog_json());
    return;
}
```

**Add the just recipe** — in `justfiles/assets.just` (create if absent, import in root `justfile`):

```just
# Export this daemon's API catalog to api-catalog.json.
# Run after any #[onto_api] annotation is added or changed.
# Read by the ontoref UI for non-primary project slugs.
[doc("Export #[onto_api] routes to api-catalog.json")]
export-api-catalog:
    cargo run -p {daemon_crate} --no-default-features -- --dump-api-catalog > api-catalog.json
    @echo "exported routes to api-catalog.json"
```

Replace `{daemon_crate}` with the crate name of the project's HTTP daemon binary.

**Run and commit:**

```sh
just export-api-catalog
git add api-catalog.json
```

Commit `api-catalog.json` alongside the `#[onto_api]` annotations — they change together.

### 4e. Verify

```sh
cargo check --all-targets
just export-api-catalog
# Confirm routes were captured (handles both a bare array and a {routes: [...]} wrapper):
jq '.routes? // . | map({method, path, tags})' api-catalog.json
```
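
If `jq` is unavailable, a cruder smoke check (a sketch — it only greps for a known key) still catches an empty export:

```shell
# api-catalog.json must exist, be non-empty, and mention at least one method.
if test -s api-catalog.json && grep -q '"method"' api-catalog.json; then
  echo "catalog looks populated"
else
  echo "catalog empty or missing — re-run 'just export-api-catalog'"
fi
```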

---

## Phase 5 — Manifest self-interrogation

Populate `capabilities[]`, `requirements[]`, and `critical_deps[]` in `.ontology/manifest.ncl`.

Follow `$ONTOREF_ROOT/reflection/templates/manifest-self-interrogation-prompt.md` in full.

**Quick reference for the three types:**

```nickel
capabilities = [
  m.make_capability {
    id = "kebab-id",
    name = "Name",
    summary = "One line: what does this capability do?",
    rationale = "Why it exists. What was rejected.",
    how = "Key patterns, entry points, data flows.",
    artifacts = ["crates/foo/", "GET /api/foo"],
    adrs = ["adr-001"],           # IDs that formalize decisions here
    nodes = ["practice-node-id"], # IDs from .ontology/core.ncl
  },
],

requirements = [
  m.make_requirement {
    id = "id",
    name = "Name",
    env = 'Both,   # 'Production | 'Development | 'Both
    kind = 'Tool,  # 'Tool | 'Service | 'EnvVar | 'Infrastructure
    version = "",
    required = true,
    impact = "What breaks if absent.",
    provision = "How to install/set/provision.",
  },
],

critical_deps = [
  m.make_critical_dep {
    id = "id",
    name = "crate-or-service",
    ref = "crates.io: foo",
    used_for = "Which capabilities depend on this.",
    failure_impact = "What breaks if this dep disappears or breaks its contract.",
    mitigation = "Feature flags, fallback builds, alternatives.",
  },
],
```
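
Before moving on, a cheap structural check (a sketch — it greps the manifest source rather than exporting it) confirms all three lists are declared:

```shell
# Each of the three lists must be declared somewhere in the manifest.
for key in capabilities requirements critical_deps; do
  if grep -q "$key *= *\[" .ontology/manifest.ncl 2>/dev/null; then
    echo "present: $key"
  else
    echo "MISSING: $key"
  fi
done
```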

---

## Phase 6 — Final validation

```sh
cd {project_dir}

# All .ontology/ files
for f in .ontology/core.ncl .ontology/state.ncl .ontology/gate.ncl \
    .ontology/manifest.ncl .ontology/connections.ncl; do
  test -f "$f" && \
    nickel export --import-path "$NICKEL_IMPORT_PATH" "$f" > /dev/null \
    && echo "ok: $f" || echo "FAIL: $f"
done

# All ADRs
for f in adrs/adr-*.ncl; do
  nickel export --import-path "$NICKEL_IMPORT_PATH" "$f" > /dev/null \
    && echo "ok: $f" || echo "FAIL: $f"
done

# Rust: build, lint, tests
cargo check --all-targets --all-features
cargo clippy --all-targets --all-features -- -D warnings
cargo test config_fields_registered -- --nocapture 2>/dev/null || true
cargo test ncl_rust_coherence -- --nocapture 2>/dev/null || true

# Describe output
ontoref --actor developer describe project
ontoref --actor developer describe requirements
ONTOREF_ACTOR=agent ontoref describe capabilities | jq '.capabilities | length'
```

---

## Checklist

### Protocol layer
- [ ] `.ontology/manifest.ncl` present and exports cleanly
- [ ] `.ontology/connections.ncl` present and exports cleanly
- [ ] `core.ncl` nodes reflect current implementation (no aspirational nodes)
- [ ] `state.ncl` dimension states match current reality
- [ ] All `check_hint` fields migrated to typed `check` variants

### Config surface
- [ ] `config_surface` declared in `manifest.ncl` (if project uses NCL config)
- [ ] All sections have `id`, `file`, `consumers` with accurate kinds
- [ ] Nickel-validated-overrides: `collect_env_overrides()` implemented (if applicable)
- [ ] Nickel-validated-overrides: `apply_env_overrides(&mut self)` removed
- [ ] NCL schemas have real constraints (`from_validator`, not bare `| Number`)
- [ ] `#[derive(ConfigFields)]` on all config structs that read NCL sections
- [ ] `cargo test config_fields_registered` passes
- [ ] `cargo test ncl_rust_coherence` passes
- [ ] `fields = [...]` removed from manifest consumers once derive is in place

### API surface
- [ ] `#[onto_api]` on all HTTP handlers (if project has an HTTP API)
- [ ] `GET /api/catalog` returns non-empty routes list
- [ ] All routes have accurate `auth`, `actors`, `tags`

### Manifest self-interrogation
- [ ] `description` field populated (non-empty)
- [ ] At least 1 `capability` entry with non-empty `summary`
- [ ] `capabilities[].nodes[]` verified against `core.ncl` node IDs
- [ ] At least 1 `requirement` per relevant environment
- [ ] All `critical_deps` have non-empty `failure_impact`

### Delivery
- [ ] `describe project` returns complete picture
- [ ] `describe requirements` renders without errors
- [ ] No orphaned `describe diff` changes (everything committed or staged intentionally)
- [ ] Do NOT commit — developer reviews the diff first