diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index a959ebf1..8fe8677f 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -2,162 +2,81 @@ ## Planning Sources Of Truth -Do not duplicate the repo's "active plan" inside `CONTRIBUTING.md`. -That information drifts too easily here. +- `docs/method/backlog/` — lane-organized backlog (inbox/, asap/, up-next/, cool-ideas/, bad-code/) +- `docs/design//` — active cycle work; backlog item promotes here +- `CHANGELOG.md` — what has landed +- `docs/method/retro//` — closed cycle retrospectives -Instead, use these sources: +No milestones. No ROADMAP. Cycles are the unit of work. +See [METHOD.md](../METHOD.md) for the full process. -- `BACKLOG/README.md` for the currently active cycle and promotable pre-design - slices -- `docs/ROADMAP.md` for committed release and milestone inventory -- `CHANGELOG.md` for what has already landed on the branch or in released - versions -- `docs/design/` for the governing design notes promoted from active backlog - items +## Cycle Process -If these artifacts disagree, reconcile them as part of the cycle close instead -of letting `CONTRIBUTING.md` become a second planning registry. +A cycle is one backlog item, start to finish: -## Development Loop +1. **Pull** — promote a backlog item to `docs/design//` +2. **Design** — write the design doc; add hills, playback questions, non-goals +3. **Spec** — write failing tests as executable spec +4. **Implement** — make the tests pass +5. **Close** — retrospective, drift audit, CHANGELOG, tech debt journal, + cool ideas -This repo follows the same disciplined cycle used by higher-layer products built -on git-warp: +### Retrospectives -1. design docs first -2. tests as executable spec second -3. implementation third -4. playback, retrospective, and reconciliation after the slice lands +Every closed cycle gets a retrospective in `docs/method/retro//`. +At minimum: -Tests are the spec. 
Design docs define intent and invariants. Implementation -follows. +1. Governing design docs and backlog IDs +2. What actually landed +3. Design Alignment Audit — label each point as `aligned`, `partially aligned`, + or `not aligned` +4. Observed drift — classify as deliberate tradeoff, implementation shortcut, + hidden constraint, test gap, or design ambiguity +5. Resolution — update design docs, add follow-on backlog item, or fix + immediately -When a `BACKLOG/` item is selected for active work, promote it into -`docs/design/` before writing tests. - -For non-trivial work, use IBM Design Thinking style framing: - -- sponsor actors -- hills -- playbacks -- explicit non-goals - -Keep that vocabulary in the design method. Do not leak it into the runtime -ontology unless the substrate truly needs a first-class concept. - -## Retrospectives - -Retrospectives are not optional cleanup. Every closed slice should leave behind -an explicit retrospective, and that retrospective must audit the landed changes -against the intended design. - -At minimum, every retrospective should include: - -1. governing design docs and backlog IDs -2. what actually landed -3. a `Design Alignment Audit` section -4. any observed drift -5. whether the drift is accepted, rejected, or deferred - -The `Design Alignment Audit` should check the implemented slice against the -intended invariants and label each major point as: - -- `aligned` -- `partially aligned` -- `not aligned` - -If implementation drift occurred, the retrospective must say why: - -- deliberate tradeoff -- implementation shortcut -- hidden pre-existing constraint -- test gap -- design ambiguity - -And it must say how the repo resolves that drift: - -- update the design docs -- add a follow-on `BACKLOG/` item -- immediately fix the implementation in the next slice - -Do not treat a passing test suite as proof that the design was honored. 
The -retro is where we verify that the code matches the intended architecture, not -just the executable spec that happened to be written. - -## Checkpoints - -Most slices should pass through four checkpoints: - -1. doctrine -2. spec -3. semantic -4. surface - -For git-warp, "surface" often means public API, CLI, or documentation surface -rather than a GUI. - -Local red while iterating is acceptable. Shared branches, pushes intended for -review, and merge submissions should be green. +Do not treat a passing test suite as proof that the design was honored. ## Getting Started -1. Clone the repository -2. Install dependencies: `npm install` -3. Set up git hooks: `npm run setup:hooks` -4. Run tests: `npm test` +```bash +git clone git@github.com:git-stunts/git-warp.git +cd git-warp +npm install # installs deps, sets up git hooks +npm run test:local # run unit tests +``` ## Git Hooks -This project uses custom git hooks located in `scripts/hooks/`. Run `npm run setup:hooks` to enable them. -- Hooks are also auto-configured on `npm install` (no-op if not a git repo). -- `pre-commit` runs eslint on staged JS files. -- `pre-push` runs `npm run lint`, `npm test`, `npm run benchmark`, and the Docker bats CLI suite (`git-warp` commands). +Custom hooks in `scripts/hooks/`, auto-configured on `npm install`. -### Pre-commit Hook - -The pre-commit hook runs ESLint on all staged JavaScript files. If linting fails, the commit is blocked. - -To fix lint errors: -```bash -npx eslint --fix -``` - -To bypass temporarily (use sparingly): -```bash -git commit --no-verify -``` +- **pre-commit** — ESLint on staged JS files +- **pre-push** — 8-gate IRONCLAD firewall (tsc, policy, consumer types, + ESLint, ratchet, surface, markdown, tests) ## Code Style -- ESLint enforces code style. Run `npx eslint .` to check. -- Use template literals instead of string concatenation -- Always use curly braces for if/else blocks -- Keep functions focused and avoid deep nesting +- ESLint enforces style. 
Run `npx eslint .` to check. +- Template literals over concatenation +- Always use curly braces for if/else +- Keep functions focused, avoid deep nesting ## Running Tests ```bash -npm test # Run all unit tests (Docker) -npm run test:local # Run unit tests without Docker -npm test -- # Run specific tests - -# Multi-runtime test matrix (Docker) -npm run test:node22 # Node 22: unit + integration + BATS CLI -npm run test:bun # Bun: API integration tests -npm run test:deno # Deno: API integration tests -npm run test:matrix # All runtimes in parallel +npm run test:local # Unit tests without Docker +npm test # Unit tests (Docker) +npm run test:matrix # Full multi-runtime matrix (Docker) ``` ### No-Coordination Invariant -The no-coordination regression suite is non-negotiable for multi-writer safety. -Ensure `test/unit/domain/WarpGraph.noCoordination.test.js` passes before submitting changes. +`test/unit/domain/WarpGraph.noCoordination.test.js` is non-negotiable for +multi-writer safety. Must pass before any PR. ## Pull Requests -1. Create a feature branch from `main` -2. Make your changes with clear commit messages -3. Keep commits documentation-atomic: when a change affects shipped behavior, public surface, or backlog status, update `CHANGELOG.md` and the roadmap/backlog docs in the same commit. -4. When a `BACKLOG/` item becomes active, promote it into `docs/design/` before implementation. When roadmap work completes, reconcile `docs/ROADMAP.md` and `docs/ROADMAP/COMPLETED.md` in the same commit. -5. Ensure all tests pass: `npm test` -6. Ensure linting passes: `npx eslint .` -7. Submit a PR with a clear description +1. Branch from the latest green branch +2. Clear commit messages; docs-atomic (CHANGELOG + code in same commit) +3. All tests pass, all lint gates pass +4. 
Submit PR with clear description diff --git a/.github/maintainers/README.md b/.github/maintainers/README.md index 2fc18ccd..cf861474 100644 --- a/.github/maintainers/README.md +++ b/.github/maintainers/README.md @@ -18,8 +18,8 @@ evaluating or using the product API. ## Related project artifacts -- [Backlog](../../BACKLOG/README.md) - Active and promoted work tracked as repo-operating artifacts. +- [Backlog](../../docs/method/backlog/) + Lane-organized backlog items with legend prefixes. - [Design notes](../../docs/design/) Governing design docs for promoted backlog items and active cycles. - [Retrospectives](../../docs/archive/retrospectives/) diff --git a/BACKLOG/OG-001-worldline-api.md b/BACKLOG/OG-001-worldline-api.md deleted file mode 100644 index b06d3bab..00000000 --- a/BACKLOG/OG-001-worldline-api.md +++ /dev/null @@ -1,38 +0,0 @@ -# OG-001 — First-Class `Worldline` API - -Status: DONE - -Promoted to: `docs/design/worldline-observer-api-phasing.md` - -## Problem - -Read-side coordinates are still expressed indirectly through mutable -`WarpRuntime` session handles instead of a first-class history noun. - -## Why This Matters - -The observer rewrite is not complete until callers can target immutable history -through a proper `Worldline` API rather than by treating `WarpRuntime` as both a -session and a snapshot. - -## Promotion - -This item was promoted when the next slice began defining the public read-side -API shape after the detached observer-boundary repair work. - -## Outcome - -The minimal first-class `Worldline` surface landed on 2026-03-27: - -- `WarpRuntime.worldline()` now returns a worldline handle -- `Worldline.materialize()` resolves detached snapshots -- `Worldline.observer()` creates observers pinned to the worldline source -- `Worldline.seek()` returns a new worldline handle - -Further work on tick-indexed coordinates and richer worldline identity now -belongs to later slices rather than this initial noun-introduction item. 
- -See also: - -- `docs/design/worldline-observer-api-phasing.md` -- `docs/archive/retrospectives/2026-03-27-worldline-minimal-phase-b.md` diff --git a/BACKLOG/OG-002-warpgraph-role-split.md b/BACKLOG/OG-002-warpgraph-role-split.md deleted file mode 100644 index cd7461ba..00000000 --- a/BACKLOG/OG-002-warpgraph-role-split.md +++ /dev/null @@ -1,32 +0,0 @@ -# OG-002 — Split Mutable Session `WarpRuntime` From Immutable Snapshot Noun - -Status: DONE - -Promoted to: `docs/design/warpstate-runtime-noun-split.md` - -Completed in: `15.0.0` - -## Problem - -`WarpRuntime` had been carrying too many roles at once under the old -`WarpGraph` noun: mutable session handle, -materialization driver, and the intended immutable snapshot noun. - -## Why This Matters - -The new observer/worldline model will stay semantically muddy until the public -names make the substrate boundary obvious. - -## Promotion - -This item was promoted when `Worldline` and immutable observer/worldline -handles made the remaining `WarpGraph` noun overload the next explicit cleanup -decision. - -## Outcome - -The hard major-version cut landed: - -- public runtime noun is now `WarpRuntime` -- `WarpGraph` was removed instead of preserved as a compatibility alias -- package version bumped to `15.0.0` diff --git a/BACKLOG/OG-003-snapshot-immutability.md b/BACKLOG/OG-003-snapshot-immutability.md deleted file mode 100644 index cf450baf..00000000 --- a/BACKLOG/OG-003-snapshot-immutability.md +++ /dev/null @@ -1,41 +0,0 @@ -# OG-003 — Deepen Public Snapshot Immutability - -Status: DONE - -Promoted to: `docs/design/snapshot-immutability-hardening.md` - -Completed on: `2026-03-27` - -## Problem - -Public materialize APIs now return detached state, but nested `Map` structures -are still writable by callers in their local copy. - -## Why This Matters - -The current slice fixed aliasing, not full immutability. 
Snapshot hashing and -read-only semantics would be stronger if callers could not mutate the public -structure at all. - -## Promotion Trigger - -Promoted when the runtime rename was complete and the remaining read-side gap -was reduced to one concrete problem: detached snapshots still exposed mutable -nested containers. - -## Outcome - -This slice landed with one shared immutable-snapshot helper that now hardens: - -- `WarpRuntime.materialize(...)` -- `WarpRuntime.materializeCoordinate(...)` -- `WarpRuntime.materializeStrand(...)` -- `WarpRuntime.getStateSnapshot()` -- `Worldline.materialize()` - -The public snapshot contract is now stronger: - -- nested `Map` / `Set` mutators throw -- nested register payload objects are frozen -- detached snapshots no longer expose writable nested state through ordinary - caller operations diff --git a/BACKLOG/OG-004-observer-seek-contract.md b/BACKLOG/OG-004-observer-seek-contract.md deleted file mode 100644 index 53c58296..00000000 --- a/BACKLOG/OG-004-observer-seek-contract.md +++ /dev/null @@ -1,34 +0,0 @@ -# OG-004 — Canonical Immutable Observer Seek Contract - -Status: DONE - -Promoted to: `docs/design/worldline-observer-api-phasing.md` - -## Problem - -The preferred observer seek behavior is now clearer, but it is not yet enforced -as a first-class API contract. - -## Why This Matters - -If observer seeking mutates handles in place, the system will reintroduce the -same handle-instability that the read-boundary rewrite is removing. - -## Promotion - -This item was promoted when observer construction and immutable `seek()` -semantics became the next public read-side API slice. 
- -## Outcome - -Phase A landed on 2026-03-27: - -- observers now expose factual `source` metadata -- observers now expose pinned `stateHash` -- `ObserverView.seek()` now returns a new observer rather than mutating the - current one - -See also: - -- `docs/design/worldline-observer-api-phasing.md` -- `docs/archive/retrospectives/2026-03-27-observer-seek-phase-a.md` diff --git a/BACKLOG/OG-005-detached-read-benchmarks.md b/BACKLOG/OG-005-detached-read-benchmarks.md deleted file mode 100644 index 2208955d..00000000 --- a/BACKLOG/OG-005-detached-read-benchmarks.md +++ /dev/null @@ -1,24 +0,0 @@ -# OG-005 — Benchmark Detached Coordinate And Strand Reads - -Status: DONE - -Promoted to: `docs/design/detached-read-benchmarks.md` - -Closed by: - -- `test/unit/benchmark/detachedReadBenchmark.fixture.test.js` -- `test/benchmark/DetachedReadBoundary.benchmark.js` -- `docs/archive/retrospectives/2026-03-27-detached-read-benchmarks.md` - -## Problem - -Detached read handles are safer, but their cost is not yet measured. - -## Why This Matters - -Before adding new caching layers or optimizing around detached reads, we should -know what the coordinate and strand read boundary actually costs. - -## Promotion Trigger - -Promoted when the detached-read performance slice began. diff --git a/BACKLOG/OG-006-read-api-doc-consistency.md b/BACKLOG/OG-006-read-api-doc-consistency.md deleted file mode 100644 index 4643068e..00000000 --- a/BACKLOG/OG-006-read-api-doc-consistency.md +++ /dev/null @@ -1,24 +0,0 @@ -# OG-006 — Remove Remaining Docs And Examples That Imply Caller Retargeting - -Status: DONE - -Promoted to: `docs/design/read-api-doc-consistency.md` - -Closed by: - -- `test/unit/scripts/read-api-doc-consistency.test.js` -- `docs/archive/retrospectives/2026-03-27-read-api-doc-consistency.md` - -## Problem - -Some docs and examples may still imply that `materializeCoordinate()` or -`materializeStrand()` retarget the caller graph instance. 
- -## Why This Matters - -Tests now encode the safer contract. The prose surface should stop teaching the -old semantics. - -## Promotion Trigger - -Promoted when the public read-surface documentation reconciliation pass began. diff --git a/BACKLOG/OG-007-hash-stability-coverage.md b/BACKLOG/OG-007-hash-stability-coverage.md deleted file mode 100644 index 9f8041d3..00000000 --- a/BACKLOG/OG-007-hash-stability-coverage.md +++ /dev/null @@ -1,25 +0,0 @@ -# OG-007 — Expand Hash-Stability Coverage Across Snapshot Flavors - -Status: DONE - -Promoted to: `docs/design/snapshot-hash-stability-coverage.md` - -Closed by: - -- `test/unit/domain/WarpRuntime.snapshotHashStability.test.js` -- `docs/archive/retrospectives/2026-03-27-snapshot-hash-stability-coverage.md` - -## Problem - -The read-boundary slice added detached snapshot behavior, but hash-stability -coverage is still incomplete across receipt-enabled and strand snapshots. - -## Why This Matters - -Hash-stable materialized state is a core requirement for immutable read-side -semantics. - -## Promotion Trigger - -Promoted when the next snapshot-integrity test pass began after detached reads, -runtime renaming, and immutable public snapshots had all landed. diff --git a/BACKLOG/OG-008-retargeting-compatibility.md b/BACKLOG/OG-008-retargeting-compatibility.md deleted file mode 100644 index bc9ca14c..00000000 --- a/BACKLOG/OG-008-retargeting-compatibility.md +++ /dev/null @@ -1,25 +0,0 @@ -# OG-008 — Compatibility And Deprecation Story For Retargeting Reads - -Status: DONE - -Completed in: `15.0.0` - -## Problem - -The public read semantics changed. Callers that depended on retargeting needed -an explicit decision about whether the old surface would linger as an alias or -be removed cleanly. - -## Why This Matters - -Breaking API changes are acceptable here, but they should still be explicit and -traceable. 
- -## Promotion Trigger - -This item resolved as a hard major-version cut: - -- detached read semantics already removed the old retargeting contract -- the runtime noun was renamed from `WarpGraph` to `WarpRuntime` -- no compatibility alias was kept -- the release version moved to `15.0.0` diff --git a/BACKLOG/OG-010-public-api-design-thinking.md b/BACKLOG/OG-010-public-api-design-thinking.md deleted file mode 100644 index adbde7fa..00000000 --- a/BACKLOG/OG-010-public-api-design-thinking.md +++ /dev/null @@ -1,65 +0,0 @@ -# OG-010 — IBM Design Thinking Pass Over Public APIs And README - -Status: DONE - -## Problem - -Multiple higher-layer apps have repeated the same misuse pattern on top of -`git-warp`: - -- materialize too much graph history into app memory -- write app-local graph read logic -- write app-local traversal logic -- treat whole-graph enumeration as a normal product read path - -This is no longer just an application mistake. It is evidence that the -`git-warp` public surface and docs do not teach the right read discipline -strongly enough. - -## Why This Matters - -The substrate now has much better semantics than it had before: - -- pinned read handles -- detached immutable snapshots -- `Worldline` -- `Observer` -- strand read boundaries - -But the public API and README still need a product-design pass so the right path -is easier to discover than the wrong one. - -This cycle must consider two sponsor perspectives equally: - -- sponsor human: an application developer trying to build a real product on - top of `git-warp` -- sponsor agent: a coding agent trying to use `git-warp` without rebuilding a - second graph engine above it - -This cycle must also remain honest to a third tooling/debugger sponsor: - -- sponsor tooling: a TTD or provenance/debugger consumer that needs explicit - replay, provenance, comparison, and multi-lane playback truth - -If the public surface serves one and confuses the others, it is not good -enough. 
- -## Intended Questions For The Cycle - -- Which APIs are inspection/debug APIs versus product hot-path APIs? -- How should the README teach read discipline, not just raw capability? -- What cost-signaling is missing from the current surface? -- What task-shaped read examples should exist for both humans and agents? -- What public read helpers would let higher layers ask questions instead of - rebuilding graph logic locally? -- Which features are primary WARP product value versus core/tooling truth? -- Where should multi-lane playback coordination such as `PlaybackHead` live? - -## Promotion - -Promoted to: - -- [docs/design/public-api-design-thinking.md](../docs/design/public-api-design-thinking.md) - -This item now tracks the active cycle kickoff for the IBM Design Thinking pass -over the `git-warp` public API and README. diff --git a/BACKLOG/OG-012-documentation-corpus-audit.md b/BACKLOG/OG-012-documentation-corpus-audit.md deleted file mode 100644 index 03a8314d..00000000 --- a/BACKLOG/OG-012-documentation-corpus-audit.md +++ /dev/null @@ -1,44 +0,0 @@ -# OG-012 — Audit And Reconcile The Documentation Corpus Before v15 - -Status: DONE - -## Problem - -The repo's documentation corpus has grown organically across multiple release -tranches. - -That left three different doc classes mixed together in the same visible -surface: - -- current user-facing docs -- historical design / milestone / runbook material -- superseded or one-off artifacts that still look "live" because they sit at - the top of `docs/` - -Before `v15.0.0`, the docs set needs to become intentional. - -## Why This Matters - -If the repository does not make it clear which docs are canonical, both humans -and agents will read the wrong thing: - -- app builders will learn outdated nouns or workflows -- agentic consumers will infer the wrong public API surface -- maintainers will keep accreting new docs into an already muddy structure - -This is a release-quality problem, not just housekeeping. 
-## Desired Outcome
-
-- define the canonical documentation set for `v15`
-- separate live docs from archived/historical material
-- remove obvious trash from the repo surface
-- make the docs taxonomy explicit in-repo
-- add executable checks so the corpus does not drift back into a pile
-
-## Promotion
-
-Promoted to:
-
-- [docs/design/documentation-corpus-audit.md](../docs/design/documentation-corpus-audit.md)
-- [docs/design/architecture-and-cli-guide-rewrite.md](../docs/design/architecture-and-cli-guide-rewrite.md)
diff --git a/BACKLOG/OG-014-streaming-content-attachments.md b/BACKLOG/OG-014-streaming-content-attachments.md
deleted file mode 100644
index c8f25bd7..00000000
--- a/BACKLOG/OG-014-streaming-content-attachments.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# OG-014 — Mandatory CAS blob storage with streaming I/O
-
-Status: DONE
-
-Legend: Observer Geometry
-
-Design doc: `docs/design/streaming-cas-blob-storage.md`
-
-## Problem
-
-Content blob attachments in `git-warp` have two structural problems:
-
-### 1. CAS blob storage is opt-in
-
-`attachContent()` and `attachEdgeContent()` accept an optional `blobStorage`
-injection. When callers do not provide it, blobs fall through to raw
-`persistence.writeBlob()` — a single unchunked Git object with no CDC
-deduplication, no encryption support, and no streaming restore path.
-
-This means the substrate's chunking, deduplication, and encryption capabilities
-are present but silently bypassed by default. There is no good reason for a
-content blob to skip CAS. Every blob should be chunked.
-
-### 2. Neither write nor read paths support streaming
-
-**Write path**: `attachContent(nodeId, content)` accepts `Uint8Array | string`.
-The caller must buffer the entire payload in memory before handing it to the
-patch builder. `CasBlobAdapter.store()` then wraps that buffer in
-`Readable.from([buf])` — a synthetic stream from an already-buffered payload.
-
-**Read path**: `getContent(nodeId)` returns `Promise<Uint8Array>`.
The
-full blob is materialized into memory before the caller can process it.
-`CasBlobAdapter.retrieve()` calls `cas.restore()` which buffers internally.
-
-`git-cas` already supports streaming on both sides:
-- `cas.store({ source })` accepts any readable/iterable source
-- `cas.restoreStream()` returns `AsyncIterable<Uint8Array>`
-
-The streaming substrate is there. It is not expressed through the public API.
-
-## Why this matters
-
-WARP graphs can carry attached documents, media, model weights, and other
-payloads that are legitimately large. The API should not force full in-memory
-buffering on either side of the I/O boundary.
-
-- Callers writing large content should be able to pipe a stream in
-- Callers reading large content should be able to consume it incrementally
-- Every blob should get CDC chunking and deduplication as a substrate guarantee
-- The decision between buffered and streaming I/O should belong to the caller
-
-## Current state
-
-As of `v15.0.1`:
-
-- `BlobStoragePort`: `store(content, options) → Promise<string>`,
-  `retrieve(oid) → Promise<Uint8Array>` — both buffered
-- `CasBlobAdapter`: fully implemented CAS adapter with CDC chunking, optional
-  encryption, backward-compat fallback to raw Git blobs — but only buffered I/O
-- `CasBlobAdapter` is internal (not exported from `index.js`)
-- `PatchBuilderV2.attachContent()`: accepts `Uint8Array | string`, uses
-  `blobStorage.store()` if injected, else raw `persistence.writeBlob()`
-- `getContent()` / `getEdgeContent()`: returns `Promise<Uint8Array>`,
-  uses `blobStorage.retrieve()` if injected, else raw `persistence.readBlob()`
-- `WarpApp` and `WarpCore` do not expose content read methods at all
-- `git-cas` streaming (`restoreStream()`) is already used in
-  `CasSeekCacheAdapter` but not in blob reads
-- `InMemoryGraphAdapter` has `writeBlob()`/`readBlob()` for browser/test path
-
-## Desired outcome
-
-1. CAS blob storage is mandatory — no fallback to raw `writeBlob()` for content
-2.
Write path accepts streaming input and pipes through without buffering
-3. Read path returns a stream the caller can consume incrementally
-4. Buffered convenience methods remain available, layered on top of streams
-5. Browser and in-memory paths still work via a conforming adapter
-6. Legacy raw Git blob attachments remain readable for backward compatibility
-
-## Acceptance criteria
-
-1. Every content blob written through `attachContent()` / `attachEdgeContent()`
-   goes through `BlobStoragePort` — no raw `persistence.writeBlob()` fallback.
-2. `attachContent()` / `attachEdgeContent()` accept streaming input
-   (`AsyncIterable<Uint8Array>`, `ReadableStream<Uint8Array>`, `Uint8Array`, `string`).
-3. New `getContentStream()` / `getEdgeContentStream()` return
-   `AsyncIterable<Uint8Array>` for incremental consumption.
-4. Existing `getContent()` / `getEdgeContent()` remain as buffered convenience,
-   implemented on top of the stream primitive.
-5. `BlobStoragePort` grows `storeStream()` and `retrieveStream()` methods.
-6. `CasBlobAdapter` implements streaming via `git-cas` natively.
-7. An `InMemoryBlobStorageAdapter` implements the port contract for browser and
-   test paths.
-8. Legacy raw Git blob attachments remain readable through backward-compat
-   fallback in `CasBlobAdapter.retrieveStream()`.
-9. Content stream methods are exposed on `WarpApp` and `WarpCore`.
-
-## Non-goals
-
-- No automatic migration of existing raw Git blobs to CAS format
-- No silent breaking change to existing `getContent()` / `getEdgeContent()`
-  return types
-- No attempt to solve whole-state out-of-core replay (that is OG-013)
-- No encryption-by-default (encryption remains an opt-in CAS capability)
-
-## Notes
-
-This item supersedes the original OG-014 scope, which covered only streaming
-reads. The expanded scope now includes mandatory CAS and streaming writes.
- -Related items: -- `OG-013`: out-of-core materialization and streaming reads (broader, separate) -- `B160`: blob attachments via CAS (done, but opt-in — this item makes it - mandatory) -- `B163`: streaming restore for seek cache (done, pattern to follow for blobs) diff --git a/BACKLOG/OG-015-jsr-documentation-quality.md b/BACKLOG/OG-015-jsr-documentation-quality.md deleted file mode 100644 index 0e5533fd..00000000 --- a/BACKLOG/OG-015-jsr-documentation-quality.md +++ /dev/null @@ -1,104 +0,0 @@ -# OG-015 — Raise JSR documentation quality score - -Status: DONE - -Legend: Observer Geometry - -## Problem - -`v15.0.1` fixed the release-surface problems that made npm and JSR publish too -much internal material, and it fixed JSR's `No slow types are used` warning. - -But JSR still reports that only about 67% of exported symbols are documented. -That means the package is now publishable and structurally cleaner, while still -leaving too much of the public surface under-documented in IDE hovers and JSR's -generated API docs. - -This is a real quality gap for a package whose public surface is now explicitly -split into: - -- `WarpApp` -- `WarpCore` -- `Worldline` -- `Aperture` -- `Observer` -- `Strand` - -If those nouns and their surrounding helpers are not documented consistently, -the docs pipeline and the type surface drift apart again. - -## Why this matters - -The repo now has a much stronger user-facing documentation pipeline, but JSR -and editor hovers are part of the real product surface too. - -Improving symbol docs would: - -- increase discoverability for builders reading the API from their editor -- make JSR-generated reference pages more useful -- reduce the need to jump from code completion into source files -- keep the public noun cuts (`WarpApp`, `WarpCore`, `Strand`, `Aperture`, etc.) 
- legible at the type level -- reinforce the builder-first documentation posture established in `v15` - -## Current state - -As of `v15.0.1`: - -- JSR dry-run passes -- the slow-type warning is resolved via self-type bindings on JavaScript - entrypoints -- the package README and module docs are present -- many exported symbols still lack symbol-level doc comments -- some public type descriptions are accurate but too terse to be useful as - standalone hover docs - -## Desired outcome - -Raise the documentation quality of the exported public surface until the -generated reference feels intentional rather than incidental. - -Likely shape: - -- audit the exported symbols in `index.d.ts` -- add or improve doc comments for major public classes, methods, and types -- prioritize the main builder and tooling entrypoints first - - `WarpApp` - - `WarpCore` - - `Worldline` - - `Observer` - - `Aperture` - - `Strand` - - writer / patch APIs - - query / traversal result shapes -- tighten module docs on secondary entrypoints where needed -- re-run `jsr publish --dry-run` until documentation coverage crosses the JSR - threshold and the generated output reads cleanly - -## Acceptance criteria - -1. JSR documentation coverage rises above the current failing threshold. -2. Major public symbols have meaningful hover docs, not placeholder prose. -3. Public docs and type-surface docs use the same nouns and conceptual model. -4. Secondary entrypoints keep valid module docs. -5. New doc comments stay builder-first and do not reintroduce paper-heavy - framing into the main API reference surface. - -## Non-goals - -- rewriting the entire docs site or public guide corpus again -- documenting private or internal-only helpers as if they were public API -- treating JSR score-chasing as more important than accurate public semantics - -## Notes - -This item is specifically about public type-surface and JSR documentation -quality. 
- -It is related to, but separate from: - -- `OG-011-public-api-catalog-and-playground.md` -- `OG-012-documentation-corpus-audit.md` - -Those items are about broader documentation architecture. -This item is about the publish-time API documentation quality bar. diff --git a/BACKLOG/OG-016-retrospective-archive-cleanup.md b/BACKLOG/OG-016-retrospective-archive-cleanup.md deleted file mode 100644 index 593b706f..00000000 --- a/BACKLOG/OG-016-retrospective-archive-cleanup.md +++ /dev/null @@ -1,40 +0,0 @@ -# OG-016 — Archive retrospective clutter - -Status: DONE - -Legend: Observer Geometry - -## Problem - -The `docs/` tree contains dozens of retrospective files -(`docs/archive/retrospectives/2026-03-28-...`, design doc retros, etc.) that are -valuable for the team but clutter the documentation surface visible to -external contributors and evaluators. - -The editor's report (2026-03-29) flagged this as the primary drag on -document cohesion (8/10 → could be 10/10). - -## Desired outcome - -Move retrospective and historical audit files into a dedicated archive -path so the `docs/` tree shows only active, forward-looking documentation. - -Likely shape: - -- `docs/archive/retrospectives/` for retrospective files -- `docs/archive/audits/` for historical audit transcripts (already partially - exists) -- Update any cross-references that point into moved paths -- Keep design doc retros (`.retro.md`) co-located with their design docs — - those are part of the active design record, not archive clutter - -## Acceptance criteria - -1. `docs/` top-level listing is clean and forward-looking. -2. No broken cross-references after the move. -3. Historical files remain reachable via archive path. - -## Non-goals - -- No content edits to the retrospective files themselves. -- No deletion of any retrospective — they all stay in the repo. 
diff --git a/BACKLOG/README.md b/BACKLOG/README.md deleted file mode 100644 index b3aec82e..00000000 --- a/BACKLOG/README.md +++ /dev/null @@ -1,37 +0,0 @@ -# BACKLOG — Observer Geometry - -Last updated: 2026-03-29 - -This directory holds promotable pre-design items for the current Observer -Geometry tranche. - -Workflow: - -1. capture the slice here -2. promote it into `docs/design/` when selected -3. write tests as the executable spec -4. implement -5. add a retrospective - -## Active Items - -| Status | ID | Title | File | -| --- | --- | --- | --- | -| DONE | OG-001 | First-class `Worldline` API | [OG-001-worldline-api.md](OG-001-worldline-api.md) | -| DONE | OG-002 | Split mutable session `WarpRuntime` from immutable snapshot noun | [OG-002-warpgraph-role-split.md](OG-002-warpgraph-role-split.md) | -| DONE | OG-003 | Deepen public snapshot immutability | [OG-003-snapshot-immutability.md](OG-003-snapshot-immutability.md) | -| DONE | OG-004 | Canonical immutable observer seek contract | [OG-004-observer-seek-contract.md](OG-004-observer-seek-contract.md) | -| DONE | OG-005 | Benchmark detached coordinate and strand reads | [OG-005-detached-read-benchmarks.md](OG-005-detached-read-benchmarks.md) | -| DONE | OG-006 | Remove remaining docs/examples that imply caller retargeting | [OG-006-read-api-doc-consistency.md](OG-006-read-api-doc-consistency.md) | -| DONE | OG-007 | Expand hash-stability coverage across snapshot flavors | [OG-007-hash-stability-coverage.md](OG-007-hash-stability-coverage.md) | -| DONE | OG-008 | Make retargeting compatibility a hard major-version cut | [OG-008-retargeting-compatibility.md](OG-008-retargeting-compatibility.md) | -| QUEUED | OG-009 | Align playback-head and TTD consumers after read nouns stabilize | [OG-009-playback-head-alignment.md](OG-009-playback-head-alignment.md) | -| DONE | OG-010 | IBM Design Thinking pass over public APIs and README | [OG-010-public-api-design-thinking.md](OG-010-public-api-design-thinking.md) | -| 
QUEUED | OG-011 | Publish a public API catalog and browser documentation playground | [OG-011-public-api-catalog-and-playground.md](OG-011-public-api-catalog-and-playground.md) | -| DONE | OG-012 | Audit and reconcile the documentation corpus before v15 | [OG-012-documentation-corpus-audit.md](OG-012-documentation-corpus-audit.md) | -| QUEUED | OG-013 | Design out-of-core materialization and streaming reads | [OG-013-out-of-core-materialization-and-streaming-reads.md](OG-013-out-of-core-materialization-and-streaming-reads.md) | -| DONE | OG-014 | Mandatory CAS blob storage with streaming I/O | [OG-014-streaming-content-attachments.md](OG-014-streaming-content-attachments.md) | -| DONE | OG-015 | Raise JSR documentation quality score | [OG-015-jsr-documentation-quality.md](OG-015-jsr-documentation-quality.md) | -| DONE | OG-016 | Archive retrospective clutter | [OG-016-retrospective-archive-cleanup.md](OG-016-retrospective-archive-cleanup.md) | -| QUEUED | OG-017 | Break up the `index.d.ts` monolith | [OG-017-modular-type-declarations.md](OG-017-modular-type-declarations.md) | -| QUEUED | OG-018 | Browser guide and storage adapter documentation | [OG-018-browser-guide.md](OG-018-browser-guide.md) | diff --git a/CHANGELOG.md b/CHANGELOG.md index 43ac9b7f..a1d3b047 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,11 +9,41 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Changed +- **The Method** — introduced `METHOD.md` as the development process framework. Filesystem-native backlog (`docs/method/backlog/`) with lane directories (`inbox/`, `asap/`, `up-next/`, `cool-ideas/`, `bad-code/`). Legend-prefixed filenames (`PROTO_`, `TRUST_`, `VIZ_`, `TUI_`, `DX_`, `PERF_`). Sequential cycle numbering (`docs/design//`). Dual-audience design docs (sponsor human + sponsor agent). Replaced B-number system entirely. +- **Backlog migration** — all 49 B-number and OG items migrated from `BACKLOG/` to `docs/method/backlog/` lanes. 
Tech debt journal (`.claude/bad_code.md`) split into 10 individual files in `bad-code/`. Cool ideas journal split into 13 individual files in `cool-ideas/`. `docs/release.md` moved to `docs/method/release.md`. `BACKLOG/` directory removed. + +- **Zero-error TypeScript campaign complete** — eliminated all 1,707 `tsc --noEmit` errors across 271 files. Mechanical TS4111 bracket-access sweep (614), null guards for `noUncheckedIndexedAccess`, conditional spreads for `exactOptionalPropertyTypes`, unused variable removal. All 8 pre-push IRONCLAD gates now pass. +- **JoinReducer OpStrategy registry** — replaced five triplicated switch statements over 8 canonical op types with a frozen `Map` registry. Each strategy defines `mutate`, `outcome`, `snapshot`, `accumulate`, `validate`. Adding a new op type without all five methods is a hard error at module load time. Cross-path equivalence tests verify `applyFast`, `applyWithReceipt`, and `applyWithDiff` produce identical CRDT state. +- **ESLint `dot-notation` restored** — re-enabled via `@typescript-eslint/dot-notation` which respects `noPropertyAccessFromIndexSignature`. The type-aware variant correctly allows bracket access on index-signature types while enforcing dot notation elsewhere. +- `EffectSinkPort.deliver()` return type widened to `DeliveryObservation | DeliveryObservation[]` to match `MultiplexSink` behavior. - **Zero-error lint campaign complete** — eliminated all 1,876 ESLint errors across ~180 source files. Every raw `Error` replaced with domain error classes. Every port stub uses `WarpError` with `E_NOT_IMPLEMENTED`. `MessageCodecInternal` type-poisoning from `@git-stunts/trailer-codec` fixed at root via `unknown` intermediary casts. Errors barrel (`src/domain/errors/index.js`) now exports all 27 error classes. - **Lint ratchet enforcement** — `npm run lint:ratchet` asserts zero ESLint errors codebase-wide. Added as CI Gate 4b. 
Pre-push hook (Gate 4) already blocked non-zero exits; ratchet makes the invariant explicit and auditable. - **Git hooks wired** — `core.hooksPath` set to `scripts/hooks/` on `npm install`. Pre-commit lints staged JS files. Pre-push runs full 8-gate IRONCLAD firewall. - -### Added +- **OpStrategy.receiptName** — each OpStrategy entry now carries its own TickReceipt-compatible operation name, eliminating the redundant `RECEIPT_OP_TYPE` lookup tables in JoinReducer and ConflictAnalyzerService. +- **SyncProtocol uses SyncError** — `E_SYNC_DIVERGENCE` now throws `SyncError` instead of a raw `Error` with manually attached code property. +- **AuditReceiptService uses AuditError** — all 16 raw `Error` throws replaced with typed `AuditError` carrying serializable context and machine-readable error codes (`E_AUDIT_INVALID`, `E_AUDIT_CAS_FAILED`, `E_AUDIT_DEGRADED`). +- **CLI import.meta.url resolution** — replaced `__dirname` polyfill pattern in CLI with idiomatic `fileURLToPath(new URL('../..', import.meta.url))` for resilient package root resolution. + +- **WarpRuntime god class decomposition (NO_DOGS_NO_MASTERS)** — extracted 6 of 11 mixin method groups into independent service controllers, following the SyncController precedent. Each controller receives the runtime host via constructor injection and delegates through `defineProperty` loops on the prototype. Public API surface unchanged — all 100+ methods remain on `WarpRuntime.prototype`. The remaining 4 mixins (checkpoint, patch, materialize, materializeAdvanced) form the core mutation kernel and are deferred to a future cycle. 
+ - **`StrandController`** (182 LOC) — strand lifecycle + conflict analysis, cached StrandService instance + - **`ComparisonController`** (1,155 LOC) — coordinate/strand comparison, transfer planning + - **`SubscriptionController`** (244 LOC) — subscribe, watch, notification dispatch + - **`ProvenanceController`** (242 LOC) — patch lookups, backward causal cone, slice materialization + - **`ForkController`** (274 LOC) — fork creation, wormhole compression, backfill rejection + - **`QueryController`** (964 LOC) — all read queries, observer/worldline factories, content access +- **`AuditReceipt` promoted to class** — replaced `@typedef {Object}` with a real JavaScript class. Constructor validates and freezes. +- **WarpApp/WarpCore content methods** — replaced direct function imports from `query.methods.js` with `callInternalRuntimeMethod()` delegation, which correctly resolves dynamically wired prototype methods. +- **11 typedef-to-class promotions (NO_DOGS_NO_MASTERS)** — replaced phantom `@typedef {Object}` shapes with real JavaScript classes: `WarpStateV5`, `Dot`, `EventId`, `EffectEmission`, `EffectCoordinate`, `DeliveryObservation`, `TickReceipt`, `PatchDiff`, `LWWRegister`, `BTR`, `TrustState`. Each class has a constructor, validates inputs where applicable, and supports `instanceof`. Factory functions retained for backward compatibility. +- **CBOR codec canonical key sorting for class instances** — both `CborCodec` and `defaultCodec` now sort keys for all object types (not just plain objects), using `instanceof` checks to skip built-in CBOR-native types (Uint8Array, Date, Set, Map, RegExp). This decouples class field declaration order from wire format, matching Echo's Rust canonical encoder behavior. +- **Comparison pipeline class hierarchy** — `NormalizedSelector` is now a base class with 4 subclasses (`LiveSelector`, `CoordinateSelector`, `StrandSelector`, `StrandBaseSelector`), each implementing `resolve()` directly. Eliminates kind-switch dispatch. 
`ResolvedComparisonSide` promoted to class. Live frontier captured once for consistency across both sides. +- **OpOutcome subclass hierarchy** — `OpOutcomeResult` base class with `OpApplied`, `OpSuperseded`, `OpRedundant` subclasses. `OpSuperseded` carries the winning `EventId` as a structured field instead of a formatted string. `VerificationResult` promoted to class. +- **ForkController hardening** — fork ref creation rolls back on `WarpRuntime.open` failure; `_isAncestor` uses visited-Set cycle detection instead of false-positive MAX_WALK counter; backfill rejection throws typed `ForkError` with `E_FORK_BACKFILL_REJECTED` and `E_FORK_WRITER_DIVERGED` codes. + +### Added + +- **`AuditError`** — domain error class for audit receipt validation and persistence failures. Exported from package root with four static error codes. +- **`WarpStateV5` class** — core CRDT materialized state promoted from typedef to its own module (`src/domain/services/WarpStateV5.js`). Provides `static empty()` factory and `clone()` method. Re-exported from `JoinReducer.js` for backward compatibility. +- **`NO_DOGS_NO_MASTERS` legend** — backlog legend for god object decomposition and typedef-to-class liberation. Code: `NDNM_`. - **Effect emission & delivery observation substrate slice** — new receipt families for outbound effects and their delivery lifecycle. `EffectEmission` records that the system produced an outbound effect candidate at a causal coordinate. `DeliveryObservation` records how a sink handled that emission (delivered, suppressed, failed, skipped). `ExternalizationPolicy` provides execution context (live/replay/inspect) that shapes delivery behavior. Preset lenses `LIVE_LENS`, `REPLAY_LENS`, and `INSPECT_LENS` cover common modes. - **`EffectSinkPort`** — abstract port for effect delivery sinks, following the hexagonal architecture pattern. 
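The sink port described in the entry above can be sketched in the repo's own system-style JavaScript. This is a hypothetical illustration, not the shipped implementation: only the names `EffectSinkPort` and `DeliveryObservation`, the four delivery statuses, and the single-or-array `deliver()` return shape come from the changelog entries; every other identifier is assumed.

```javascript
// Illustrative sketch of the port pattern. DeliveryObservation records how a
// sink handled an emission: 'delivered' | 'suppressed' | 'failed' | 'skipped'.
class DeliveryObservation {
  constructor(status, detail = null) {
    this.status = status;
    this.detail = detail;
    Object.freeze(this); // value object: immutable once constructed
  }
}

// Abstract port. deliver() may return one observation or an array of them.
class EffectSinkPort {
  /** @returns {DeliveryObservation | DeliveryObservation[]} */
  deliver(_emission) {
    throw new Error('E_NOT_IMPLEMENTED'); // stubs use a typed error in-repo
  }
}

// A concrete sink that records emissions in memory (handy in tests/replay).
class InMemorySink extends EffectSinkPort {
  constructor() { super(); this.log = []; }
  deliver(emission) {
    this.log.push(emission);
    return new DeliveryObservation('delivered');
  }
}

// A multiplexing sink fans out to children — hence the widened array return.
class FanOutSink extends EffectSinkPort {
  constructor(children) { super(); this.children = children; }
  deliver(emission) {
    // flatMap flattens child arrays and keeps single observations as-is
    return this.children.flatMap((child) => child.deliver(emission));
  }
}
```

Callers that accept the port therefore have to normalize: `const list = [].concat(sink.deliver(emission))` handles both return shapes.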
diff --git a/METHOD.md b/METHOD.md new file mode 100644 index 00000000..3dfb33d1 --- /dev/null +++ b/METHOD.md @@ -0,0 +1,227 @@ +# METHOD + +A backlog, a loop, and honest bookkeeping. + +## Principles + +The agent and the human sit at the same table. They see different +things. Both are named in every design. Both must agree before work +ships. Default to building the agent surface first — it is the +foundation the human experience stands on. If the work is +human-first exploratory design, say so in the design doc. + +Everything traces to a playback question. If you cannot say which +question your work answers, you are drifting. Stop. Reconnect to +the design, or change it. + +Tests are the executable spec. Design names the hill and the playback +questions. Tests prove the answers. No ceremonial layer between +intent and proof. + +The filesystem is the database. A directory is a decision context. A +filename is an identity. Moving a file is a decision. `ls` is the +query. + +Process should be calm. No sprints. No velocity. No burndown. A +backlog tiered by judgment, and a loop for doing it well. + +## Structure + +```text +docs/ + method/ + backlog/ + inbox/ raw ideas, anyone, anytime + asap/ do this now + up-next/ do this soon + cool-ideas/ experiments, wild thoughts + bad-code/ tech debt + *.md shaped work not in a named lane + legends/ named domains + retro/<NNNN>-<slug>/ cycle retrospectives + graveyard/ rejected ideas + process.md how cycles run + release.md how releases work + design/ + <NNNN>-<slug>/ cycle design docs + *.md living documents +``` + +Signpost documents live at root or one level into `docs/`. They use +`ALL_CAPS.md`. Deeper than that, they are not signposts. + +## Backlog + +Markdown files. Each describes work worth doing. The filesystem is +the index. + +### Inbox + +Anyone — human or agent — drops ideas in at any time. A sentence is +enough. No legend, no scope, no ceremony. Capture it. Keep moving. +The inbox is processed during maintenance.
+ +### Lanes + +- **`inbox/`** — unprocessed +- **`asap/`** — pull into a cycle soon +- **`up-next/`** — next in line +- **`cool-ideas/`** — not commitments +- **`bad-code/`** — it works, but it bothers you + +Anything else sits in the backlog root. The backlog root holds shaped +work that matters, but does not currently belong in a named lane. + +### Naming + +Legend prefix if applicable. No numeric IDs. + +```text +VIZ_braille-rendering.md +PROTO_strand-lifecycle.md +debt-trailer-codec-dts.md +``` + +### Promoting + +Pulled into a cycle, a backlog item becomes a design doc: + +```text +backlog/asap/PROTO_strand-lifecycle.md + → design/<NNNN>-<slug>/strand-lifecycle.md +``` + +The backlog file is removed. + +### Commitment + +Pull it and you own it. It does not go back. + +- **Finish** — hill met +- **Pivot** — end early, write the retro. Remaining work re-enters + the backlog as a new item + +### Maintenance + +End of cycle: + +- Process inbox. Promote, flesh out, or bury. +- Re-prioritize. What you learned changes what matters. +- Clean up. Merge duplicates, kill the dead. + +Do not reorganize mid-cycle. + +### Cycle types + +Same loop regardless: + +- **Feature** — design, test, build, ship +- **Design** — the deliverable is docs, not code +- **Debt** — pull from `bad-code/`. The hill is "this no longer + bothers us" + +## Legends + +A named domain that spans many cycles. Each legend describes what it +covers, who cares, what success looks like, and how you know. + +Legends do not start or finish. They are reference frames. + +A legend code (`VIZ`, `PROTO`, `TUI`) prefixes backlog filenames. + +## Cycles + +A unit of shipped work. Design, implementation, retrospective. +Numbered sequentially. + +Cycle directories use `<NNNN>-<slug>/`, for example +`0010-strand-speculation/`. + +### The loop + +0. **Pull** — choose. Move it. Committed. + +1. **Design** — write a design doc in `docs/design/<NNNN>-<slug>/`. + - Sponsor human + - Sponsor agent + - Hill + - Playback questions — yes/no, both perspectives.
Write them + first. + - Non-goals + +2. **RED** — write failing tests. Playback questions become specs. + Default to agent surface first. + +3. **GREEN** — make them pass. + +4. **Playback** — produce a witness. The agent answers agent + questions. The human answers user questions. Write it down. The + witness is the concrete artifact — test output, transcript, + screenshot, recording — that shows both answers. No clear yes + means no. + +5. **PR → main** — review until merge. + +6. **Close** — merge. Retro in `docs/method/retro/<NNNN>-<slug>/`. + - Drift check (mandatory). Undocumented drift is the only + failure. + - New debt to `bad-code/`. + - Cool ideas to `cool-ideas/`. + - Backlog maintenance. + + Releases happen when externally meaningful behavior changes. + Update CHANGELOG when externally visible behavior changed. + Update README when usage, interfaces, or operator understanding + changed. + +### Outcomes + +- **Hill met** — merge, close +- **Partial** — merge what is honest. Retro explains the gap +- **Not met** — cycle still concludes. Write the retro + +A failed cycle with a good retro beats a successful one with no +learnings. + +Every cycle ends with a retro. Success is not required. + +## Graveyard + +Rejected work moves to `docs/method/graveyard/` with a note. The +graveyard prevents re-proposing without context. + +## Flow + +```text +idea + → inbox/ + → triage during maintenance + → graveyard/ + → cool-ideas/ + → backlog root + → up-next/ + → asap/ + → design/<NNNN>-<slug>/ (committed) + → RED + → GREEN + → playback (witness) + → retro/<NNNN>-<slug>/ + → release (when meaningful) +``` + +## What this system does not have + +No milestones. No velocity. No ticket numbers. + +The backlog is tiered by lane. Choice within a lane is judgment at +pull time. That is enough.
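Because the filesystem is the database, the flow above maps one-to-one onto plain shell commands. A minimal sketch in a scratch directory — the lane paths come from this document, while the item name and cycle number are illustrative:

```shell
set -eu
cd "$(mktemp -d)"

# Lay out the backlog lanes and the cycle roots.
mkdir -p docs/method/backlog/inbox docs/method/backlog/asap \
         docs/method/backlog/up-next docs/method/backlog/cool-ideas \
         docs/method/backlog/bad-code
mkdir -p docs/design docs/method/retro

# Capture: a sentence in inbox/ is enough.
echo 'Speculate strand heads before merge.' \
  > docs/method/backlog/inbox/PROTO_strand-speculation.md

# Triage during maintenance: move the file toward commitment.
mv docs/method/backlog/inbox/PROTO_strand-speculation.md \
   docs/method/backlog/asap/

# Pull: moving it into a cycle directory is the commitment.
mkdir -p docs/design/0010-strand-speculation
mv docs/method/backlog/asap/PROTO_strand-speculation.md \
   docs/design/0010-strand-speculation/strand-speculation.md

# ls is the query: the lane no longer lists the item; the cycle does.
ls docs/method/backlog/asap/
ls docs/design/0010-strand-speculation/
```

No tracker involved: `mv` is the state transition, and the directory listing is the report.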
+ +## Naming + +| Convention | Example | When | +|---|---|---| +| `ALL_CAPS.md` | `VISION.md` | Signpost — root or `docs/` | +| `lowercase.md` | `doctrine.md` | Everything else | +| `<LEGEND>_<slug>.md` | `VIZ_braille.md` | Backlog with legend | +| `<slug>.md` | `debt-trailer-codec.md` | Backlog without legend | +| `<NNNN>-<slug>/` | `0010-strand-speculation/` | Cycle directory | diff --git a/bin/cli/infrastructure.js b/bin/cli/infrastructure.js index 6f03259b..13fbc472 100644 --- a/bin/cli/infrastructure.js +++ b/bin/cli/infrastructure.js @@ -407,13 +407,13 @@ export function parseArgs(argv) { /** @type {CliOptions} */ const options = { - repo: path.resolve(typeof values['repo'] === 'string' ? values['repo'] : process.cwd()), - json: Boolean(values['json']), - ndjson: Boolean(values['ndjson']), - view: typeof values['view'] === 'string' ? values['view'] : null, - graph: typeof values['graph'] === 'string' ? values['graph'] : null, - writer: typeof values['writer'] === 'string' ? values['writer'] : 'cli', - help: Boolean(values['help']), + repo: path.resolve(typeof values.repo === 'string' ? values.repo : process.cwd()), + json: Boolean(values.json), + ndjson: Boolean(values.ndjson), + view: typeof values.view === 'string' ? values.view : null, + graph: typeof values.graph === 'string' ? values.graph : null, + writer: typeof values.writer === 'string' ?
values.writer : 'cli', + help: Boolean(values.help), }; return { options, command, commandArgs }; diff --git a/bin/cli/shared.js b/bin/cli/shared.js index 3518561f..d9a2c35d 100644 --- a/bin/cli/shared.js +++ b/bin/cli/shared.js @@ -1,6 +1,7 @@ import fs from 'node:fs'; import path from 'node:path'; import process from 'node:process'; +import { fileURLToPath } from 'node:url'; import readline from 'node:readline'; import { execFileSync } from 'node:child_process'; import { textEncode } from '../../src/domain/utils/bytes.js'; @@ -202,10 +203,9 @@ export async function readCheckpointDate(persistence, checkpointSha) { * @returns {import('../../src/domain/services/HookInstaller.js').HookInstaller} */ export function createHookInstaller() { - const __filename = new URL(import.meta.url).pathname; - const __dirname = path.dirname(__filename); - const templateDir = path.resolve(__dirname, '..', '..', 'scripts', 'hooks'); - const rawJson = fs.readFileSync(path.resolve(__dirname, '..', '..', 'package.json'), 'utf8'); + const packageRoot = fileURLToPath(new URL('../..', import.meta.url)); + const templateDir = path.join(packageRoot, 'scripts', 'hooks'); + const rawJson = fs.readFileSync(path.join(packageRoot, 'package.json'), 'utf8'); const version = readPackageVersion(rawJson); return new HookInstaller({ fs: /** @type {import('../../src/domain/services/HookInstaller.js').FsAdapter} */ (/** @type {unknown} */ (fs)), diff --git a/bin/warp-graph.js b/bin/warp-graph.js index 7b1a99eb..22d3da64 100755 --- a/bin/warp-graph.js +++ b/bin/warp-graph.js @@ -76,6 +76,7 @@ async function main() { // Long-running commands may return a `close` function. // Wait for SIGINT/SIGTERM instead of exiting immediately. const close = result !== null && result !== undefined && typeof result === 'object' && 'close' in /** @type {Record} */ (result) + // eslint-disable-next-line @typescript-eslint/dot-notation -- Record requires bracket access (TS4111) ? 
/** @type {() => Promise} */ (/** @type {Record} */ (result)['close']) : null; diff --git a/contracts/type-surface.m8.json b/contracts/type-surface.m8.json index 21b3a4c8..95bd66b5 100644 --- a/contracts/type-surface.m8.json +++ b/contracts/type-surface.m8.json @@ -3,6 +3,9 @@ "$comment": "M8 IRONCLAD type surface manifest — explicit runtime exports plus type-only declarations for index.d.ts validation", "version": 1, "exports": { + "AuditError": { + "kind": "class" + }, "BisectService": { "kind": "class" }, diff --git a/docs/ADVANCED_GUIDE.md b/docs/ADVANCED_GUIDE.md index 3f6499d5..01b7f03f 100644 --- a/docs/ADVANCED_GUIDE.md +++ b/docs/ADVANCED_GUIDE.md @@ -206,8 +206,8 @@ That is not a law of physics. It is a good operating default until real measurem Current design backlog: -- [OG-013 out-of-core materialization and streaming reads](../BACKLOG/OG-013-out-of-core-materialization-and-streaming-reads.md) -- [OG-014 streaming content attachments](../BACKLOG/OG-014-streaming-content-attachments.md) +- [Out-of-core materialization](method/backlog/PERF_out-of-core-materialization.md) +- [Streaming graph traversal](method/backlog/cool-ideas/PERF_streaming-graph-traversal.md) ## Where next diff --git a/docs/README.md b/docs/README.md index 42e3d207..a5a1cf0a 100644 --- a/docs/README.md +++ b/docs/README.md @@ -42,7 +42,7 @@ when you need that level of detail. System structure, public/core boundaries, and internal layering. - [Roadmap](ROADMAP.md) Current committed release and milestone inventory. -- [Release Guide](release.md) +- [Release Guide](method/release.md) Release and preflight process. - [Trust Migration](trust/TRUST_MIGRATION.md) Migration path for signed trust evidence. diff --git a/docs/ROADMAP.md b/docs/ROADMAP.md index 8d5e30ed..b0784894 100644 --- a/docs/ROADMAP.md +++ b/docs/ROADMAP.md @@ -1,5 +1,9 @@ # ROADMAP — @git-stunts/git-warp +> **MIGRATED:** All incomplete items have been migrated to `docs/method/backlog/`. 
+> See [METHOD.md](/METHOD.md) for the current process. +> Completed items remain in `docs/ROADMAP/COMPLETED.md`. This file is kept for reference only. + > **Current release on `main`:** v16.0.0 > **Next intended release:** v16.0.1 > **Last reconciled:** 2026-03-29 (v16.0.0 release. OG-014 streaming CAS blob storage, OG-015 JSR docs, deprecated TraversalService and createWriter removed.) @@ -203,6 +207,17 @@ P1 is complete on `v15`: B36 and B37 landed as the shared test-foundation pass, | B19 | ✅ **CANONICAL SERIALIZATION PROPERTY TESTS** — Seeded `fast-check` coverage now verifies `canonicalStringify()` idempotency and determinism. | S | | B22 | ✅ **CANONICAL PARSE DETERMINISM TEST** — Repeated `TrustRecordSchema.parse()` canonicalization is now property-tested for stable output. | S | +### P1b — TSC Zero Campaign Drift Audit ⚠️ HIGH PRIORITY + +The TSC zero campaign (PR #73) eliminated 1,707 type errors but introduced process drift that must be audited before the next release. + +| ID | Item | Effort | +| ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | +| B171 | **TSC CAMPAIGN AGENT-AUTHORED CODE AUDIT** — 27 files were merged via `checkout --theirs` during worktree conflict resolution without line-by-line review. Tests pass, but test coverage does not guarantee absence of subtle semantic drift (e.g. changed fallback values, widened types, reordered logic). Audit every agent-authored file diff against the pre-campaign baseline. Revert anything suspicious. 
| L | +| B172 | **RESTORE `dot-notation` VIA `@typescript-eslint/dot-notation`** — ESLint `dot-notation` was disabled globally to resolve conflict with `noPropertyAccessFromIndexSignature`. The proper fix is switching to `@typescript-eslint/dot-notation` which respects the tsconfig flag. This restores lint coverage for actual dot-notation misuse while allowing bracket access on index signatures. | S | +| B173 | **EFFECTSINKPORT BREAKING CHANGE HYGIENE** — `EffectSinkPort.deliver()` return type was widened from `DeliveryObservation` to `DeliveryObservation \| DeliveryObservation[]` in `index.d.ts`. This is a breaking API surface change that shipped without a `BREAKING CHANGE` commit footer. Assess downstream impact and decide: (a) revert the widening and fix MultiplexSink to unwrap, or (b) accept it and document as a breaking change for the next major version. | S | +| B174 | **`@git-stunts/trailer-codec` TYPE DECLARATIONS** — `getCodec()` in `MessageCodecInternal.js` returns an untyped `TrailerCodec`, forcing 6+ downstream files to cast through `unknown` intermediary. Root fix: add `index.d.ts` to the `@git-stunts/trailer-codec` package upstream. | M | + ### P2 — CI & Tooling (one batch PR) `B83`, `B85`, `B57`, `B86`, `B87`, and `B168` are now merged on `main`. PR #69 also landed the issue-45 content metadata API and closed the last open GitHub issue. The repo now runs both markdownlint and the Markdown JS/TS code-sample linter in the CI fast gate and the local `scripts/hooks/pre-push` firewall, and the hook's gate labels/quick-mode messaging now have dedicated regression coverage. The tracked backlog now stands at 26 standalone items after adding the native-vs-WASM roaring benchmark slice, and remaining P2 work still starts at B88. B123 is still the largest item and may need to split out if the PR gets too big. 
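The B172 remediation described above amounts to a small lint-config swap. A hypothetical ESLint flat-config fragment — the two rule names are the real `eslint` / `typescript-eslint` identifiers, while the file glob and option choice are illustrative assumptions:

```javascript
// Sketch of the B172 fix: disable the base rule and let the type-aware
// variant enforce dot notation, because only @typescript-eslint's version
// can consult tsconfig's noPropertyAccessFromIndexSignature flag.
export default [
  {
    files: ['**/*.js'],
    rules: {
      // The base rule cannot see type information, so it fights the
      // TS4111 bracket-access fixes.
      'dot-notation': 'off',
      // Type-aware variant: permits bracket access on index-signature
      // types while still flagging obj['literal'] everywhere else.
      '@typescript-eslint/dot-notation': [
        'error',
        { allowIndexSignaturePropertyAccess: true },
      ],
    },
  },
];
```

With this in place, the per-site `eslint-disable-next-line @typescript-eslint/dot-notation` escapes (as in `bin/warp-graph.js`) are only needed where a cast to `Record` forces bracket access outside an index-signature type.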
@@ -397,7 +412,7 @@ B158 (P7) ──→ B159 (P7) CDC seek cache | **Milestone (M12)** | 18 | B66, B67, B70, B73, B75, B105–B115, B117, B118 | | **Milestone (M13)** | 1 | B116 (internal: DONE; wire-format: DEFERRED) | | **Milestone (M14)** | 16 | B130–B145 | -| **Standalone** | 26 | B12, B28, B34–B35, B43, B53, B54, B76, B79, B88, B96, B98, B102–B104, B119, B123, B127–B129, B147, B152, B155–B156, B169–B170 | +| **Standalone** | 30 | B12, B28, B34–B35, B43, B53, B54, B76, B79, B88, B96, B98, B102–B104, B119, B123, B127–B129, B147, B152, B155–B156, B169–B174 | | **Standalone (done)** | 62 | B19, B22, B26, B36–B37, B44, B46, B47, B48–B52, B55, B57, B71, B72, B77, B78, B80–B87, B89–B95, B97, B99–B100, B120–B122, B124, B125, B126, B146, B148–B151, B153, B154, B157–B168 | | **Deferred** | 7 | B4, B7, B16, B20, B21, B27, B101 | | **Rejected** | 7 | B5, B6, B13, B17, B18, B25, B45 | @@ -507,9 +522,9 @@ All milestones are complete: M10 → M12 → M13 (internal) → M11 → M14. M13 The active roadmap is **26 standalone items** sorted into **8 priority tiers** (P0–P7) with **6 execution waves**. The GitHub issue queue is clear; Wave 1 is complete, and Wave 2 now starts at B88 in the CI & Tooling pack, with the roaring benchmark investigation queued in the performance lane. See [Execution Order](#execution-order) for the full sequence. -Rejected items live in `GRAVEYARD.md`. Resurrections require an RFC. -Promotable pre-design intake now lives in `BACKLOG/`. This file remains the -committed milestone/release inventory. +Rejected items live in `docs/method/graveyard/`. Resurrections must address the rejection note. +Promotable pre-design intake lives in `docs/method/backlog/` with lane organization. +This file remains the committed milestone/release inventory. --- @@ -700,13 +715,13 @@ Exploratory concepts captured during PR hardening. These are intentionally fully - Golden output tests for deterministic summary formatting. - Smoke test ensuring script exits non-zero on API/auth failures. 
-## Concern 4 — Documentation Drift: `ROADMAP.md` vs `BACKLOG/` +## Concern 4 — Documentation Drift: `ROADMAP.md` vs Backlog The roles are now split explicitly: - `ROADMAP.md` owns committed milestone/release inventory -- `BACKLOG/` owns promotable pre-design items -- `docs/design/` owns active design docs +- `docs/method/backlog/` owns promotable pre-design items (lane-organized) +- `docs/design/` owns active cycle design docs Backlog items should be promoted into `docs/design/` before tests and implementation begin. diff --git a/docs/SYSTEMS_STYLE_JAVASCRIPT.md b/docs/SYSTEMS_STYLE_JAVASCRIPT.md new file mode 100644 index 00000000..0fa01546 --- /dev/null +++ b/docs/SYSTEMS_STYLE_JAVASCRIPT.md @@ -0,0 +1,453 @@ +# System-Style JavaScript + +**How to write JavaScript infrastructure that lasts.** + +This is the engineering standard for **`git-stunts`** and all **`flyingrobots`** repositories. It is **not** a conventional style guide about semicolons, quotes, or formatting trivia. It is a doctrine for writing JavaScript infrastructure code that remains honest under execution, replay, migration, debugging, replication, failure, and time. + +### Rule 0: Runtime Truth Wins + +When the program is running, one question matters above all others: + +**What is actually true right now, in memory, under execution?** + +If the answer depends on comments, conventions, vanished types, wishful thinking, or editor vibes, the code is lying. + +Trusted domain values must be created through runtime construction, parsing, or validation that establishes their invariants. Once established, those invariants must be preserved for as long as the value remains trusted. + +This rule outranks documentation, build steps, editor hints, static overlays, compile-time tooling, team folklore, and "but the linter said it was fine." + +### What This Means in Practice + +Infrastructure cannot afford fake contracts: + +- A type that vanishes at runtime is not an authoritative contract. 
+- A comment describing a shape is not an authoritative contract. +- A plain object that "should" have valid fields is not an authoritative contract. +- An IDE tooltip is not an authoritative contract. +- A compile step is not an authoritative contract. + +These tools can be useful. None of them outrank the runtime. + +### Why It Matters Here + +Infrastructure code touches persistence, replication, cryptographic verification, conflict resolution, deterministic replay, failure handling, system boundaries, long-lived state, version migration, and auditability. This is not view-layer glue. Mushy assumptions here turn into real bugs with long half-lives. + +### The Hierarchy of Truth + +When layers disagree, authority flows in this order: + +1. **Runtime domain model** — constructors, invariants, methods, error types +2. **Boundary schemas and parsers** — Zod, CBOR decoders, protocol validators +3. **Tests** — the executable specification +4. **JSDoc and design docs** — human-facing explanations of the runtime model +5. **IDE and static tooling** — editor navigation, refactoring support +6. **TypeScript** — useful dialect, not final authority + +### Scope + +This standard is optimized for: + +- Infrastructure code with strong invariants +- Long-lived systems with explicit boundaries +- Direct execution workflows portable across hosts +- Browser-capable cores +- JavaScript-first repositories +- Code that must be teachable, legible, and publishable + +It is not a claim that every JavaScript project should follow this exact approach. It **is** a claim that, for this family of repositories, runtime-backed domain modeling beats soft shape trust. + +### Language Policy + +#### JavaScript Is the Default + +JavaScript is chosen deliberately. 
It is not perfect — parts of it are cursed and deserve open mockery — but it offers a rare combination: + +- Fast to write and change +- Highly portable +- Backed by a flexible object model +- Direct to execute +- Expressive enough for serious infrastructure +- Widely understood + +Many of these projects are built not just to run, but to be read, explained, taught from, and used as reference implementations. JavaScript lowers the barrier to entry for readers in a way few other languages can match. That readability is not a side benefit — it is part of the design. + +Fun matters too. A language that feels pleasant to iterate in yields tighter feedback loops, more experiments, and more finished work. That is sound engineering economics. + +#### TypeScript: Allowed, Not Authoritative + +TypeScript is a useful typed dialect that improves editor workflows, refactoring, and external compatibility. What this standard rejects is elevating TypeScript to the role of final authority. + +TypeScript may help with editor navigation, consumer ergonomics, and static checks. It does **not** replace runtime validation, preserve runtime invariants, or excuse weak domain modeling. + +The true sources of truth remain the runtime domain types, boundary parsing, and tests. **TypeScript is allowed. TypeScript is not king.** + +Use TypeScript where it helps. Never confuse it with the source of truth. + +#### Escape Hatch: Rust via WebAssembly + +When JavaScript is insufficient — tight CPU-bound loops, memory-sensitive systems, unsafe parsing of hostile binary inputs, cryptographic kernels — use Rust. + +Rust provides memory safety without garbage collection, explicit ownership, excellent performance, and strong WebAssembly support. It is the recommended companion when the problem outgrows JavaScript. 
+ +**Preferred architecture split:** + +| Layer | Language | Role | +|--------------------------|-------------------|-------------------------------------------| +| Core domain logic | JavaScript | Default. Portable. Browser-ready. | +| Performance-critical kernels | Rust → Wasm | When safety/speed constraints justify it | +| Host adapters | JavaScript | Node, Deno, browser — behind ports | +| Orchestration | JavaScript | Glue between cores and hosts | + +### Architecture + +#### Browser-First Portability + +The browser is the most universal deployment platform and the ultimate portability test. Core logic prefers web-platform-friendly primitives: + +```javascript +// ✅ Portable +const bytes = new TextEncoder().encode(text); +const arr = new Uint8Array(buffer); +const url = new URL(path, base); + +// ❌ Node-only — belongs in adapters +const buf = Buffer.from(text, 'utf8'); +const resolved = require('path').resolve(p); +``` + +#### Hexagonal Architecture Is Mandatory + +Core domain logic must never depend directly on Node globals, filesystem APIs, `process`, `Buffer`, or host-specific calls. Those belong behind adapter ports. + +**Core rule:** Core logic should not know that Node exists. Node-only facilities must remain exclusively in adapter implementations. 
+ +```javascript +// ✅ Core speaks in portable terms +class ReplicaEngine { + constructor(storage, clock, codec) { + // storage, clock, codec are ports — capabilities, not implementations + this._storage = storage; + this._clock = clock; + this._codec = codec; + } + + async applyOp(op) { + const timestamp = this._clock.now(); + const bytes = this._codec.encode(op); + await this._storage.put(op.key, bytes, timestamp); + } +} + +// ✅ Adapter implements the port for a specific host +class NodeFsStorageAdapter { + async put(key, bytes, timestamp) { + const filePath = path.join(this._root, key); + await fs.writeFile(filePath, bytes); + } +} + +// ✅ Browser adapter implements the same port +class IndexedDbStorageAdapter { + async put(key, bytes, timestamp) { + const tx = this._db.transaction('store', 'readwrite'); + await tx.objectStore('store').put({ key, bytes, timestamp }); + } +} +``` + +### The Object Model + +System-style JavaScript organizes code around four categories of **runtime-backed** objects: + +**Value Objects** — Meaningful domain values with invariants + +```javascript +class ObjectId { + constructor(hex) { + if (typeof hex !== 'string' || !/^[0-9a-f]{40,64}$/.test(hex)) { + throw new InvalidObjectId(hex); + } + this._hex = hex; + Object.freeze(this); + } + + toString() { return this._hex; } + equals(other) { return other instanceof ObjectId && other._hex === this._hex; } +} +``` + +**Entities** — Identity and lifecycle + +```javascript +class Replica { + constructor(id, clock) { + this._id = ReplicaId.from(id); + this._clock = clock; + this._log = []; + } + + append(op) { + const validated = Op.from(op); // boundary validation + this._log.push(validated); + return this._clock.tick(); + } +} +``` + +**Results and Outcomes** — Runtime-backed domain types, not tagged unions + +```javascript +class OpApplied { + constructor(op, timestamp) { + this.op = op; + this.timestamp = timestamp; + Object.freeze(this); + } +} + +class OpSuperseded { + constructor(op, 
winner) { + this.op = op; + this.winner = winner; + Object.freeze(this); + } +} + +// Runtime dispatch — not tag switching +if (outcome instanceof OpSuperseded) { + return outcome.winner; +} +``` + +**Errors** — Domain failures are first-class objects + +```javascript +class InvalidObjectId extends DomainError { + constructor(value) { + super(`Invalid object ID: ${typeof value === 'string' ? value.slice(0, 16) + '…' : typeof value}`); + this.name = 'InvalidObjectId'; + this.value = value; + } +} + +// ✅ Branch on type +if (err instanceof InvalidObjectId) { /* ... */ } + +// ❌ Never parse messages +if (err.message.includes('invalid')) { /* raccoon-in-a-dumpster energy */ } +``` + +### Principles + +These are the load-bearing architectural commitments. Violating any of these is a design-level issue. + +**P1: Domain Concepts Require Runtime-Backed Forms** +If a concept has invariants, identity, or behavior, it must have a runtime-backed representation — usually a class. A typedef or plain object is insufficient. + +```javascript +// ❌ Shape trust — nothing enforces this at runtime +/** @typedef {{ writerId: string, lamport: number }} EventId */ + +// ✅ Runtime-backed — invariants established on construction +class EventId { + constructor(writerId, lamport) { + this._writerId = WriterId.from(writerId); + this._lamport = Lamport.from(lamport); + Object.freeze(this); + } +} +``` + +**P2: Validation Happens at Boundaries and Construction Points** +Untrusted input becomes trusted data only through constructors or dedicated parse methods. Constructors establish invariants; they perform no I/O or async work. 
+ +```javascript +// Boundary: raw bytes → validated domain object +const decoded = cborDecode(bytes); +const parsed = EventIdSchema.parse(decoded); // schema rejects malformed input +const eventId = new EventId(parsed.writerId, parsed.lamport); // constructor establishes invariants +``` + +**P3: Behavior Belongs on the Type That Owns It** +Avoid switching on `kind`/`type` tags. Put behavior on the owning type. + +```javascript +// ❌ External switch on tags +function describe(outcome) { + switch (outcome.type) { + case 'applied': return `Applied at ${outcome.timestamp}`; + case 'superseded': return `Beaten by ${outcome.winner}`; + } +} + +// ✅ Behavior lives on the type +class OpApplied { + describe() { return `Applied at ${this.timestamp}`; } +} + +class OpSuperseded { + describe() { return `Beaten by ${this.winner}`; } +} +``` + +**P4: Schemas Belong at Boundaries, Not in the Core** +Use schemas (e.g., Zod) to reject malformed input at the edge. Domain types own behavior and invariants inside the boundary. + +```javascript +// ✅ Edge: schema validates untrusted input +const ReplicaConfigSchema = z.object({ + id: z.string().uuid(), + maxLogSize: z.number().int().positive(), +}); + +// ✅ Core: domain type provides behavior +class ReplicaConfig { + constructor(id, maxLogSize) { + this._id = ReplicaId.from(id); + this._maxLogSize = maxLogSize; + Object.freeze(this); + } + + allowsAppend(currentSize) { + return currentSize < this._maxLogSize; + } +} + +// ✅ Boundary glue +function parseReplicaConfig(raw) { + const data = ReplicaConfigSchema.parse(raw); + return new ReplicaConfig(data.id, data.maxLogSize); +} +``` + +**P5: Serialization Is the Codec's Problem** +The byte layer (CBOR/JSON/etc.) stays separate from the meaning layer. Domain types do not know how they are encoded. 
+
+```javascript
+// ✅ Codec handles the wire format
+class EventCodec {
+  encode(event) {
+    return cborEncode({
+      writerId: event.writerId.toString(),
+      lamport: event.lamport.value,
+      payload: event.payload,
+    });
+  }
+
+  decode(bytes) {
+    const raw = cborDecode(bytes);
+    return new Event(
+      WriterId.from(raw.writerId),
+      Lamport.from(raw.lamport),
+      raw.payload
+    );
+  }
+}
+```
+
+**P6: Single Source of Truth**
+Do not duplicate the same contract across JSDoc, TypeScript, and validators. Define the runtime model first. Everything else derives from or documents it.
+
+**P7: Runtime Dispatch Over Tag Switching**
+Inside a coherent runtime, `instanceof` is often the correct dispatch mechanism.
+
+```javascript
+// ✅ Direct dispatch
+if (outcome instanceof OpSuperseded) {
+  return outcome.winner;
+}
+
+// ✅ Policy objects instead of option flags
+const replayPolicy = ReplayPolicy.speculativeForkAllowed();
+const result = await replayer.replaySegment(segment, replayPolicy);
+```
+
+**Cross-realm note:** `instanceof` breaks across realm boundaries (iframes, web workers, multiple module instances). When values cross realms, use branding instead:
+
+```javascript
+// Module-scope symbol: a computed key cannot reference the class's own
+// binding while the class is still being defined, and `Symbol.for` keeps
+// the brand shared across realms.
+const EVENT_ID_BRAND = Symbol.for('flyingrobots.EventId');
+
+class EventId {
+  get [EVENT_ID_BRAND]() { return true; }
+  static is(v) { return v != null && v[EVENT_ID_BRAND] === true; }
+}
+```
+
+### Practices
+
+These are concrete coding disciplines. Most are linter-enforceable. Violations should fail CI.
+
+- **`any` is banished; `unknown` is quarantined** — `any` is surrender. `unknown` is acceptable only at raw edges and must be eliminated through parsing immediately.
+- **Trusted values must preserve integrity** — Use `Object.freeze()`, private fields, or defensive copying to protect invariants after construction.
+- **Error type is primary; codes are optional metadata** — Use specific error classes. Never branch on `err.message`. Error codes are fine as boundary metadata.
+- **Parameter objects must add semantic value** — Public APIs should not accept anonymous bags of options. + +```javascript +// ❌ Options sludge +await replayer.replay(segment, { allowFork: true, maxRetries: 3, strict: false }); + +// ✅ Named policy +const policy = ReplayPolicy.speculativeForkAllowed({ maxRetries: 3 }); +await replayer.replaySegment(segment, policy); +``` + +- **Raw objects may carry bytes, not meaning** — Plain objects are for decoded payloads or logging only. +- **Magic numbers and strings are banished** — Give semantic numbers a named constant. Centralize strings used for identifiers, events, or config keys. +- **Boolean trap parameters are banished** — Use named parameter objects or separate methods. + +```javascript +// ❌ What does `true` mean here? +engine.compact(log, true); + +// ✅ Intention is legible +engine.compact(log, { preserveTombstones: true }); +// or +engine.compactPreservingTombstones(log); +``` + +- **Structured data stays structured** — Machines must not be forced to parse prose to recover data. +- **Module scope is the first privacy boundary** — If it is not exported, it is private. +- **JSDoc documents the runtime model; it does not replace it** — JSDoc explains actual runtime behavior and contracts. It must never substitute for runtime-backed types or validation. + +### Tooling Discipline + +**Lint is law.** + +- Lint errors fail CI. +- Suppressions require a documented justification. +- Enforce hardest on: unsafe coercion, floating promises, raw `Error` objects, and host-specific API leakage into core code. + +**When TypeScript is used:** + +- It remains subordinate to runtime validation. +- It must not be treated as a substitute for domain modeling. +- `any` is banned. `unknown` at raw edges only, eliminated immediately. +- Type-only constructs must not create a false sense of safety that the runtime does not back up. + +### The Anti-Shape-Soup Doctrine + +Most bad JavaScript infrastructure stems from weak modeling. 
The discipline is: + +1. Name the concept. +2. Construct the concept — with validated invariants. +3. Protect the invariant — freeze, encapsulate, defend. +4. Attach the behavior — on the type that owns it. +5. Guard the boundary — schemas at the edge, domain types inside. +6. Separate the codec — serialization is not the domain's problem. +7. Isolate the host — Node behind adapters, core stays portable. +8. Document the runtime — JSDoc explains what actually exists. +9. Test the truth — executable specification, not wishful coverage. + +### Review Checklist + +Before merging, ask: + +- Is this a real domain concept? Where is its runtime-backed form? +- Where is `unknown` eliminated? +- Does construction establish trust? +- Does behavior live on the type that owns it? +- Is anyone parsing `err.message` like a raccoon in a dumpster? +- Are there magic numbers or strings? +- Could this logic run in a browser? +- Is tooling fiction being mistaken for architecture? + +**This is infrastructure.** Code cannot rely on costumes or pretend that comments are contracts. JavaScript is enough — not because it is magical, but because runtime truth beats phantom certainty every time. 
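Read end to end, the first five doctrine steps compress into a few lines. A hedged sketch with an invented concept — `TombstoneCount` is illustrative, not from any repo here:

```javascript
// 1. Name the concept.
class TombstoneCount {
  // 2. Construct the concept — with validated invariants.
  constructor(n) {
    if (!Number.isInteger(n) || n < 0) {
      throw new RangeError(`TombstoneCount requires a non-negative integer, got ${n}`);
    }
    this._n = n;
    // 3. Protect the invariant — freeze, encapsulate, defend.
    Object.freeze(this);
  }

  // 4. Attach the behavior — on the type that owns it.
  increment() {
    return new TombstoneCount(this._n + 1);
  }

  get value() {
    return this._n;
  }
}

// 5. Guard the boundary — untrusted input becomes trusted data only here.
function parseTombstoneCount(raw) {
  return new TombstoneCount(Number(raw));
}
```

Codec separation, host isolation, JSDoc, and tests (steps 6–9) then hang off this runtime-backed core rather than off a typedef.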
diff --git a/docs/archive/retrospectives/2026-04-01-tsc-zero-and-joinreducer-strategy.md b/docs/archive/retrospectives/2026-04-01-tsc-zero-and-joinreducer-strategy.md new file mode 100644 index 00000000..82f8d1aa --- /dev/null +++ b/docs/archive/retrospectives/2026-04-01-tsc-zero-and-joinreducer-strategy.md @@ -0,0 +1,174 @@ +# Retrospective: TSC Zero Campaign + JoinReducer OpStrategy + +Date: 2026-04-01 + +Cycle: IRONCLAD / JoinReducer structural coupling + +PR: git-stunts/git-warp#73 + +## Governing Design Inputs + +- `.claude/tsc-zero-campaign-prompt.md` — campaign brief (error landscape, lane + partitioning, gate list) +- `docs/design/joinreducer-op-strategy.md` — strategy registry design +- `adr/ADR-0001-*.md` — canonical op normalization (prior art for op type + taxonomy) + +## What Landed + +### TSC Zero Campaign + +- **1,707 TypeScript errors → 0** across 271 files +- **1,876 ESLint errors → 0** (from prior lint campaign, included in branch) +- **5 markdown lint issues → 0** +- All 8 pre-push gates green: tsc, IRONCLAD policy, consumer types, ESLint, + lint ratchet, declaration surface, markdown lint, unit tests +- 5,142 tests green — zero behavioral regressions + +Key changes: +- Mechanical TS4111 bracket-access sweep (614 errors, Node script) +- 8-lane parallel agent campaign for 1,093 strictness errors +- ESLint `dot-notation` rule disabled (conflicts with + `noPropertyAccessFromIndexSignature`) +- `.claude/**` added to ESLint ignores and vitest excludes +- `EffectSinkPort.deliver()` return type widened to + `DeliveryObservation | DeliveryObservation[]` in `index.d.ts` +- `publicLens` → `publicAperture` in consumer type fixture + +### JoinReducer OpStrategy Registry + +- Frozen `Map` with 8 entries (one per canonical op type) +- Each strategy defines 5 methods: `mutate`, `outcome`, `snapshot`, + `accumulate`, `validate` +- Load-time validation: missing method = hard error at import +- Three apply paths (`applyFast`, `applyWithReceipt`, `applyWithDiff`) 
rewired + to use registry — no more triplicated switches +- 15 new tests: 5 registry structure + 10 cross-path equivalence +- Net: +276 / -270 lines (file size neutral) + +## Design Alignment Audit + +### TSC Zero + +- all 8 pre-push gates pass: **aligned** +- no `@ts-ignore`, `@ts-expect-error`, `as any`: **aligned** (two `any` casts + were caught and removed before merge) +- no behavioral changes: **partially aligned** — three agent-authored files + (WarpRuntime.js, Observer.js, WormholeService.js) had behavioral regressions + caught by tests; originals restored with minimal type-only fixes +- ESLint zero preserved: **aligned** +- `dot-notation` rule disabled: **deliberate tradeoff** — `noPropertyAccessFromIndexSignature` + provides actual type safety; `dot-notation` is purely stylistic; they + conflict directly + +### JoinReducer OpStrategy + +- structural coupling guarantee (can't add op without all 5 methods): **aligned** +- `applyFast` zero overhead preserved: **aligned** — still calls only + `strategy.mutate()` (plus `strategy.validate()`, matching prior behavior) +- public API unchanged: **aligned** — all signatures and return types identical +- cross-path state equivalence tested: **aligned** +- dead code removed (5 switch bodies): **aligned** + +## Observed Drift (Updated — Honest Accounting) + +### 1. Agent over-refactoring (TSC campaign) + +Three of eight lane agents made behavioral changes while "fixing types": +- WarpRuntime.js: deleted `buildEffectPipeline`, rearranged imports +- Observer.js: added `_preInitFields()` that broke `_host` access +- WormholeService.js: removed null guard in `deserializeWormhole` + +425 test failures resulted. All caught by Gate 8 (unit tests). + +**Resolution:** Originals restored; minimal type-only fixes applied. Agent +prompts must be explicit: "NO behavioral changes, NO function deletion, NO +restructuring." + +**Status:** accepted — lesson captured in claude-think for future sessions. + +### 2. 
Worktree test/lint leakage + +Agent worktrees under `.claude/worktrees/` were picked up by ESLint (6,920 +false errors) and vitest (1 duplicate test failure). + +**Resolution:** Added `.claude/**` to ESLint ignores and vitest excludes. + +**Status:** accepted — permanent fix in config. + +### 3. `EffectSinkPort.deliver()` return type widened + +`MultiplexSink.deliver()` returns `DeliveryObservation[]` but the port +declared `DeliveryObservation`. Lane 3 agent widened the port; `index.d.ts` +updated to match. + +**Resolution:** This is a real API surface change. Downstream consumers that +call `.deliver()` may need to handle the array case. + +**Status:** accepted — the widening is correct (multiplex sink fans out to N +sinks, naturally returns N observations). **But:** shipped without a +`BREAKING CHANGE` commit footer, violating CLAUDE.md rules. Tracked as B173. + +### 4. No design doc for TSC campaign + +The CONTRIBUTING.md process says "design docs first." The TSC campaign went +straight from a prompt file to implementation with no design doc, no hills, +no explicit non-goals. The JoinReducer work followed the process correctly +(design doc → failing tests → implementation). The TSC work did not. + +**Status:** accepted — the TSC campaign was a mechanical cleanup, not a +design decision. But the process exception should have been called out +explicitly, not just skipped silently. + +### 5. ESLint `dot-notation` disabled globally (shortcut) + +Disabling the rule globally was a shortcut. The proper fix is +`@typescript-eslint/dot-notation` which respects +`noPropertyAccessFromIndexSignature`. We're already using the TS-ESLint +parser, so the switch is straightforward. Tracked as B172. + +**Status:** implementation shortcut — accepted for now, tracked for fix. + +### 6. 27 agent-authored files merged without line-by-line review + +Merge conflicts were resolved via `git checkout --theirs` (take the agent's +version). 
Tests caught the 3 egregious regressions, but 27 files have +agent-authored type fixes that were never audited for subtle semantic drift +(e.g. changed fallback values, widened types, reordered logic). Tests +passing does not guarantee absence of drift. + +**Status:** **not aligned** — tracked as B171 (high priority audit). + +## Playback + +### Hills + +1. **"A developer can `git push` without the pre-push firewall blocking on + type errors."** — Achieved. All 8 gates pass. + +2. **"Adding a 9th op type to JoinReducer without defining all behaviors is a + hard error at module load time."** — Achieved. Load-time validation + enforces completeness. + +### What surprised us + +- The TS4111 mechanical fix (614 errors) cascaded: fixing bracket access + resolved type inference for hundreds of downstream `noUncheckedIndexedAccess` + errors. 1,707 → 1,093 from a single category. +- `exactOptionalPropertyTypes` was the hardest strictness flag — it requires + conditional spread (`...(x !== undefined ? {x} : {})`) everywhere optional + params touch `undefined`. This is the flag most likely to generate ongoing + friction. +- The JoinReducer was less broken than the audit suggested. The CRDT kernel + was never bifurcated — only the metadata layers were triplicated. But the + strategy pattern is still the right fix for coupling. + +### What we'd do differently + +- **Gate agent behavior more tightly.** The prompt "fix TypeScript errors" + is too vague — agents interpret it as license to refactor. Future prompts + must say: "type annotations only, no behavioral changes, no function + deletion, no helper extraction." +- **Run tests between every merge, not just at the end.** We merged 8 + worktree branches before testing. Should have tested after each merge to + isolate regressions. 
diff --git a/docs/design/0001-method-bootstrap/method-bootstrap.md b/docs/design/0001-method-bootstrap/method-bootstrap.md new file mode 100644 index 00000000..29b49214 --- /dev/null +++ b/docs/design/0001-method-bootstrap/method-bootstrap.md @@ -0,0 +1,49 @@ +# Method Bootstrap + +**Cycle:** 0001-method-bootstrap +**Type:** Design +**Pulled from:** User direction (2026-04-01) + +## Sponsor human + +James — wants a calm, filesystem-native process that survives +context switches and makes both agent and human work legible. + +## Sponsor agent + +Claude — needs unambiguous structure to find work, classify it, +and operate without asking "where does this go?" + +## Hill + +The Method directory structure exists, all existing backlog items +live in it under descriptive names, and the old B-number system is +gone. From this point forward, `ls docs/method/backlog/` is the +only backlog query. + +## Playback questions + +### Agent + +- Can I find the next piece of work by running `ls` on a lane + directory? **YES/NO** +- Can I classify a new idea into the right lane without asking the + human? **YES/NO** +- Do any B-numbers remain in the repo? **NO** + +### Human + +- Does `ls docs/method/backlog/asap/` show me what matters most? + **YES/NO** +- Can I understand what each backlog item is from its filename + alone? **YES/NO** +- Is the old BACKLOG/ directory gone? **YES/NO** + +## Non-goals + +- Defining all legends upfront. Legends emerge from work. +- Migrating design docs or retrospectives — they stay where they + are. The Method structure is forward-looking. +- Writing process.md or release.md content beyond moving existing + docs into place. +- Code changes of any kind. 
diff --git a/docs/design/0002-code-nav-tool/code-nav-tool.md b/docs/design/0002-code-nav-tool/code-nav-tool.md new file mode 100644 index 00000000..bdad0f0d --- /dev/null +++ b/docs/design/0002-code-nav-tool/code-nav-tool.md @@ -0,0 +1,395 @@ +# Safe Context: Replay-Safe Structural Reads for Coding Agents + +**Cycle:** 0002-code-nav-tool +**Type:** Feature (new repo) + +## Sponsor human + +James — maintains large JS/TS and Rust codebases and wants coding +agents that can work precisely in large files without inflating +session cost. Has empirical data (Blacklight, 1,091 sessions, +291K messages) proving that context compounding from oversized reads +is the dominant cost driver in agentic coding. + +## Sponsor agent + +Claude — wastes context on full-file reads, oversized shell output, +and repeated exploration in long sessions. Read tool alone accounts +for 96.2 GB of context burden — 6.6x all other tools combined. 58% +of reads are full-file (no offset/limit). 64.5% of reads don't lead +to an edit of that file — they're exploration cost that could be +replaced by structural representations. Needs a policy-enforcing +access layer that returns the smallest correct representation needed +for the task. + +## Hill + +An agent working in a JS/TS or Rust codebase can obtain the minimum +structurally correct context required to act — file shape, export +surface, exact symbol body, or bounded source range — without +injecting large raw artifacts into long-lived conversation state. +The tool runs as an MCP server and CLI and enforces replay-safe +behavior by default. + +## Playback questions + +### Agent + +1. When I request a 2000-line file, do I get an outline instead of + the raw content? **YES/NO** +2. Can I extract just `StrandService.tick()` without reading + StrandService.js? **YES/NO** +3. Am I blocked from reading binary files, build output, and + generated artifacts? **YES/NO** +4. Does shell output get tailed instead of dumped in full? + **YES/NO** +5. 
Can I save/load session state across `/clear` boundaries? + **YES/NO** +6. Does it work on `.js`, `.ts`, `.tsx`, `.rs` files? **YES/NO** +7. Can I call every operation as an MCP tool? **YES/NO** + +### Human + +1. Does the tool install with one command and work without + configuration? **YES/NO** +2. Can I see measurable reduction in context burden in Blacklight + data after deploying it? **YES/NO** +3. Does it work across Claude Code, Gemini CLI, and Codex CLI? + **YES/NO** +4. Can I use it from the terminal as a standalone CLI? **YES/NO** + +## Non-goals + +- Full semantic code intelligence (LSP replacement) +- Cross-file reference resolution in v1 +- Persistent whole-repo index in v1 +- Code modification (this is read-only) +- Arbitrary raw artifact passthrough +- Convenience wrapper around `cat` +- General-purpose memory system +- "Whatever the agent asked for, but prettier" — this tool is + opinionated about what it returns + +## The thesis + +The biggest cost in agentic coding is not code generation. It is +replayed context. Safe-context replaces oversized raw reads with +bounded structural representations, so agents stay precise without +poisoning their own session state. + +### Evidence (from Blacklight, 1,091 sessions) + +| Finding | Number | +|---|---| +| Read context burden | 96.2 GB (6.6x all other tools) | +| Full-file reads (no offset/limit) | 58% of all reads | +| Reads that don't lead to editing that file | 64.5% | +| Dynamic read cap alone | 54.5% burden reduction | +| Session length cap alone | 58.9% burden reduction | +| Both combined | 75.1% burden reduction | +| Top 3 sessions (of 715) | 23% of all lifetime burden | +| WarpGraph.js | 1,053 reads, 85 sessions, 1.74 GB burden | +| Worst single session | 12.7 GB burden, 5,900 messages | + +The data says: + +1. Read is the monster. +2. Long sessions are money furnaces. +3. Shell output is material (especially Gemini). +4. Subagent dumps are context bombs. +5. 
Policy + session management handle 75% before any indexing. + +## Before and after + +Real scenarios. Token counts are raw output. Context burden = +tokens x messages remaining in session. + +### Scenario 1: Understand a god object + +**Before:** 7 Read calls across StrandService.js. ~4,700 tokens +raw, but at turn 5 of a 200-turn session that's +`4,700 x 195 = 916,500 tokens of context burden`. + +**After:** `safe_read("StrandService.js")` → policy intercepts, +returns `file_outline`. ~175 tokens raw, same position = +`175 x 195 = 34,125 burden`. Then `code_show("StrandService.tick")` +for the one method needed. **96% raw reduction, 96% burden +reduction.** + +### Scenario 2: Pre-refactor survey of 8 files + +**Before:** ~24 Read calls, ~9,400 tokens raw. At turn 3 of a +150-turn session: `9,400 x 147 = 1,381,800 burden`. + +**After:** 8 `file_outline` calls. ~1,400 tokens raw. +`1,400 x 147 = 205,800 burden`. **85% raw, 85% burden.** And the +context window stays clean for actual work — reasoning, test output, +edits. + +### Scenario 3: The compounding catastrophe + +**Before:** WarpGraph.js (800 LOC) read 12 times in a 400-message +session. Each read ~2,800 tokens. Total raw: 33,600. But +compounded across messages remaining at each read point: estimated +**5-8 million tokens of burden** from one file in one session. + +**After:** First access returns `file_outline` (~280 tokens). Agent +requests specific symbols as needed via `code_show`. Even 10 +targeted extractions total ~2,000 tokens raw. Burden drops by +**~95%** because each payload is small and the outline is never +re-read (agent has the shape). + +### Scenario 4: The GIF incident + +**Before:** `seek-demo.gif` read 4 times. 1.3 MB of binary per +read. **395 MB of context burden** from 4 tool calls. + +**After:** `safe_read("seek-demo.gif")` → policy refuses. Returns: +`Binary file (GIF, 1.3 MB). Use ls -lh for metadata.` Zero bytes +of context burden. 
**100% reduction.** + +### Scenario 5: The test loop + +**Before:** `npm test` run 30 times in an edit-test loop. Each run +outputs ~8 KB. Late in session with 200 messages remaining: +`8,000 x 30 x 100 (avg remaining) = 24,000,000 burden`. + +**After:** `run_capture("npm test", 60)` tees full output to +`/tmp/test.log`, returns only last 60 lines (~2 KB). If more needed, +`read_range("/tmp/test.log", 1, 50)`. Burden drops by **~75%**, and +the full output is still on disk if needed. + +## Architecture + +### Layer 1: Policy (the king) + +Decides what kind of answer is allowed. + +- No binary/media reads (`.gif`, `.png`, `.jpg`, `.pdf`, `.zip`, + `.wasm`, `.bin`, `.sqlite`) +- No build/generated reads (`dist/`, `build/`, `target/`, `.next/`, + `node_modules/`) +- Dynamic size cap based on session depth: + + | Session stage | Messages elapsed | Max raw output | + |---|---|---| + | Early | < 50 | 20 KB | + | Mid | 50-200 | 10 KB | + | Late | > 200 | 4 KB | + +- Over-cap reads are downgraded to `file_outline` + jump table +- Optional re-read warning ("you already read this file 3 turns + ago") + +Policy is the product. Everything else enables it. + +### Layer 2: Structural extraction (the enabler) + +Tree-sitter-backed extraction for JS/TS/Rust: + +- **File outline** — exports, declarations, class/impl members, + line ranges +- **Symbol body** — complete syntactic extent of a named + declaration, with doc comments +- **Export surface** — what this module exposes to importers +- **Definition finding** — where a symbol is defined (not used) + +Tree-sitter is the right foundation: +- Multi-language (JS, TS, TSX, Rust in one framework) +- Fast (single-digit ms per file parse) +- Battle-tested (GitHub, Neovim, Zed, Helix) +- Node.js bindings via native addon + +### Layer 3: Transport (necessary, not interesting) + +- **MCP server** (stdio) — primary delivery. 
Works with Claude Code, + Gemini CLI, Codex CLI +- **CLI** — for human use and testing + +### Layer 4: Session hygiene (the other big lever) + +- `state_save()` / `state_load()` — write/read + `WORKING_STATE.md` for cross-clear continuity +- Tripwires (phase 3): + - `messages > 500` + - `edit_bash_transitions > 30` + - `tool_calls_since_last_user_message > 80` + - Any single output > 20 KB after 300 messages + +## Command surface + +### 1. `safe_read(path, intent?)` + +Primary entry point. The main product. + +Returns one of: +- **Exact file content** when safely under the cap +- **Structural outline** when too large +- **"Pick a symbol/range" guidance** when exploration is needed +- **Refusal** for binary/build/generated garbage + +The `intent` parameter is optional. If provided ("I need to +understand the class shape" vs "I need to edit line 45"), the policy +can make smarter decisions. + +### 2. `file_outline(path, opts?)` + +Structural skeleton. Exports, top-level declarations, class/impl +members, line ranges. No bodies. + +Cheap, structural, high-signal. This is what replaces 64.5% of +exploration reads. + +### 3. `code_show(target, opts?)` + +Precise extraction. The scalpel. + +- `StrandService.tick` — a class method +- `src/foo.rs#VersionVector.merge` — file-qualified Rust method +- `reduceV5` — top-level function (project-wide search if ambiguous) + +Returns the complete syntactic extent: body, JSDoc/doc comments, +decorators/attributes. Nothing else. + +### 4. `code_find(symbol, opts?)` + +Definitions only. Not grep. Not references. Not "anything containing +this string." + +Returns file path + line number for every definition of the symbol +across the project. + +### 5. `read_range(path, start, end)` + +For when you know where you're going. Bounded, no policy +interception (the caller already has a precise target). + +### 6. `run_capture(cmd, tail?)` + +Runs a shell command. Tees full output to a log file. Returns only +the last N lines (default 60). 
Full output available on disk via +`read_range` if needed. + +Because the data shows shell output is material, and for Gemini it +was the #1 burden source. + +### 7. `state_save(content)` / `state_load()` + +Thin wrapper over `WORKING_STATE.md`. Saves/loads structured session +state for cross-clear continuity. + +Because the data is screaming that runaway sessions are the other +half of the disaster. + +## Open questions + +1. **Session depth tracking** — How does the MCP server know how + deep the session is? MCP tools don't receive conversation + metadata. Options: (a) the agent tells it via a parameter, + (b) the server counts its own tool calls as a proxy, + (c) a hook injects session depth. + +2. **Re-read detection** — Tracking "you already read this" requires + the server to maintain per-session state. Feasible since the + server lives for the session duration, but needs a simple + in-memory cache. + +3. **JSDoc attachment** — tree-sitter treats comments as standalone + nodes. Need a heuristic: "comment immediately preceding a + declaration with no blank line gap belongs to it." + +4. **Rust impl grouping** — `code_show VersionVector` should return + struct + all impl blocks, including trait impls. Requires walking + the full file AST, not just pattern matching. + +5. **Project root detection** — For `code_find` (project-wide + search), how to determine the project root? Options: + `.git` presence, `package.json`, `Cargo.toml`, or explicit + config. + +6. **Cross-LLM MCP compatibility** — Claude Code, Gemini CLI, and + Codex CLI all support MCP but with slightly different + configuration. Need to verify stdio transport works identically + across all three. + +## Phasing + +### Phase 1 — The Governor + +Ship: `safe_read`, `file_outline`, `read_range`, `run_capture`, +`state_save`/`state_load`. JS/TS only. MCP + CLI. + +**Goal:** change behavior immediately. 
This phase alone should +deliver the 54.5% read burden reduction that the dynamic cap +promises, plus shell output containment. + +### Phase 2 — Precision tools + +Add: `code_show`, `code_find`, `exports`. Rust support for all +structural operations. + +**Goal:** make safe reads frictionless. When the governor +downgrades a read to an outline, the agent can immediately request +the exact symbol it needs. + +### Phase 3 — Session intelligence + +Add: tripwires, re-read warnings, session-depth-aware enforcement, +automatic `WORKING_STATE.md` nudges. + +**Goal:** stop runaway sessions before they become archaeological +sites. + +### Phase 4 — Optional sophistication + +Maybe: lightweight symbol cache, import/deps views, +references-lite, symbol-aware revision diffs. + +Not before the first three phases prove themselves. + +## Project structure + +```text +@git-stunts/safe-context/ + bin/ + safe-context.js CLI entry point + src/ + policy/ + rules.js Ban lists, size caps, dynamic thresholds + gate.js Decision engine (pass/outline/refuse) + parser/ + index.js Tree-sitter init + grammar loading + javascript.js JS/TS/TSX extraction queries + rust.js Rust extraction queries + operations/ + safe-read.js Policy-enforced read + outline.js File skeleton + show.js Symbol extraction + find.js Definition search + range.js Bounded reads + capture.js Shell output tailing + state.js Session state save/load + mcp/ + server.js MCP server (stdio transport) + tools.js Tool definitions + handlers + output/ + formatter.js CLI output formatting + test/ + fixtures/ Sample JS/TS/Rust files + unit/ Operation tests + policy/ Policy decision tests + package.json + LICENSE Apache 2.0 +``` + +## Success criteria + +- Large exploratory reads are replaced by outlines and targeted + reads +- Binary/build/generated reads are blocked or redirected +- Long-session compounding is reduced through policy and state + resets +- Agents can operate effectively in 2K+ LOC files without reading + the whole file 
+- Measurable reduction in context burden visible in Blacklight data + after deployment diff --git a/docs/design/0003-safe-context/safe-context.md b/docs/design/0003-safe-context/safe-context.md new file mode 100644 index 00000000..598f7ee4 --- /dev/null +++ b/docs/design/0003-safe-context/safe-context.md @@ -0,0 +1,878 @@ +# Graft — Phase 1: The Governor + +**Cycle:** 0003-safe-context +**Type:** Feature (new repo: `@flyingrobots/graft`) +**Pulled from:** `asap/DX_safe-context-phase-1.md` +**Prior art:** `docs/design/0002-code-nav-tool/code-nav-tool.md` + +**Product:** `graft` — structural reads and context governance for +coding agents. CLI as `git graft`, MCP as `graft-mcp`. + +The name: Git has trees and branches. Grafting is attaching new +growth onto existing rootstock — semantic eyesight grafted onto +Git's history substrate. + +## Sponsor human + +James — maintains JS/TS and Rust codebases. Has empirical proof +(Blacklight, 1,091 sessions) that Read context burden is the +dominant cost in agentic coding. Wants a tool that enforces +replay-safe behavior across Claude Code, Gemini CLI, and Codex CLI +without requiring agents to be disciplined on their own. + +## Sponsor agent + +Claude — 96.2 GB of Read context burden. 58% full-file reads. 64.5% +exploration reads that never lead to an edit. Needs a policy layer +that prevents it from stuffing its own context with gravel, and +structural extraction that makes the policy usable instead of +obnoxious. + +## Hill + +An agent working in a JS/TS codebase can obtain the minimum +structurally correct context required to act — file shape, export +surface, or bounded source range — without injecting large raw +artifacts into long-lived conversation state. The tool runs as an +MCP server and CLI and enforces replay-safe behavior by default. + +Phase 1 scope: JS/TS only. `safe_read`, `file_outline`, +`read_range`, `run_capture`, `state_save`/`state_load`. No +`code_show` or `code_find` yet (Phase 2). 
+ +## Playback questions + +### Agent + +1. When I `safe_read` a 2000-line JS file, do I get an outline + instead of raw content? **YES/NO** +2. When I `safe_read` a 50-line config file, do I get the raw + content? **YES/NO** +3. Am I blocked from reading `.gif`, `.png`, `.wasm`, and + `node_modules/`? **YES/NO** +4. Does `run_capture("npm test")` return only the tail, with full + output on disk? **YES/NO** +5. Can I `state_save` before a `/clear` and `state_load` after? + **YES/NO** +6. Can I call every operation as an MCP tool from Claude Code? + **YES/NO** +7. When I outline a half-edited file with broken syntax, do I get + a best-effort outline with `partial: true`? **YES/NO** + +### Human + +1. Can I `npm install -g @flyingrobots/graft` and it works? + **YES/NO** +2. Does `git graft outline src/domain/services/StrandService.js` + return a useful structural skeleton from the CLI? **YES/NO** +3. Can I point it at any JS/TS project with zero config? **YES/NO** +4. Can I register it as an MCP server in one line of JSON? **YES/NO** + +## Internal vocabulary + +Graft is WARP optics with a job: derive the smallest lawful view +of the code that lets the agent act without poisoning its own +context. + +These terms are internal architecture language, not public CLI +names: + +| Term | Meaning | +|---|---| +| **projection** | The output mode chosen by policy: content, outline, refusal, error. Not the full file — a lawful reduced view. | +| **focus** | The targeting mechanism: file, class, method, range, export surface. Bounds what the agent sees. | +| **residual** | The hidden context not surfaced to the agent. The 1,900 lines of StrandService that the outline doesn't show. Exists, acknowledged, not transmitted. | +| **receipt** | A structured decision log entry: what was requested, what was returned, why, bytes avoided. | +| **witness** | (Future) The exact focus chosen, the lines returned, why that focus was selected, what larger whole it came from. 
"What did the agent see before it made this edit?" | + +This vocabulary gives the architecture coherence: policy is not +arbitrary, projection is not just truncation, focus is not just +slicing, receipts are not just logs. + +## Non-goals + +- Rust support (Phase 2) +- `code_show` / `code_find` / `exports` (Phase 2) +- Persistent whole-repo index +- Cross-file reference resolution +- Code modification +- LSP replacement +- Semantic type resolution +- Session tripwires and auto-nudges (Phase 3) + +## Command contracts + +### `safe_read(path, intent?)` + +**Input:** +- `path` — file path (absolute or relative to project root) +- `root` — optional project root override +- `intent` — optional string hint ("understand shape", "find + method X", "edit line 45") + +**Policy decisions:** + +| Condition | Response | +|---|---| +| Binary extension (`.gif`, `.png`, `.jpg`, `.pdf`, `.zip`, `.wasm`, `.bin`, `.sqlite`, `.ico`, `.mp4`, `.mov`) | Refuse. Return file type + size metadata. | +| Build/generated path (`node_modules/`, `dist/`, `build/`, `.next/`, `target/`, `coverage/`) | Refuse. No source-path guessing — just state what was blocked and why. | +| File does not exist | Error (not a refusal — see error model below). | +| Secret file (`.env`, `*.pem`, `*.key`, `id_rsa`, `id_ed25519`, `credentials.json`) | Refuse. Built-in, not `.graftignore`-dependent. | +| File <= line threshold AND <= byte threshold | Return raw content. | +| File > either threshold | Return `file_outline` result + next-step hints. | +| Known junk patterns (`.min.js`, lockfiles, giant JSON) | Refuse. Return metadata only. | + +**Thresholds (configurable):** + +| Metric | Default | +|---|---| +| Max lines | 150 | +| Max bytes | 12 KB | + +Both must pass for raw content. A 40-line minified atrocity that's +50 KB still gets outlined. Lockfiles (`package-lock.json`, +`pnpm-lock.yaml`, `yarn.lock`) and `.min.js` files are always +refused regardless of size. 
+ +**Intent is advisory only.** It may affect messaging and next-step +hints. It never weakens safety bounds. An agent saying "edit line +45" does not unlock a larger read. + +**Action model:** + +| Action | Meaning | +|---|---| +| `content` | Raw file returned (under thresholds) | +| `outline` | Structural skeleton returned (over thresholds) | +| `refused` | Policy blocked the read (binary, build, secret, graftignore) | +| `error` | Operational failure (missing file, unreadable, bad path) | + +`refused` = the governor said no. `error` = something broke. These +are different: a refusal is correct behavior; an error is a problem. + +**Output shape:** +```json +{ + "action": "content" | "outline" | "refused" | "error", + "path": "src/foo.js", + "lines": 2048, + "bytes": 68402, + "content": "..." | null, + "outline": { ... } | null, + "reason": "over_line_threshold" | "binary_extension" | ... | null, + "explain": "File exceeded 150-line cap; outline returned instead." | null, + "policy": { "lineThreshold": 150, "byteThreshold": 12000, "triggeredBy": "over_line_threshold" } | null, + "next": ["read_range(path, 1240, 1271) for method tick"] | null, + "savings": { "bytesAvoided": 68402 } | null +} +``` + +### `file_outline(path)` + +**Input:** file path. + +**Output:** structural skeleton of the file. + +**Formatting bounds:** +- Parameter strings truncated at 60 chars (ellipsized) +- Default values and destructuring patterns compacted +- Generic type parameters summarized, not expanded +- Max 80 chars per signature line +- Output capped at 200 entries (declarations + members). If a file + has more, the tail is elided with metadata: + +```json +{ + "entryCount": 200, + "totalEntryCount": 317, + "truncated": true, + "elidedCount": 117 +} +``` + +**Broken files (syntactically invalid JS/TS):** + +Agents constantly work on half-edited, mid-refactor files. This is +normal, not an error. Tree-sitter produces partial parse trees for +broken syntax — it does not bail. 
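The partial-parse behavior can be sketched as a walk that collects `ERROR` nodes. The node shape (`type`, `startPosition`, `children`) mirrors tree-sitter's API, but this sketch runs on a hand-built stub rather than a real parse tree, since the grammars are native addons:

```javascript
// Sketch: derive the `partial` / `parseErrors` outline metadata by
// walking a syntax tree for ERROR nodes left behind by broken syntax.
function collectParseErrors(node, out = []) {
  if (node.type === 'ERROR') {
    // tree-sitter rows are 0-indexed; report 1-indexed lines
    out.push({ line: node.startPosition.row + 1, message: 'parse error' });
  }
  for (const child of node.children ?? []) collectParseErrors(child, out);
  return out;
}

// Stub standing in for a half-edited file whose class body failed to parse.
const tree = {
  type: 'program',
  startPosition: { row: 0 },
  children: [
    { type: 'function_declaration', startPosition: { row: 2 }, children: [] },
    { type: 'ERROR', startPosition: { row: 187 }, children: [] },
  ],
};

const parseErrors = collectParseErrors(tree);
const meta = { partial: parseErrors.length > 0, parseErrors };
```

The cleanly parsed symbols (the `function_declaration` here) still make it into the outline; only the metadata flags the damage.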
+ +Contract: outline is **best-effort**. If the file has parse errors, +the outline includes whatever structure tree-sitter recovered, plus +metadata: + +```json +{ + "partial": true, + "parseErrors": [ + { "line": 188, "message": "unterminated class body" } + ] +} +``` + +The outline is still useful — it shows the symbols that parsed +cleanly. The `partial` flag tells the agent "this file is broken, +so the outline may be incomplete." This is strictly better than +refusing to outline a broken file. + +**Root parameter:** `file_outline(path, { root?, focus? })` + +`root` overrides project root detection for this call. `focus` +limits output to a single class or top-level declaration by name. + +```json +{ + "path": "src/domain/services/StrandService.js", + "lines": 2048, + "language": "javascript", + "exports": [ + { "name": "STRAND_SCHEMA_VERSION", "kind": "const", "line": 89 }, + { "name": "default", "kind": "class", "alias": "StrandService", "line": 901 } + ], + "declarations": [ + { "name": "compareStrings", "kind": "function", "line": 100, "endLine": 102 }, + { "name": "normalizeCreateOptions", "kind": "function", "line": 245, "endLine": 308 } + ], + "classes": [ + { + "name": "StrandService", + "line": 901, + "endLine": 2048, + "members": [ + { "name": "constructor", "kind": "method", "line": 907, "params": "{ graph }" }, + { "name": "create", "kind": "method", "line": 917, "async": true, "params": "options = {}" }, + { "name": "braid", "kind": "method", "line": 952, "async": true, "params": "strandId, options = {}" }, + { "name": "get", "kind": "method", "line": 985, "async": true, "params": "strandId" }, + { "name": "tick", "kind": "method", "line": 1240, "async": true, "params": "strandId" }, + { "name": "_buildRef", "kind": "method", "line": 1563, "params": "strandId", "private": true } + ] + } + ] +} +``` + +**CLI text output:** + +```text +src/domain/services/StrandService.js (2048 lines, javascript) + + exports: + const STRAND_SCHEMA_VERSION :89 + 
default class StrandService :901 + + functions: + compareStrings(a, b) :100-102 + normalizeCreateOptions(options) :245-308 + ... + + class StrandService :901-2048 + constructor({ graph }) :907 + async create(options = {}) :917 + async braid(strandId, options = {}) :952 + async get(strandId) :985 + async list() :1000 + async drop(strandId) :1024 + async materialize(strandId, options = {}) :1055 + async createPatchBuilder(strandId) :1076 + async patch(strandId, build) :1134 + async queueIntent(strandId, build) :1165 + async listIntents(strandId) :1207 + async tick(strandId) :1240 + async getPatchEntries(strandId, options = {}) :1505 + async patchesFor(strandId, entityId, options = {}) :1520 + async getOrThrow(strandId) :1545 + _buildRef(strandId) :1563 [private] + _buildOverlayRef(strandId) :1582 [private] + _buildBraidPrefix(strandId) :1601 [private] + _buildBraidRef(strandId, braidedStrandId) :1621 [private] + _readDescriptorByOid(oid, strandId) :1642 [private] + _writeDescriptor(descriptor) :1677 [private] + _loadBraidedReadOverlays(target, braidedStrandIds) :1693 [private] + _readOverlayMetadata(strandId) :1724 [private] + _hydrateOverlayMetadata(descriptor) :1744 [private] + _collectBasePatches(descriptor) :1781 [private] + _collectOverlayPatches(descriptor) :1813 [private] + _collectBraidedOverlayPatches(descriptor) :1827 [private] + _collectPatchEntries(descriptor, { ceiling }) :1850 [private] + _materializeDescriptor(descriptor, opts) :1881 [private] + _syncOverlayDescriptor(descriptor, { patch, sha }) :1936 [private] + _commitQueuedPatch(params) :1973 [private] + _syncBraidRefs(strandId, readOverlays) :2027 [private] +``` + +That is 35 lines. Not 2048. + +### `read_range(path, start, end)` + +**Input:** file path, start line (1-indexed), end line (inclusive), +optional `root` override. + +**Output:** raw content of the specified range with line numbers. 
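One way to implement the clamp, as a sketch — cap values mirror the defaults in this section, but the function and field names are illustrative, not the real `src/operations/range.js` API:

```javascript
// Sketch of the read_range clamp: line cap first, then byte cap.
const MAX_LINES = 250;
const MAX_BYTES = 20_000;

function clampRange(fileLines, start, end) {
  const requested = { start, end };
  let reason = null;
  let hi = Math.min(end, fileLines.length);
  if (hi - start + 1 > MAX_LINES) {
    hi = start + MAX_LINES - 1;
    reason = 'range_exceeds_max_lines';
  }
  let slice = fileLines.slice(start - 1, hi);
  // Byte cap applies after the line cap: drop trailing lines until it fits.
  while (Buffer.byteLength(slice.join('\n'), 'utf8') > MAX_BYTES) {
    slice.pop();
    reason = 'range_exceeds_max_bytes';
  }
  const returned = { start, end: start + slice.length - 1 };
  return {
    requested,
    returned,
    truncated: returned.end < requested.end,
    reason,
    content: slice.join('\n'),
  };
}
```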
+ +**Bounded.** The governor still governs: + +| Constraint | Default | +|---|---| +| Max line span | 250 lines | +| Max byte output | 20 KB | + +If the requested range exceeds either cap, the response is clipped +and metadata shows what happened: + +```json +{ + "path": "src/foo.js", + "requested": { "start": 1, "end": 800 }, + "returned": { "start": 1, "end": 250 }, + "truncated": true, + "reason": "range_exceeds_max_lines", + "content": "..." +} +``` + +Binary and build-path bans still apply. `read_range("foo.gif", 1, 10)` +is still refused. + +This is a scoped read, not a policy bypass. + +### `run_capture(cmd, tail?)` + +**Input:** +- `cmd` — shell command string +- `tail` — number of lines to return (default 60) +- `cwd` — working directory (default: project root) +- `timeout` — max seconds (default: 120) + +**Behavior:** +1. Execute `cmd` via the user's default shell +2. Tee full output (stdout + stderr merged) to a log file +3. Return last `tail` lines + the log file path +4. Return exit code + +**Execution contract:** + +| Setting | Value | +|---|---| +| Working directory | Project root (or explicit `cwd` param) | +| Environment | Inherited from parent process | +| Timeout | 120 seconds default (configurable via `timeout` param) | +| Max log size | 5 MB. If output exceeds this, the log is truncated from the head and the tail is preserved. | +| Nonzero exit | Not an error — return the exit code + tail normally. Tests fail; that's expected. | + +**Output shape:** +```json +{ + "exitCode": 1, + "tail": "... last 60 lines ...", + "logFile": ".graft/logs/capture-1712023456.log", + "totalLines": 342, + "truncated": true +} +``` + +Agent can `read_range` the log file if it needs more. + +### `state_save(content)` / `state_load()` + +**Input (save):** markdown string of session state. +**Output (load):** the saved content, or null if no state file. + +**Storage:** `.graft/WORKING_STATE.md` in the project root. 
+ +**Capped at 8 KB.** If content exceeds the cap, the save is +rejected with `reason: "state_exceeds_max_bytes"`. The agent must +be concise. This is a breadcrumb trail, not a second context +window. + +Recommended template (not enforced, but nudged in error messages): + +```markdown +# Task +# Current hypothesis +# Files touched +# Next 3 actions +# Open questions +``` + +The human can read it with `cat`. The agent can load it after +`/clear` and pick up where it left off. + +## `.graftignore` + +A gitignore-style file in the project root. Paths matching any +pattern are always refused by `safe_read` and `read_range`, with +`reason: "graftignore"`. + +```text +# Secrets +.env +.env.* +credentials.json +**/secrets/** + +# Large generated files +*.sql.dump +*.csv +data/ + +# Project-specific +src/generated/** +``` + +If `.graftignore` does not exist, only the built-in bans (binary +extensions, build paths, lockfiles, minified) apply. The file is +optional — graft works without it. + +Uses `.gitignore` glob syntax via `picomatch` (declared dependency, +not transitive — don't build product behavior on accidental dep +chains). + +## Project root + +All paths are resolved relative to the project root. Detection +order: + +1. Explicit `--root` flag (CLI) or `root` param (MCP) +2. Nearest ancestor directory containing `.git/` +3. Current working directory (fallback) + +**Rules:** +- Symlinks are resolved before path checks +- Paths that escape the project root are refused + (`reason: "path_escapes_root"`) +- Temp log files from `run_capture` live in `.graft/logs/` inside + the project root, not `/tmp/` +- `.graft/` should be added to `.gitignore` + +## Reason codes + +All policy decisions use machine-stable enum strings, not prose. 
+ +| Code | Trigger | +|---|---| +| `binary_extension` | File has banned extension | +| `generated_path` | Path matches build/generated pattern | +| `lockfile` | `package-lock.json`, `pnpm-lock.yaml`, `yarn.lock` | +| `minified` | `.min.js`, `.min.css` | +| `over_line_threshold` | Lines exceed safe_read threshold | +| `over_byte_threshold` | Bytes exceed safe_read threshold | +| `range_exceeds_max_lines` | read_range span too large | +| `range_exceeds_max_bytes` | read_range output too large | +| `state_exceeds_max_bytes` | state_save content too large | +| `path_escapes_root` | Path resolves outside project root | +| `secret_file` | Built-in secret ban (`.env`, `*.pem`, etc.) | +| `graftignore` | Path matches `.graftignore` pattern | +| `missing_file` | File does not exist | + +## Technology + +### Tree-sitter + +- `tree-sitter` npm package (native addon) +- `tree-sitter-javascript` grammar (covers JS + JSX) +- `tree-sitter-typescript` grammar (covers TS + TSX) +- Parses any file in single-digit ms +- No persistent process needed — parse on demand + +### MCP + +- `@modelcontextprotocol/sdk` for server implementation +- stdio transport (standard for all three LLM agents) +- One tool definition per command + +### Runtime + +- Node.js >= 20 (tree-sitter native addon) +- Zero config — no tsconfig, no build step, no daemon +- `pnpm` for package management + +### Install and binary names + +```bash +npm install -g @flyingrobots/graft +``` + +This installs two binaries: + +| Binary | Purpose | +|---|---| +| `graft` | Standalone CLI (`graft outline foo.js`) | +| `git-graft` | Git subcommand shim (`git graft outline foo.js`) | + +Git automatically finds `git-graft` on `$PATH` and exposes it as +`git graft`. Both binaries are the same entrypoint. 
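"Both binaries are the same entrypoint" could be declared through npm's `bin` map, which allows multiple command names to point at one file (a sketch; the actual manifest may differ):

```json
{
  "name": "@flyingrobots/graft",
  "bin": {
    "graft": "./bin/graft.js",
    "git-graft": "./bin/graft.js"
  }
}
```

`npm install -g` links both names onto `$PATH`, and Git then discovers `git-graft` as the `git graft` subcommand.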
+ +MCP server is started via: + +```bash +graft mcp +``` + +Claude Code config: + +```json +{ + "mcpServers": { + "graft": { + "command": "graft", + "args": ["mcp"] + } + } +} +``` + +## Project structure + +```text +graft/ + bin/ + graft.js CLI entry point + src/ + policy/ + rules.js Ban lists, thresholds + gate.js Decision engine + parser/ + index.js Tree-sitter init + grammar loading + javascript.js JS/TS/TSX outline extraction + operations/ + safe-read.js Policy-enforced read + outline.js Structural skeleton + range.js Bounded reads + capture.js Shell output tailing + state.js Session state save/load + hooks/ + gate.js Hook enforcement (Read gate, Bash gate) + mcp/ + server.js MCP server (stdio) + tools.js Tool definitions + handlers + format/ + text.js CLI text formatter + json.js JSON output formatter + metrics/ + logger.js NDJSON decision logger + stats.js Summary stats from log + test/ + fixtures/ + small.js Under both thresholds (pass-through) + large-class.js Over line threshold (outline) + wide-minified.js Under lines, over bytes (outline) + huge-file.js 300+ declarations (outline cap test) + plain-functions.js Top-level functions only + typescript.ts TS-specific constructs + binary.gif Binary refusal + vendor.min.js Minified refusal + broken-syntax.js Partial parse (missing braces) + secret.env Secret file refusal + generated/ Build path refusal + unit/ + policy.test.js Gate decisions + outline.test.js Structural extraction + safe-read.test.js Integration (policy + extraction) + capture.test.js Shell capture + state.test.js State save/load + integration/ + mcp.test.js MCP server round-trip + package.json + LICENSE Apache 2.0 + README.md +``` + +## Test strategy + +Tests are the spec. Playback questions map directly to test cases. 
+ +### Policy tests (`policy.test.js`) + +```text +safe_read("foo.gif") -> refused, reason: binary_extension +safe_read("node_modules/x.js") -> refused, reason: generated_path +safe_read("dist/bundle.js") -> refused, reason: generated_path +safe_read("package-lock.json") -> refused, reason: lockfile +safe_read("vendor.min.js") -> refused, reason: minified +safe_read("small.js") -> content (under both thresholds) +safe_read("large-class.js") -> outline (over line threshold) +safe_read("wide-minified.js") -> outline (under lines, over bytes) +safe_read("missing.js") -> error, reason: missing_file +safe_read("../../etc/passwd") -> refused, reason: path_escapes_root +safe_read("/tmp -> ../../etc") -> refused (symlink resolved, escapes root) +safe_read(".env") -> refused, reason: secret_file (built-in, no .graftignore needed) +safe_read(".env.production") -> refused, reason: secret_file +safe_read("deploy.pem") -> refused, reason: secret_file +safe_read("data/dump.csv") -> refused, reason: graftignore (with .graftignore) +safe_read(bigFile, intent="edit") -> outline (intent does NOT relax policy) +safe_read("missing.js") -> action: error, reason: missing_file + +read_range("foo.js", 1, 800) -> truncated to 250 lines +read_range("foo.js", 1, 100) -> exact range returned +read_range("foo.gif", 1, 10) -> refused, reason: binary_extension + +state_save("# short") -> saved +state_save("x".repeat(9000)) -> refused, reason: state_exceeds_max_bytes +state_load() after save -> returns saved content +state_load() with no prior save -> returns null +``` + +### Outline tests (`outline.test.js`) + +```text +outline("large-class.js") + -> has exports array + -> has classes array with members + -> members have name, kind, line, params + -> async methods marked async: true + -> private methods (leading _) marked private: true + -> no function bodies in output + -> line numbers are accurate (spot-check) + -> params truncated at 60 chars when long + -> total entries <= 200 + 
+outline("plain-functions.js") + -> has declarations array + -> each has name, kind, line, endLine + +outline("typescript.ts") + -> handles interfaces, type aliases, enums + -> handles decorated classes + +outline("huge-file-300-functions.js") + -> entries capped at 200 + -> tail elided with elidedCount: 100+ + +outline("broken-syntax.js") + -> partial: true + -> parseErrors array present + -> recovered symbols still included + -> still useful, not an error + +outline("large-class.js", { focus: "StrandService" }) + -> only StrandService members returned + -> other classes/functions excluded +``` + +### Capture tests (`capture.test.js`) + +```text +run_capture("echo hello", 10) + -> exitCode: 0 + -> tail contains "hello" + -> logFile exists on disk + -> logFile contains "hello" + +run_capture("seq 1 500", 5) + -> tail contains lines 496-500 + -> truncated: true + -> totalLines: 500 + -> logFile contains all 500 lines +``` + +### State tests (`state.test.js`) + +```text +state_save("# Working on X") + -> file exists at .graft/WORKING_STATE.md + -> content matches + +state_load() + -> returns saved content + +state_load() with no prior save + -> returns null +``` + +### MCP integration tests (`mcp.test.js`) + +```text +spawn MCP server via stdio + -> server lists all tools + -> safe_read call returns valid response + -> file_outline call returns valid response + -> run_capture call returns valid response + -> state_save + state_load round-trips +``` + +## Enforcement: hooks + +The MCP server is voluntary. The agent can still call native `Read` +and bypass the governor entirely. And it will — not maliciously, +just because `Read` is familiar and consequences are later. + +The research says it plainly: + +> Models often "agree and then ignore" instruction-only rules. +> Enforcement is stronger. + +So graft ships **two layers**: + +### Layer 1: MCP server (cross-LLM, voluntary) + +The tools described above. Works on Claude Code, Gemini CLI, +Codex CLI. 
Agent uses these instead of native Read/Bash. Relies on +project instructions (CLAUDE.md, GEMINI.md) to prefer graft tools. + +### Layer 2: Claude Code hooks (enforced) + +`PreToolUse` hooks intercept native tool calls and route them +through graft's policy gate. + +**Read hook:** + +When the agent calls native `Read`, the hook: + +1. Runs the path through graft's policy (binary? build? over + threshold?) +2. If policy says **content** (small, safe): allow the Read through + unchanged +3. If policy says **outline** (too large): block the Read, return + the outline as the tool result with next-step hints +4. If policy says **refused** (binary, build, lockfile): block the + Read, return the reason and metadata + +The agent never sees the raw 2000-line file. It gets the outline +and can follow up with `read_range` for specific sections. + +**Bash hook (test capture):** + +When the agent calls native `Bash` with a command matching known +test runners (`npm test`, `vitest`, `jest`, `cargo test`, `pytest`, +`make test`), the hook: + +1. Routes through `run_capture` instead +2. Tees full output to `.graft/logs/` +3. Returns only the tail + +The agent gets the test result without the full dump. The full +output is on disk if needed. + +**Hook configuration:** + +Graft ships a `graft hooks install` command that writes the hook +config. For Claude Code: + +```json +{ + "hooks": { + "PreToolUse": [ + { + "matcher": "Read", + "command": "graft gate read" + }, + { + "matcher": "Bash", + "command": "graft gate bash" + } + ] + } +} +``` + +The `graft gate` subcommands read the tool call input from stdin +(hook protocol), apply policy, and exit 0 (allow) or exit 2 +(block + replacement output). + +**Gemini/Codex:** No equivalent hook mechanism yet. Enforcement is +MCP-only + project instructions. When those agents add hooks, graft +adapts. + +## Additional commands + +### `graft doctor` + +Diagnostic command for debugging policy behavior. 
+ +```bash +git graft doctor +``` + +```text +project root: /Users/james/git/git-stunts/git-warp (.git detected) +line threshold: 150 +byte threshold: 12,000 +range max lines: 250 +range max bytes: 20,000 +state max bytes: 8,192 +log directory: .graft/logs/ (exists, 3 files, 42 KB) +state file: .graft/WORKING_STATE.md (exists, 1.2 KB) +parser: tree-sitter (javascript, typescript loaded) +node version: v22.3.0 +hooks installed: yes (Read gate, Bash gate) +.graftignore: present (7 patterns) +.gitignore: .graft/ present +``` + +Answers "why did my read get blocked?" before anyone has to ask. + +### `graft stats` + +Minimal decision metrics. Not a dashboard — a quick summary. + +```bash +git graft stats +``` + +```text +session decisions (since last clear): + content: 12 reads passed through + outline: 8 reads downgraded to outline + refused: 3 reads blocked (2 binary, 1 generated) + ranges: 5 bounded reads + captures: 4 shell captures (avg 47 tail lines) + +estimated bytes avoided: ~340 KB +``` + +Graft logs every decision to `.graft/metrics.jsonl` as append-only +NDJSON. One line per decision. This is how we prove graft works +when Blacklight re-analyzes post-deployment. + +```json +{"ts":"...","op":"safe_read","action":"outline","path":"StrandService.js","lines":2048,"bytes":68402,"reason":"over_line_threshold"} +{"ts":"...","op":"read_range","path":"StrandService.js","start":1240,"end":1271,"truncated":false} +{"ts":"...","op":"safe_read","action":"refused","path":"foo.gif","reason":"binary_extension"} +``` + +**Log retention:** +- `metrics.jsonl`: max 1 MB. When exceeded, oldest entries are + pruned (keep the tail). +- `.graft/logs/` (capture logs): max 10 MB total. Oldest logs + pruned first. Individual capture logs capped at 5 MB. +- `graft stats --since-clear` resets the metric window. + +`graft doctor` and `graft stats` both accept `--json` for machine +consumption. + +## Parse cache + +Tree-sitter is fast, but the MCP server lives for the session +duration. 
If the same file is outlined twice, cache the parse tree. + +`Map` — invalidated by mtime change. +In-memory only, no persistence. This matters because the agent will +outline a file, read a range, then outline again to re-orient. + +## Smart next-step hints + +When `safe_read` returns an outline, the `next` array references +specific symbols by name, not generic suggestions. If the outline +shows a class with 25 methods, the hints name the public methods +and their line ranges: + +```json +"next": [ + "read_range(path, 917, 950) — create()", + "read_range(path, 1240, 1271) — tick()", + "file_outline(path, { focus: 'StrandService' }) — just this class" +] +``` + +When `intent` mentions a symbol name and it appears in the outline, +that symbol's range is promoted to the first hint. + +## Estimated savings + +Every graft response that avoids returning raw content includes: + +```json +"savings": { "bytesAvoided": 68402 } +``` + +Not rigorous. Perfect for a README. Makes the value visible on +every call — agent and human both see "this outline saved 68 KB." diff --git a/docs/method/backlog/DX_api-examples-review-checklist.md b/docs/method/backlog/DX_api-examples-review-checklist.md new file mode 100644 index 00000000..6068e396 --- /dev/null +++ b/docs/method/backlog/DX_api-examples-review-checklist.md @@ -0,0 +1,11 @@ +# API Examples Review Checklist + +**Effort:** S + +## Problem + +Add to `CONTRIBUTING.md`: each `createPatch()`/`commit()` uses own builder, async methods `await`ed, examples copy-pasteable. 
+ +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_archived-doc-status-guardrail.md b/docs/method/backlog/DX_archived-doc-status-guardrail.md new file mode 100644 index 00000000..807835c7 --- /dev/null +++ b/docs/method/backlog/DX_archived-doc-status-guardrail.md @@ -0,0 +1,11 @@ +# Archived Doc Status Guardrail + +**Effort:** XS + +## Problem + +Add a docs checklist or automated check preventing time-sensitive branch-state wording such as `pending merge` from landing in archive/history docs like `docs/ROADMAP/COMPLETED.md`. + +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_batch-review-fix-commits.md b/docs/method/backlog/DX_batch-review-fix-commits.md new file mode 100644 index 00000000..aae9cd75 --- /dev/null +++ b/docs/method/backlog/DX_batch-review-fix-commits.md @@ -0,0 +1,12 @@ +# Batch Review Fix Commits + +**Effort:** XS + +## Problem + +Batch all review fixes into one commit before re-requesting CodeRabbit. Reduces duplicate findings across incremental pushes. 
+ +## Notes + +- Process improvement, not code change +- Low urgency diff --git a/BACKLOG/OG-018-browser-guide.md b/docs/method/backlog/DX_browser-guide.md similarity index 96% rename from BACKLOG/OG-018-browser-guide.md rename to docs/method/backlog/DX_browser-guide.md index 6e13e910..f63fba6e 100644 --- a/BACKLOG/OG-018-browser-guide.md +++ b/docs/method/backlog/DX_browser-guide.md @@ -1,6 +1,5 @@ -# OG-018 — Browser guide and storage adapter documentation +# Browser guide and storage adapter documentation -Status: QUEUED Legend: Observer Geometry diff --git a/docs/method/backlog/DX_consumer-test-type-import-coverage.md b/docs/method/backlog/DX_consumer-test-type-import-coverage.md new file mode 100644 index 00000000..a1ccd8b3 --- /dev/null +++ b/docs/method/backlog/DX_consumer-test-type-import-coverage.md @@ -0,0 +1,12 @@ +# Consumer Test Type-Only Import Coverage + +**Effort:** M + +## Problem + +Exercise all exported types beyond just declaring variables. Types like `OpOutcome`, `TraversalDirection`, `LogLevelValue` aren't tested at all. The consumer type test should verify all exported types are usable, not just importable. + +## Notes + +- File: `test/type-check/consumer.ts` +- Part of P3 Type Safety tier diff --git a/docs/method/backlog/DX_contributor-review-hygiene-guide.md b/docs/method/backlog/DX_contributor-review-hygiene-guide.md new file mode 100644 index 00000000..d0ec2a5b --- /dev/null +++ b/docs/method/backlog/DX_contributor-review-hygiene-guide.md @@ -0,0 +1,11 @@ +# Contributor Review-Loop Hygiene Guide + +**Effort:** S + +## Problem + +Add section to `CONTRIBUTING.md` covering commit sizing, CodeRabbit cooldown strategy, and when to request bot review. 
+ +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_deno-smoke-test.md b/docs/method/backlog/DX_deno-smoke-test.md new file mode 100644 index 00000000..a8ae37d2 --- /dev/null +++ b/docs/method/backlog/DX_deno-smoke-test.md @@ -0,0 +1,11 @@ +# Deno Smoke Test + +**Effort:** S + +## Problem + +`npm run test:deno:smoke` for fast local pre-push confidence without full Docker matrix. + +## Notes + +- Platform item diff --git a/docs/method/backlog/DX_docs-consistency-preflight.md b/docs/method/backlog/DX_docs-consistency-preflight.md new file mode 100644 index 00000000..71c29909 --- /dev/null +++ b/docs/method/backlog/DX_docs-consistency-preflight.md @@ -0,0 +1,11 @@ +# Docs Consistency Preflight + +**Effort:** S + +## Problem + +Automated pass in `release:preflight` verifying changelog/readme/guide updates for behavior changes in hot paths (materialize, checkpoint, sync). Prevents releasing behavior changes without corresponding documentation updates. + +## Notes + +- Part of P2 CI & Tooling batch diff --git a/docs/method/backlog/DX_docs-version-sync-precommit.md b/docs/method/backlog/DX_docs-version-sync-precommit.md new file mode 100644 index 00000000..049f9a03 --- /dev/null +++ b/docs/method/backlog/DX_docs-version-sync-precommit.md @@ -0,0 +1,11 @@ +# Docs-Version-Sync Pre-Commit Check + +**Effort:** S + +## Problem + +Grep version literals in `.md` files against `package.json` to catch stale version references before they're committed. + +## Notes + +- Part of P2 CI & Tooling batch diff --git a/docs/method/backlog/DX_jsr-publish-deno-panic.md b/docs/method/backlog/DX_jsr-publish-deno-panic.md new file mode 100644 index 00000000..08e1956d --- /dev/null +++ b/docs/method/backlog/DX_jsr-publish-deno-panic.md @@ -0,0 +1,11 @@ +# Fix JSR Publish Dry-Run Deno Panic + +**Effort:** M + +## Problem + +Deno 2.6.7 `deno_ast` panics on overlapping text changes from duplicate `roaring` import rewrites. 
Either pin Deno version, vendor the import, or file upstream issue and add workaround. + +## Notes + +- Promote if JSR publish becomes imminent diff --git a/docs/method/backlog/DX_pr-ready-merge-cli.md b/docs/method/backlog/DX_pr-ready-merge-cli.md new file mode 100644 index 00000000..975a6995 --- /dev/null +++ b/docs/method/backlog/DX_pr-ready-merge-cli.md @@ -0,0 +1,11 @@ +# `scripts/pr-ready` Merge-Readiness CLI + +**Effort:** M + +## Problem + +No single tool aggregates unresolved review threads, pending/failed checks, CodeRabbit status/cooldown, and human-review count into one deterministic verdict. A `scripts/pr-ready` CLI would provide a single go/no-go answer before attempting merge. + +## Notes + +- Part of P2 CI & Tooling batch diff --git a/BACKLOG/OG-011-public-api-catalog-and-playground.md b/docs/method/backlog/DX_public-api-catalog-playground.md similarity index 96% rename from BACKLOG/OG-011-public-api-catalog-and-playground.md rename to docs/method/backlog/DX_public-api-catalog-playground.md index d4fc479e..2ef37cb5 100644 --- a/BACKLOG/OG-011-public-api-catalog-and-playground.md +++ b/docs/method/backlog/DX_public-api-catalog-playground.md @@ -1,6 +1,5 @@ -# OG-011 — Public API Catalog And Browser Documentation Playground +# Public API Catalog And Browser Documentation Playground -Status: QUEUED ## Why diff --git a/docs/method/backlog/DX_pure-typescript-example-app.md b/docs/method/backlog/DX_pure-typescript-example-app.md new file mode 100644 index 00000000..0cc396bb --- /dev/null +++ b/docs/method/backlog/DX_pure-typescript-example-app.md @@ -0,0 +1,11 @@ +# Pure TypeScript Example App + +**Effort:** M + +## Problem + +CI compile-only stub (`tsc --noEmit` on minimal TS consumer) to verify the type declarations work end-to-end for TypeScript consumers. 
+ +## Notes + +- Part of P3 Type Safety tier diff --git a/docs/method/backlog/DX_readme-install-section.md b/docs/method/backlog/DX_readme-install-section.md new file mode 100644 index 00000000..dd66ec34 --- /dev/null +++ b/docs/method/backlog/DX_readme-install-section.md @@ -0,0 +1,11 @@ +# Docs: README Install Section + +**Effort:** S + +## Problem + +Quick Install section with Docker + native paths for the README. Current install instructions are incomplete. + +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_rfc-field-count-drift-detector.md b/docs/method/backlog/DX_rfc-field-count-drift-detector.md new file mode 100644 index 00000000..9d984c3c --- /dev/null +++ b/docs/method/backlog/DX_rfc-field-count-drift-detector.md @@ -0,0 +1,12 @@ +# RFC Field Count Drift Detector + +**Effort:** S + +## Problem + +Script that counts WarpGraph instance fields (grep `this._` in constructor) and warns if design RFC field counts diverge. Prevents stale numbers in `warpgraph-decomposition.md`. + +## Notes + +- Depends on `docs/design/warpgraph-decomposition.md` +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_security-sync-docs.md b/docs/method/backlog/DX_security-sync-docs.md new file mode 100644 index 00000000..8ebe70db --- /dev/null +++ b/docs/method/backlog/DX_security-sync-docs.md @@ -0,0 +1,11 @@ +# Docs: SECURITY_SYNC.md + +**Effort:** M + +## Problem + +Extract threat model from JSDoc into operator-facing documentation. The sync threat model is currently buried in code comments and not accessible to operators deploying git-warp sync. 
+ +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_test-file-wildcard-ratchet.md b/docs/method/backlog/DX_test-file-wildcard-ratchet.md new file mode 100644 index 00000000..e29316f6 --- /dev/null +++ b/docs/method/backlog/DX_test-file-wildcard-ratchet.md @@ -0,0 +1,12 @@ +# Test-File Wildcard Ratchet + +**Effort:** S + +## Problem + +`ts-policy-check.js` excludes test files entirely. Need to either add a separate ratchet with higher threshold or document exclusion as intentional. + +## Notes + +- File: `scripts/ts-policy-check.js` +- Part of P3 Type Safety tier diff --git a/docs/method/backlog/DX_typed-custom-zod-helper.md b/docs/method/backlog/DX_typed-custom-zod-helper.md new file mode 100644 index 00000000..6c77d21b --- /dev/null +++ b/docs/method/backlog/DX_typed-custom-zod-helper.md @@ -0,0 +1,11 @@ +# `typedCustom()` Zod Helper + +**Effort:** S + +## Problem + +`z.custom()` without a generic yields `unknown` in JS; a JSDoc-friendly wrapper (or `@typedef`-based pattern) would eliminate verbose `/** @type {z.ZodType} */ (z.custom(...))` casts across HttpSyncServer and future Zod schemas. + +## Notes + +- Part of P3 Type Safety tier diff --git a/docs/method/backlog/DX_vitest-runtime-excludes.md b/docs/method/backlog/DX_vitest-runtime-excludes.md new file mode 100644 index 00000000..d848f7f5 --- /dev/null +++ b/docs/method/backlog/DX_vitest-runtime-excludes.md @@ -0,0 +1,11 @@ +# Vitest Explicit Runtime Excludes + +**Effort:** S + +## Problem + +Prevent accidental local runs of Docker-only suites by adding explicit runtime excludes to the vitest configuration. 
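A minimal sketch of what that exclude wiring could look like — the suite globs here are assumptions, not the repo's actual layout:

```js
// Hypothetical Docker-only suite globs — adjust to the real layout.
const DOCKER_ONLY_SUITES = ['test/runtime-matrix/**', 'test/sync-docker/**'];

/**
 * Compute the vitest `test.exclude` list: Docker-only suites are
 * excluded unless the run opts in (e.g. inside the Docker matrix).
 */
function runtimeExcludes({ inDockerMatrix = false } = {}) {
  return inDockerMatrix ? [] : [...DOCKER_ONLY_SUITES];
}

// vitest.config.js would then do something like:
//   test: { exclude: runtimeExcludes({ inDockerMatrix: !!process.env.DOCKER_MATRIX }) }
```

Note that vitest replaces its default excludes (`node_modules`, `dist`, …) when `exclude` is set, so the defaults would need to be re-spread in alongside these globs.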
+ +## Notes + +- Part of P2 CI & Tooling batch diff --git a/docs/method/backlog/DX_warpgraph-constructor-lifecycle-docs.md b/docs/method/backlog/DX_warpgraph-constructor-lifecycle-docs.md new file mode 100644 index 00000000..dddbb52d --- /dev/null +++ b/docs/method/backlog/DX_warpgraph-constructor-lifecycle-docs.md @@ -0,0 +1,13 @@ +# WarpGraph Constructor Lifecycle Docs + +**Effort:** M + +## Problem + +Document cache invalidation strategy for 25 instance variables: which operations dirty which caches, which flush them. + +## Notes + +- File: `src/domain/WarpGraph.js:69-198` +- Depends on B143 RFC (exists at `docs/design/warpgraph-decomposition.md`) +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/DX_warpgraph-invisible-api-docs.md b/docs/method/backlog/DX_warpgraph-invisible-api-docs.md new file mode 100644 index 00000000..461070ee --- /dev/null +++ b/docs/method/backlog/DX_warpgraph-invisible-api-docs.md @@ -0,0 +1,12 @@ +# WarpGraph Invisible API Surface Docs + +**Effort:** M + +## Problem + +Add `// API Surface` block listing all 40+ dynamically wired methods with source module. Consider generating as a build step. The dynamically composed API surface is invisible to developers reading the source. + +## Notes + +- File: `src/domain/WarpGraph.js:451-478` +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/PERF_benchmark-budgets-ci-gate.md b/docs/method/backlog/PERF_benchmark-budgets-ci-gate.md new file mode 100644 index 00000000..dc9f9cef --- /dev/null +++ b/docs/method/backlog/PERF_benchmark-budgets-ci-gate.md @@ -0,0 +1,12 @@ +# Benchmark Budgets + CI Regression Gate + +**Effort:** L + +## Problem + +Define perf thresholds for eager post-commit and materialize hash cost; fail CI on agreed regression. Without budgets, performance regressions slip in undetected. 
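The core of such a gate is small; a sketch, where the budget numbers and operation names are illustrative placeholders rather than agreed thresholds:

```js
// Hypothetical budgets — the real numbers come out of the budget-setting cycle.
const BUDGETS_MS = {
  'materialize.hash': 50,
  'post-commit.eager': 120,
};

// Headroom so noisy CI runs don't flap; fail only beyond the tolerance.
const TOLERANCE = 1.1; // 10% over budget fails

/** Return a list of human-readable failures; empty means the gate passes. */
function checkBudgets(measurements) {
  const failures = [];
  for (const [op, budget] of Object.entries(BUDGETS_MS)) {
    const measured = measurements[op];
    if (measured !== undefined && measured > budget * TOLERANCE) {
      failures.push(`${op}: ${measured}ms > ${budget}ms budget`);
    }
  }
  return failures; // non-empty -> exit nonzero so CI fails
}
```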
+ +## Notes + +- Part of P2 CI & Tooling batch +- Largest remaining P2 item — may need to split out into its own PR diff --git a/BACKLOG/OG-013-out-of-core-materialization-and-streaming-reads.md b/docs/method/backlog/PERF_out-of-core-materialization.md similarity index 97% rename from BACKLOG/OG-013-out-of-core-materialization-and-streaming-reads.md rename to docs/method/backlog/PERF_out-of-core-materialization.md index 44ae1278..974d87ce 100644 --- a/BACKLOG/OG-013-out-of-core-materialization-and-streaming-reads.md +++ b/docs/method/backlog/PERF_out-of-core-materialization.md @@ -1,6 +1,5 @@ -# OG-013 — Out-of-core materialization and streaming reads +# Out-of-core materialization and streaming reads -Status: QUEUED ## Problem diff --git a/docs/method/backlog/TRUST_keystore-prevalidated-cache.md b/docs/method/backlog/TRUST_keystore-prevalidated-cache.md new file mode 100644 index 00000000..f21f4cb5 --- /dev/null +++ b/docs/method/backlog/TRUST_keystore-prevalidated-cache.md @@ -0,0 +1,11 @@ +# `TrustKeyStore` Pre-Validated Key Cache + +**Effort:** S + +## Problem + +Cache pre-validated keys in `TrustKeyStore` to avoid repeated validation on hot paths. + +## Notes + +- **Trigger:** Promote when `verifySignature` appears in any p95 flame graph above 5% of call time diff --git a/docs/method/backlog/TRUST_property-based-fuzz-test.md b/docs/method/backlog/TRUST_property-based-fuzz-test.md new file mode 100644 index 00000000..e5e111cc --- /dev/null +++ b/docs/method/backlog/TRUST_property-based-fuzz-test.md @@ -0,0 +1,11 @@ +# Doctor: Property-Based Fuzz Test + +**Effort:** M + +## Problem + +Property-based fuzz testing for the `doctor` / health check system. 
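A hand-rolled sketch of the idea (a real implementation would likely reach for a library such as fast-check); `runDoctor` and the report shape below are hypothetical stand-ins, not the actual doctor API:

```js
// Generate a random repo state; the fields are invented for illustration.
function randomRepoState(rng) {
  return {
    refCount: Math.floor(rng() * 100),
    danglingCheckpoints: rng() < 0.3,
  };
}

// Property: doctor never throws and always reports at least one check,
// whatever state it is handed.
function checkDoctorProperty(runDoctor, iterations = 200) {
  for (let i = 0; i < iterations; i++) {
    const state = randomRepoState(Math.random);
    const report = runDoctor(state); // must not throw for any input
    if (!Array.isArray(report.checks) || report.checks.length === 0) {
      return { ok: false, counterexample: state };
    }
  }
  return { ok: true };
}
```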
+ +## Notes + +- **Trigger:** Promote when doctor check count exceeds 8 diff --git a/docs/method/backlog/TRUST_record-round-trip-snapshot.md b/docs/method/backlog/TRUST_record-round-trip-snapshot.md new file mode 100644 index 00000000..fd7a1e39 --- /dev/null +++ b/docs/method/backlog/TRUST_record-round-trip-snapshot.md @@ -0,0 +1,11 @@ +# Trust Record Round-Trip Snapshot Test + +**Effort:** S + +## Problem + +Snapshot test verifying trust record round-trip serialization stability. + +## Notes + +- **Trigger:** Promote if trust record schema changes diff --git a/docs/method/backlog/TRUST_schema-discriminated-union.md b/docs/method/backlog/TRUST_schema-discriminated-union.md new file mode 100644 index 00000000..ba5475de --- /dev/null +++ b/docs/method/backlog/TRUST_schema-discriminated-union.md @@ -0,0 +1,11 @@ +# Trust Schema Discriminated Union + +**Effort:** S + +## Problem + +Refactor trust schema from `superRefine` to discriminated union for cleaner validation. + +## Notes + +- **Trigger:** Promote if superRefine causes a bug or blocks a feature diff --git a/docs/method/backlog/TRUST_unsigned-record-edge-cases.md b/docs/method/backlog/TRUST_unsigned-record-edge-cases.md new file mode 100644 index 00000000..b4fb8b8c --- /dev/null +++ b/docs/method/backlog/TRUST_unsigned-record-edge-cases.md @@ -0,0 +1,11 @@ +# `unsignedRecordForId` Edge-Case Tests + +**Effort:** S + +## Problem + +Additional edge-case test coverage for `unsignedRecordForId`. 
+ +## Notes + +- **Trigger:** Promote if canonical format changes diff --git a/docs/method/backlog/VIZ_mermaid-diagram-content-checklist.md b/docs/method/backlog/VIZ_mermaid-diagram-content-checklist.md new file mode 100644 index 00000000..4edb5f79 --- /dev/null +++ b/docs/method/backlog/VIZ_mermaid-diagram-content-checklist.md @@ -0,0 +1,11 @@ +# Mermaid Diagram Content Checklist + +**Effort:** XS + +## Problem + +For diagram migrations: count annotations in source/target, verify edge labels survive, check complexity annotations preserved. Prevents information loss when converting diagrams. + +## Notes + +- Low urgency — fold into PRs that touch related files diff --git a/docs/method/backlog/VIZ_mermaid-invisible-link-fragility.md b/docs/method/backlog/VIZ_mermaid-invisible-link-fragility.md new file mode 100644 index 00000000..4e794a10 --- /dev/null +++ b/docs/method/backlog/VIZ_mermaid-invisible-link-fragility.md @@ -0,0 +1,11 @@ +# Mermaid `~~~` Invisible-Link Fragility + +**Effort:** XS + +## Problem + +Undocumented Mermaid feature (`~~~`) used for positioning. Fragile and could break on renderer updates. + +## Notes + +- **Trigger:** Promote if Mermaid renderer update breaks `~~~` positioning diff --git a/docs/method/backlog/VIZ_mermaid-rendering-smoke-test.md b/docs/method/backlog/VIZ_mermaid-rendering-smoke-test.md new file mode 100644 index 00000000..a737072a --- /dev/null +++ b/docs/method/backlog/VIZ_mermaid-rendering-smoke-test.md @@ -0,0 +1,12 @@ +# Mermaid Rendering Smoke Test + +**Effort:** S + +## Problem + +Parse all ` ```mermaid ` blocks with `@mermaid-js/mermaid-cli` in CI to catch syntax errors in documentation diagrams before they reach users. 
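The extraction half of that smoke test could be as small as the sketch below; feeding each block to `mmdc` (mermaid-cli) is left out:

```js
// Built from repeat() to avoid embedding a literal fence in this document.
const FENCE = '`'.repeat(3);

/** Pull the body of every mermaid code fence out of a markdown string. */
function extractMermaidBlocks(markdown) {
  const blocks = [];
  let current = null;
  for (const line of markdown.split('\n')) {
    if (current === null && line.trim() === `${FENCE}mermaid`) {
      current = []; // opening fence — start collecting
    } else if (current !== null && line.trim() === FENCE) {
      blocks.push(current.join('\n')); // closing fence — emit block
      current = null;
    } else if (current !== null) {
      current.push(line);
    }
  }
  return blocks;
}
```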
+ +## Notes + +- Target: `.github/workflows/ci.yml` or `scripts/` +- Part of P2 CI & Tooling batch diff --git a/docs/method/backlog/asap/DX_agent-code-audit.md b/docs/method/backlog/asap/DX_agent-code-audit.md new file mode 100644 index 00000000..92cf5414 --- /dev/null +++ b/docs/method/backlog/asap/DX_agent-code-audit.md @@ -0,0 +1,12 @@ +# TSC Campaign Agent-Authored Code Audit + +**Effort:** L + +## Problem + +27 files were merged via `checkout --theirs` during worktree conflict resolution without line-by-line review. Tests pass, but test coverage does not guarantee absence of subtle semantic drift (e.g. changed fallback values, widened types, reordered logic). Audit every agent-authored file diff against the pre-campaign baseline. Revert anything suspicious. + +## Notes + +- Source: P1b priority tier (TSC Zero Campaign Drift Audit) +- High priority — should be audited before next release diff --git a/docs/method/backlog/asap/DX_max-file-size-policy.md b/docs/method/backlog/asap/DX_max-file-size-policy.md new file mode 100644 index 00000000..6c5c3bce --- /dev/null +++ b/docs/method/backlog/asap/DX_max-file-size-policy.md @@ -0,0 +1,70 @@ +# Enforce Max File Size + One-Thing-Per-File Policy + +**Effort:** L + +## Problem + +The codebase has files ranging up to 2,572 LOC (ConflictAnalyzerService) with the combined WarpRuntime + warp/ mixin surface at 6,613 LOC. Large files are hard to navigate, attract merge conflicts, and resist comprehension. The lack of a file-size ceiling means files grow silently until someone notices. + +More fundamentally, files frequently contain multiple exports that serve different purposes — helper functions, type definitions, constants, and the primary class all living in the same file. This violates the principle that a file should be about one thing. 
+ +## Policy + +### Max LOC + +Hard ceiling enforced by ESLint or a lint script: + +- **Source files (`src/`)**: 500 LOC max +- **Test files (`test/`)**: 800 LOC max (tests are inherently more verbose) +- **CLI commands (`bin/`)**: 300 LOC max +- **Scripts (`scripts/`)**: 300 LOC max + +Files over the limit must be split. The pre-commit or pre-push gate blocks violations. + +### One Thing Per File + +Each file exports **one primary thing** — a class, a function, a type, or a closely-related set of constants. If a file exports a class AND standalone helper functions that aren't private to that class, the helpers belong in their own module. + +Exceptions: +- Re-export barrels (`index.js`) are fine +- A function + its directly-related typedef is one thing +- A small set of related factory functions (e.g. `createNodeAdd`, `createEdgeAdd`) is one thing + +### Current Violators + +Files over 500 LOC that need splitting (source only): + +| File | LOC | What to split | +|---|---|---| +| ConflictAnalyzerService.js | 2,572 | 27 standalone helpers → separate module(s) | +| StrandService.js | 2,048 | 8 concerns → separate services (see B176) | +| GraphTraversal.js | 1,620 | Algorithm families could be separate files | +| PatchBuilderV2.js | 1,103 | Content ops, effect emission → extract | +| comparison.methods.js | 1,088 | Comparison helpers → separate modules | +| GitGraphAdapter.js | 1,036 | Already clean SRP, but could split by Git operation family | +| IncrementalIndexUpdater.js | 956 | Node/edge/prop update logic → separate strategies | +| query.methods.js | 906 | Query execution vs query building | +| QueryBuilder.js | 852 | Query DSL vs query execution | +| StreamingBitmapIndexBuilder.js | 835 | Build vs serialize | +| AuditVerifierService.js | 835 | Verification vs chain walking | +| InMemoryGraphAdapter.js | 815 | Already clean SRP | +| VisibleStateComparisonV5.js | 808 | Comparison algorithms | +| materializeAdvanced.methods.js | 716 | Advanced materialization paths 
| +| DagPathFinding.js | 705 | Path algorithms | +| WarpRuntime.js | 683 | See B176 | +| SyncController.js | 680 | Already extracted, near limit | + +## Implementation + +1. Add ESLint `max-lines` rule (already exists, just need to tighten the threshold) +2. Add the ceiling to `eslint.config.js` — 500 for src, 800 for test, 300 for bin/scripts +3. Existing violators get added to a temporary relaxation block (like the complexity relaxation) +4. Each file split is its own cycle — pull from backlog, split, verify tests, commit +5. Ratchet: the relaxation block must shrink over time, never grow + +## Notes + +- ESLint `max-lines` rule supports `skipBlankLines` and `skipComments` — use both for a fair count +- The `one thing per file` policy is harder to lint — enforce via code review and the bad_code.md journal +- GraphTraversal.js (1,620 LOC) was flagged as NOT a god object in the audit — single responsibility (algorithm library). The split here is by algorithm family, not by concern. Still worth doing for navigability. +- This policy should go in CONTRIBUTING.md and CLAUDE.md once agreed diff --git a/docs/method/backlog/asap/DX_trailer-codec-dts.md b/docs/method/backlog/asap/DX_trailer-codec-dts.md new file mode 100644 index 00000000..11101ba4 --- /dev/null +++ b/docs/method/backlog/asap/DX_trailer-codec-dts.md @@ -0,0 +1,12 @@ +# `@git-stunts/trailer-codec` Type Declarations + +**Effort:** M + +## Problem + +`getCodec()` in `MessageCodecInternal.js` returns an untyped `TrailerCodec`, forcing 6+ downstream files to cast through `unknown` intermediary. Root fix: add `index.d.ts` to the `@git-stunts/trailer-codec` package upstream. 
+ +## Notes + +- Source: P1b priority tier (TSC Zero Campaign Drift Audit) +- Fix is upstream in `@git-stunts/trailer-codec`, not in this repo diff --git a/docs/method/backlog/asap/NDNM_comparison-pipeline-class-hierarchy.md b/docs/method/backlog/asap/NDNM_comparison-pipeline-class-hierarchy.md new file mode 100644 index 00000000..0fb1d02e --- /dev/null +++ b/docs/method/backlog/asap/NDNM_comparison-pipeline-class-hierarchy.md @@ -0,0 +1,25 @@ +# Comparison pipeline: proper class hierarchy + +**Effort:** L + +## Problem + +ComparisonController's comparison pipeline uses `unknown` params, +validator functions, and string-switched dispatch that should be +class hierarchies with constructors. + +Partially addressed in this PR: +- NormalizedSelector → LiveSelector, CoordinateSelector, StrandSelector, + StrandBaseSelector subclasses (each implements resolve()) +- OpOutcomeResult → OpApplied, OpSuperseded, OpRedundant subclasses +- ResolvedComparisonSide class +- ComparisonSideResolver eliminated (selectors resolve themselves) + +Still needed: +- LamportCeiling value object (validates non-negative int in constructor) +- StrandId value object (validates non-empty string in constructor) +- WriterId value object (same pattern) +- Remove all `normalizeX(unknown)` validator functions — these become + constructors +- Remove all `assertX(unknown)` guard functions — same +- Replace `Record<string, unknown>` options bags with typed classes diff --git a/docs/method/backlog/asap/PROTO_effectsink-breaking-change.md b/docs/method/backlog/asap/PROTO_effectsink-breaking-change.md new file mode 100644 index 00000000..1795f569 --- /dev/null +++ b/docs/method/backlog/asap/PROTO_effectsink-breaking-change.md @@ -0,0 +1,12 @@ +# EffectSinkPort Breaking Change Hygiene + +**Effort:** S + +## Problem + +`EffectSinkPort.deliver()` return type was widened from `DeliveryObservation` to `DeliveryObservation | DeliveryObservation[]` in `index.d.ts`.
This is a breaking API surface change that shipped without a `BREAKING CHANGE` commit footer. Assess downstream impact and decide: (a) revert the widening and fix MultiplexSink to unwrap, or (b) accept it and document as a breaking change for the next major version. + +## Notes + +- Source: P1b priority tier (TSC Zero Campaign Drift Audit) +- High priority diff --git a/docs/method/backlog/asap/PROTO_typedef-orset-to-class.md b/docs/method/backlog/asap/PROTO_typedef-orset-to-class.md new file mode 100644 index 00000000..e487fd25 --- /dev/null +++ b/docs/method/backlog/asap/PROTO_typedef-orset-to-class.md @@ -0,0 +1,9 @@ +# Promote ORSet from @typedef to class + +**Effort:** M + +## Problem + +`src/domain/crdt/ORSet.js` defines `ORSet` as a `@typedef {Object}` but +it has 10+ functions operating on it (add, remove, join, compact, contains, +encode). This is a full CRDT data structure — should be a class. diff --git a/docs/method/backlog/asap/PROTO_typedef-patchv2-to-class.md b/docs/method/backlog/asap/PROTO_typedef-patchv2-to-class.md new file mode 100644 index 00000000..171715d2 --- /dev/null +++ b/docs/method/backlog/asap/PROTO_typedef-patchv2-to-class.md @@ -0,0 +1,9 @@ +# Promote PatchV2 from @typedef to class + +**Effort:** M + +## Problem + +`src/domain/types/WarpTypesV2.js` defines `PatchV2` as a `@typedef {Object}`. +Core domain entity — created by PatchBuilder, serialized to CBOR, consumed +by JoinReducer. Should be a class. diff --git a/docs/method/backlog/asap/PROTO_typedef-trustrecord-to-class.md b/docs/method/backlog/asap/PROTO_typedef-trustrecord-to-class.md new file mode 100644 index 00000000..45a6346a --- /dev/null +++ b/docs/method/backlog/asap/PROTO_typedef-trustrecord-to-class.md @@ -0,0 +1,44 @@ +# Promote TrustRecord from @typedef to class + +**Effort:** M (upgraded from S — root cause is deeper than the typedef) + +## Problem + +The entire trust pipeline operates on `Record<string, unknown>` — the +JavaScript equivalent of `any` in a trench coat.
Trust records are +CBOR-decoded to `unknown`, cast to `Record<string, unknown>`, and +passed through 20+ function signatures in that form across 5 files: + +- `TrustRecordService.js` — 10 occurrences +- `TrustCanonical.js` — 3 occurrences +- `TrustStateBuilder.js` — 1 occurrence +- `TrustEvaluator.js` — 1 occurrence +- `schemas.js` — 4 occurrences + +The `TrustRecord` typedef exists but is never enforced at the decode +boundary. Every consumer does bracket access and manual casting +because the type system says "bag of unknowns." + +## Root cause + +`codec.decode()` returns `unknown`. The trust pipeline casts to +`Record<string, unknown>` and never narrows further. The Zod schema +(`TrustRecordSchema`) validates the shape but doesn't produce a +typed output that propagates — the parse result is immediately +consumed and the validated shape is lost. + +## Fix + +1. Create `TrustRecord` class in `TrustStateBuilder.js` (or own file) +2. At the CBOR decode boundary in `TrustRecordService.js`, Zod-parse + then wrap: `new TrustRecord(parsed.data)` +3. Replace all `Record<string, unknown>` signatures downstream with + `TrustRecord` +4. `computeSignaturePayload`, `computeRecordId`, `verifyRecordId` in + `TrustCanonical.js` — accept `TrustRecord` instead of + `Record<string, unknown>` +5. `buildState` in `TrustStateBuilder.js` — accept `TrustRecord[]` +6. Schema validators in `schemas.js` — accept `TrustRecord` + +The class eliminates bracket access, manual casts, and the pretense +that we don't know what a trust record looks like.
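Steps 1–2 of that fix might look like the sketch below — the field names (`writerId`, `payload`, `sig`) are illustrative stand-ins, not the real schema:

```js
// Illustrative shape only; the real fields come from TrustRecordSchema.
class TrustRecord {
  constructor({ recordId, writerId, payload, sig }) {
    if (typeof recordId !== 'string' || recordId.length === 0) {
      throw new TypeError('TrustRecord: recordId must be a non-empty string');
    }
    this.recordId = recordId;
    this.writerId = writerId;
    this.payload = payload;
    this.sig = sig;
    Object.freeze(this); // decoded records are immutable
  }

  // Decode boundary: parse first, then wrap, so downstream signatures
  // can accept TrustRecord instead of a bag of unknowns.
  static fromDecoded(schema, decoded) {
    return new TrustRecord(schema.parse(decoded)); // parse throws on bad shape
  }
}
```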
diff --git a/docs/method/backlog/asap/PROTO_warpkernel-port-cleanup.md b/docs/method/backlog/asap/PROTO_warpkernel-port-cleanup.md new file mode 100644 index 00000000..4569aa22 --- /dev/null +++ b/docs/method/backlog/asap/PROTO_warpkernel-port-cleanup.md @@ -0,0 +1,27 @@ +# Cohesive WarpKernelPort (Persistence Union Type Cleanup) + +**Effort:** L + +## Problem + +The persistence dependency is expressed as an intersection of 4 fine-grained ports: + +```js +/** @typedef {CommitPort & BlobPort & TreePort & RefPort} CorePersistence */ +``` + +In practice, every concrete adapter (GitGraphAdapter, InMemoryGraphAdapter) implements all four simultaneously. Services that need persistence must type their parameters as this intersection, which is verbose and brittle. When a service needs a method that exists on the adapter but not on any individual port (e.g. `getConfig()`), developers historically resorted to `/** @type {any} */` casts to silence the compiler. + +The TSC zero campaign eliminated all 161 `any` casts, but the underlying problem remains: the port surface is too fragmented for the actual usage pattern. A single `WarpKernelPort` that composes the four sub-ports would: + +1. Give services a single, honest type to depend on +2. Eliminate the need for intersection type gymnastics in JSDoc +3. Make it possible to add persistence methods without updating 4 separate port files +4. 
Restore the ability to verify adapter completeness statically + +## Notes + +- The 4 sub-ports (CommitPort, BlobPort, TreePort, RefPort) can remain as building blocks — `WarpKernelPort extends CommitPort, BlobPort, TreePort, RefPort` (or intersection in JSDoc) +- GraphPersistencePort.js already exists as a 5th port with a different shape — reconcile or deprecate +- The 0 remaining `any` casts and 0 `TODO(ts-cleanup)` markers suggest the worst symptoms are fixed, but the root cause (fragmented port surface) still exists +- ConfigPort is a 6th port used by some adapters but not part of CorePersistence — decide if it belongs diff --git a/docs/method/backlog/asap/PROTO_warpruntime-god-class.md b/docs/method/backlog/asap/PROTO_warpruntime-god-class.md new file mode 100644 index 00000000..8df9ba25 --- /dev/null +++ b/docs/method/backlog/asap/PROTO_warpruntime-god-class.md @@ -0,0 +1,42 @@ +# WarpRuntime + warp/ Methods God Class Decomposition + +**Effort:** XL + +## Problem + +The WarpRuntime class (683 LOC) delegates to 12 method-mixin files in `src/domain/warp/` (5,930 LOC combined = 6,613 LOC total surface). While the SyncController has already been extracted, the class still orchestrates: + +- Git reference manipulation (via persistence port) +- Pathfinding queries (via GraphTraversal) +- Materialization and checkpoint management +- Temporal state routing (seek, coordinate, strand) +- Memory GC mechanisms +- Writer/PatchSession lifecycle +- Observer creation and management +- Subscription and watch APIs +- Effect pipeline integration +- Trust/audit integration +- Index management + +This coupling makes the class a merge conflict magnet and prevents new contributors from building a mental model. The `warp/` method mixins are wired dynamically via `Object.defineProperty` in `_wire.js`, which defeats static analysis and makes the API surface invisible without reading the wiring code. 
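For contrast with the `Object.defineProperty` wiring, statically analyzable delegation is plain code the type checker and IDE can follow — a sketch, with the service shapes and method lists assumed:

```js
// Thin-facade sketch: explicit, navigable delegation to injected services.
class WarpRuntimeFacade {
  #materialization;
  #query;

  constructor({ materialization, query }) {
    this.#materialization = materialization;
    this.#query = query;
  }

  materialize(opts) { return this.#materialization.materialize(opts); }
  checkpoint(opts) { return this.#materialization.checkpoint(opts); }
  getNodes(filter) { return this.#query.getNodes(filter); }
}
```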
+ +## Decomposition Direction + +The existing `warp/` method files already represent a partial decomposition — they just need to become proper service classes instead of method mixins bolted onto one god class. WarpRuntime becomes a thin facade delegating to: + +- `MaterializationService` (materialize, checkpoint, incremental) +- `QueryService` (query builder, getNodes, getNodeProps, getEdges) +- `TraversalService` (path, traversal algorithms) +- `TemporalService` (seek, coordinate, strand routing) +- `SubscriptionService` (watch, subscribe, diff streaming) +- `WriterService` (writer lifecycle, patch sessions) + +Each already exists as a `warp/*.methods.js` file — the refactor is promoting them from mixins to injected services. + +## Notes + +- Existing RFC: B143 (WarpGraph decomposition design) — check if still current +- SyncController already extracted (M10 era) — good precedent +- The `_wire.js` dynamic wiring must go — it defeats type checking and IDE navigation +- 16 ports already exist — the port surface is fine, it's the orchestrator that's too fat +- `CorePersistence` typedef (`CommitPort & BlobPort & TreePort & RefPort`) is the right pattern — intersection types over a single fat interface diff --git a/docs/method/backlog/asap/TRUST_sync-auth-ed25519.md b/docs/method/backlog/asap/TRUST_sync-auth-ed25519.md new file mode 100644 index 00000000..17fd5066 --- /dev/null +++ b/docs/method/backlog/asap/TRUST_sync-auth-ed25519.md @@ -0,0 +1,33 @@ +# Sync Auth: Migrate from Symmetric HMAC to Ed25519 Asymmetric Signatures + +**Effort:** L + +## Problem + +`SyncAuthService.js` uses HMAC-SHA256 with a shared secret for sync request authentication. The nonce-reservation system (UUID + 5-minute clock-skew window + LRU cache) effectively prevents replay attacks, but the underlying cryptographic model is symmetric — all authorized nodes hold the same secret key. + +In a multi-writer network, this means: + +1. 
**Single point of compromise** — one node's key leak exposes the entire network +2. **No attribution** — HMAC proves the sender knows the secret, not *which* sender it is +3. **Key distribution problem** — adding a new writer requires secure distribution of the shared secret to all existing nodes +4. **No revocation** — revoking one writer's access means rotating the secret for everyone + +## Fix + +Migrate to Ed25519 asymmetric signatures: + +- Each writer holds a private key; the network knows their public key +- Sync requests are signed with the sender's private key, verified with their public key +- Compromising one node exposes only that node's private key — blast radius is localized +- Writer revocation = remove their public key from the trust set, no secret rotation needed +- Attribution is inherent — the signature proves which specific writer sent the request + +## Notes + +- The trust subsystem already has key management infrastructure (`TrustRecordService`, `TrustEvaluator`, `TrustKeyStore`) — the sync auth migration should build on this, not create a parallel key system +- `@git-stunts/vault` handles OS-native keychain storage — private keys should go through Vault, not `.env` files +- The nonce-reservation + clock-skew mechanism is sound and should be preserved regardless of signature scheme +- Wire format change: sync request headers will carry a signature + public key ID instead of an HMAC tag. This is a breaking protocol change — needs versioned negotiation or a migration window where both are accepted +- `WebCryptoAdapter` already exists for multi-runtime crypto — Ed25519 is available via `crypto.subtle` in Node 20+, Bun, and Deno +- Consider: should the public key set be stored in the graph itself (as trust records) or out-of-band? 
The trust subsystem already stores writer trust assessments in-graph — public keys could follow the same pattern diff --git a/docs/method/backlog/bad-code/DX_exact-optional-conditional-spread.md b/docs/method/backlog/bad-code/DX_exact-optional-conditional-spread.md new file mode 100644 index 00000000..c570c0d8 --- /dev/null +++ b/docs/method/backlog/bad-code/DX_exact-optional-conditional-spread.md @@ -0,0 +1,15 @@ +# exactOptionalPropertyTypes conditional spread boilerplate + +**Effort:** M + +## Problem + +`exactOptionalPropertyTypes: true` means you can't pass +`{ key: undefined }` to a function expecting `{ key?: T }`. The fix +is conditional spread: `...(x !== undefined ? { key: x } : {})`. +This is correct but verbose. ~30 call sites across `WarpRuntime.js`, +`SyncController.js`, `WormholeService.js`, `StrandService.js`, and +others. + +A shared `omitUndefined()` utility could DRY it up, but premature +until the pattern stabilizes. diff --git a/docs/method/backlog/bad-code/DX_trailer-codec-type-poison.md b/docs/method/backlog/bad-code/DX_trailer-codec-type-poison.md new file mode 100644 index 00000000..c9c4201e --- /dev/null +++ b/docs/method/backlog/bad-code/DX_trailer-codec-type-poison.md @@ -0,0 +1,18 @@ +# @git-stunts/trailer-codec type poison at the boundary + +**Effort:** M + +## Problem + +`MessageCodecInternal.js` `getCodec()` returns an untyped +`TrailerCodec` from `@git-stunts/trailer-codec` (no `.d.ts`). Every +consumer must cast through `unknown` intermediary. Six files carry +this workaround: `AnchorMessageCodec`, `AuditMessageCodec`, +`CheckpointMessageCodec`, `PatchMessageCodec`, `SyncPayloadSchema`, +and any future codec consumer. + +## Fix + +Add `trailer-codec/index.d.ts` upstream so the return type flows +naturally. This is the same root cause as +`DX_trailer-codec-dts.md` in asap/ — fixing that fixes this. 
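The `omitUndefined()` utility that the exactOptionalPropertyTypes item above defers could be as small as:

```js
// Drop keys whose value is undefined so the result satisfies
// `{ key?: T }` under exactOptionalPropertyTypes. Keeps null.
function omitUndefined(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value !== undefined) out[key] = value;
  }
  return out;
}

// Call sites shrink from `...(x !== undefined ? { key: x } : {})` to:
//   fn(omitUndefined({ key: x, other: y }))
```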
diff --git a/docs/method/backlog/bad-code/DX_warpcore-jsdoc-block-style.md b/docs/method/backlog/bad-code/DX_warpcore-jsdoc-block-style.md new file mode 100644 index 00000000..ed341363 --- /dev/null +++ b/docs/method/backlog/bad-code/DX_warpcore-jsdoc-block-style.md @@ -0,0 +1,9 @@ +# WarpCore content methods use single-line JSDoc + +**Effort:** XS + +## Problem + +`WarpCore.js` lines 162-184 collapse content method JSDoc into +single-line comments. Functional but inconsistent with the block +JSDoc style used everywhere else in the codebase. diff --git a/docs/method/backlog/bad-code/DX_warpruntime-delegation-dry.md b/docs/method/backlog/bad-code/DX_warpruntime-delegation-dry.md new file mode 100644 index 00000000..f30ce32a --- /dev/null +++ b/docs/method/backlog/bad-code/DX_warpruntime-delegation-dry.md @@ -0,0 +1,16 @@ +# DRY up WarpRuntime delegation boilerplate + +**Effort:** XS + +## Problem + +`WarpRuntime.js` lines 646-813 repeat the same `Object.defineProperty` +delegation loop 7 times (StrandController, QueryController, +ForkController, ProvenanceController, SubscriptionController, +ComparisonController, SyncController). Each loop is identical except +for the controller field name and method list. + +## Fix + +Extract a helper: `delegateToController(Class, controllerField, methods)`. +One call per controller, zero boilerplate. diff --git a/docs/method/backlog/bad-code/PERF_toposort-full-adjacency.md b/docs/method/backlog/bad-code/PERF_toposort-full-adjacency.md new file mode 100644 index 00000000..e385ca7f --- /dev/null +++ b/docs/method/backlog/bad-code/PERF_toposort-full-adjacency.md @@ -0,0 +1,23 @@ +# topologicalSort always materializes full adjacency + +**Effort:** M + +## Problem + +`GraphTraversal.js` `topologicalSort()` (~line 693) unconditionally +builds `adjList: Map` AND +`neighborEdgeMap: Map` for every reachable +node. Both structures hold the full edge set in memory (O(V+E)). 
The +`_returnAdjList` flag only controls whether `neighborEdgeMap` is +*returned* — it's always *built*. + +For callers that only need the sorted order, this is wasted memory. +Root cause behind `levels()` and `transitiveReduction()` inheriting +full-graph materialization from their `topologicalSort()` call. + +## Possible fix + +Split topo sort into two modes: lightweight (in-degree counting only, +no adj list caching) and current mode (full caching for callers that +need it). Or: make the Kahn phase re-fetch from provider, relying on +LRU neighbor cache for amortization. diff --git a/docs/method/backlog/bad-code/PROTO_patchbuilder-12-param-constructor.md b/docs/method/backlog/bad-code/PROTO_patchbuilder-12-param-constructor.md new file mode 100644 index 00000000..ee5174e2 --- /dev/null +++ b/docs/method/backlog/bad-code/PROTO_patchbuilder-12-param-constructor.md @@ -0,0 +1,18 @@ +# PatchBuilderV2 12-parameter constructor + +**Effort:** M + +## Problem + +`PatchBuilderV2` constructor accepts 12+ parameters including +`persistence`, `graphName`, `writerId`, `lamport`, `versionVector`, +`getCurrentState`, `expectedParentSha`, `targetRefPath`, +`onCommitSuccess`, `onDeleteWithData`, `codec`, `logger`, +`blobStorage`, `patchBlobStorage`. This is a configuration object, +not dependency injection — most params are runtime state, not +services. + +## Possible fix + +Split into a `PatchBuilderConfig` value object for static config and +pass mutable state separately. 
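One hedged shape for the config/state split — `PatchBuilderConfig` is the name proposed above; the field list and validation are illustrative, not the actual constructor contract:

```javascript
// Hypothetical PatchBuilderConfig: static services and identity live in
// a frozen value object; mutable per-patch state stays out of it.
class PatchBuilderConfig {
  constructor({ persistence, graphName, writerId, codec, logger }) {
    if (!graphName) throw new TypeError('graphName is required');
    this.persistence = persistence;
    this.graphName = graphName;
    this.writerId = writerId;
    this.codec = codec;
    this.logger = logger;
    Object.freeze(this);
  }
}

// Runtime state then travels per call instead of per constructor, e.g.:
//   new PatchBuilderV2(config, { lamport, versionVector, expectedParentSha })
const config = new PatchBuilderConfig({ graphName: 'main', writerId: 'w1' });
```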
diff --git a/docs/method/backlog/bad-code/PROTO_strand-service-god-object.md b/docs/method/backlog/bad-code/PROTO_strand-service-god-object.md new file mode 100644 index 00000000..6f5898dc --- /dev/null +++ b/docs/method/backlog/bad-code/PROTO_strand-service-god-object.md @@ -0,0 +1,21 @@ +# StrandService is a god object (2048 LOC) + +**Effort:** L + +## Problem + +StrandService handles: strand CRUD, strand materialization, strand +patching, intent queuing/dequeuing, strand transfer planning, strand +braiding, strand overlays, strand comparison, and descriptor +serialization. It owns 40+ methods across ~2048 lines. + +## Decomposition candidates + +- `StrandMaterializationService` — materialize/compare/snapshot +- `StrandIntentService` — intent queue (queue, dequeue, tick, drain) +- `StrandBraidService` — braid overlay pinning and resolution +- `StrandTransferService` — transfer plan computation +- `StrandDescriptorCodec` — serialization/deserialization + +Each sub-service takes the same `graph` + `persistence` deps. +StrandService becomes a thin facade. diff --git a/docs/method/backlog/bad-code/PROTO_typedef-statediffresult-to-class.md b/docs/method/backlog/bad-code/PROTO_typedef-statediffresult-to-class.md new file mode 100644 index 00000000..15e55c12 --- /dev/null +++ b/docs/method/backlog/bad-code/PROTO_typedef-statediffresult-to-class.md @@ -0,0 +1,9 @@ +# Promote StateDiffResult from @typedef to class + +**Effort:** S + +## Problem + +`src/domain/services/StateDiff.js` defines `StateDiffResult` as a +`@typedef {Object}`. Computed diffs pushed to subscribers via +`graph.subscribe()`. Should be a class. 
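A hedged sketch of the typedef-to-class promotion — the field names (`added`, `removed`, `changed`) are assumptions about the diff shape, not the actual typedef:

```javascript
// Hypothetical StateDiffResult as a class: subscribers receive a real
// runtime type (`instanceof` works) instead of an anonymous object.
class StateDiffResult {
  constructor({ added = [], removed = [], changed = [] } = {}) {
    this.added = added;
    this.removed = removed;
    this.changed = changed;
    Object.freeze(this);
  }

  get isEmpty() {
    return (
      this.added.length === 0 &&
      this.removed.length === 0 &&
      this.changed.length === 0
    );
  }
}

const diff = new StateDiffResult({ added: ['node:a'] });
```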
diff --git a/docs/method/backlog/bad-code/PROTO_warpserve-domain-infra-blur.md b/docs/method/backlog/bad-code/PROTO_warpserve-domain-infra-blur.md new file mode 100644 index 00000000..f8b8fdcb --- /dev/null +++ b/docs/method/backlog/bad-code/PROTO_warpserve-domain-infra-blur.md @@ -0,0 +1,16 @@ +# WarpServeService domain/infra boundary blur + +**Effort:** S + +## Problem + +`WarpServeService` lives in `src/domain/services/` but requires a +`WebSocketServerPort` — a port whose only implementations are +infrastructure adapters. The service orchestrates WebSocket protocol +handling which is domain logic, but its constructor requires I/O +infrastructure to function. This blurs the hexagonal boundary and +makes unit testing harder. + +Not a bug. Acceptable today. If more I/O-dependent services emerge, +consider an "application services" layer between domain and +infrastructure. diff --git a/docs/method/backlog/cool-ideas/DX_cross-path-equivalence-test-dsl.md b/docs/method/backlog/cool-ideas/DX_cross-path-equivalence-test-dsl.md new file mode 100644 index 00000000..e2a65cd8 --- /dev/null +++ b/docs/method/backlog/cool-ideas/DX_cross-path-equivalence-test-dsl.md @@ -0,0 +1,12 @@ +# Cross-path equivalence as a general testing pattern + +The JoinReducer `pathEquivalence.test.js` applies the same input +through N code paths and asserts identical output. This generalizes: + +- Serialization round-trips +- Checkpoint save/restore vs fresh materialize +- Sync request/response vs local materialize +- Incremental vs full reduce + +A possible test DSL is: +`assertPathEquivalence(input, [pathA, pathB, pathC], comparator)`. diff --git a/docs/method/backlog/cool-ideas/DX_graft-cool-ideas.md b/docs/method/backlog/cool-ideas/DX_graft-cool-ideas.md new file mode 100644 index 00000000..ac63e581 --- /dev/null +++ b/docs/method/backlog/cool-ideas/DX_graft-cool-ideas.md @@ -0,0 +1,40 @@ +# Graft cool ideas (post-Phase 1) + +Ideas surfaced during the design review. Not Phase 1 scope. 
+ +## Commands + +- **graft pack** — one-shot handoff bundle: WORKING_STATE.md, top + touched files, last 10 decisions, recommended next reads. Great + for `/clear`, bug reports, "what was I doing yesterday?" +- **graft since ``** — symbols changed since HEAD~1, main, + or a specific commit. The Git/WARP bridge starts here. +- **graft explain ``** — built-in help for machine + codes (`graft explain over_byte_threshold`) +- **graft init** — scaffolds `.graftignore`, `.gitignore` update, + CLAUDE/GEMINI/Codex instruction snippets, optional hook install + +## Features + +- **focus: "auto"** — if intent mentions a symbol name, auto-promote + it in next hints and optionally return focused outline first +- **capture_range(handle, start, end)** — opaque log handles instead + of path-based artifacts. Cleaner, harder to misuse. +- **policy profiles** — `balanced`, `strict`, `feral`. Yes, feral + is ridiculous. Yes, people will use it immediately. +- **receipt mode** — every decision emits a compact receipt blob for + Blacklight: what was requested, returned, why, bytes avoided, what + the agent did next +- **symbol heatmap** — after enough metrics, show which files/symbols + most often trigger outlines, bounded reads, re-orientation. Gold + for Phase 2 prioritization. + +## The line to WARP + +- **graft changed-since-last-read** — the doorway. Not Phase 1. + This is where graft stops being a governor and starts being a + substrate. 
+- Graft = governor at the edge (what context is allowed) +- WARP = memory underneath (structural truth over time) +- The mutation happens when "current file shape" stops being enough + and you need observer-relative structural history as a primitive diff --git a/docs/method/backlog/cool-ideas/DX_tsc-autofix-tool.md b/docs/method/backlog/cool-ideas/DX_tsc-autofix-tool.md new file mode 100644 index 00000000..2c31256f --- /dev/null +++ b/docs/method/backlog/cool-ideas/DX_tsc-autofix-tool.md @@ -0,0 +1,13 @@ +# Mechanical tsc autofix tool + +The TS4111 fixer script from the TSC zero campaign generalized well. +A `tsc-autofix` CLI that reads `tsc --noEmit` output (tsc writes +diagnostics to stdout), classifies errors by fixability, and applies +mechanical fixes: + +- TS4111 (bracket access): `.prop` -> `['prop']` +- TS6133 (unused vars/imports): delete the declaration +- TS2464 (computed property): wrap in cast +- TS2532/TS18048 (possibly undefined): suggest `?? defaultValue` + +Non-mechanical errors (TS2345, TS2322) left as report. Could live in +`scripts/` or become a `@git-stunts` tool. diff --git a/docs/method/backlog/cool-ideas/PERF_encrypted-stores-fixed-chunking.md b/docs/method/backlog/cool-ideas/PERF_encrypted-stores-fixed-chunking.md new file mode 100644 index 00000000..511950d9 --- /dev/null +++ b/docs/method/backlog/cool-ideas/PERF_encrypted-stores-fixed-chunking.md @@ -0,0 +1,8 @@ +# Switch encrypted stores to fixed chunking + +Both `CasSeekCacheAdapter` and `CasBlobAdapter` use +`{ strategy: 'cdc' }` unconditionally. Ciphertext is pseudorandom, +so CDC boundaries provide no dedup benefit. The adapter could check +`_encryptionKey` at init and pick `fixed` vs `cdc` accordingly — +suppressing the git-cas runtime warning and saving rolling hash +overhead.
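The strategy check described in the fixed-chunking item could be as small as this — a sketch assuming the adapter sees its `_encryptionKey` at init; the option shape mirrors the `{ strategy: 'cdc' }` literal already in use:

```javascript
// Hypothetical strategy pick: ciphertext is pseudorandom, so
// content-defined chunking finds no stable dedup boundaries — fall
// back to fixed chunking whenever an encryption key is present.
function chunkingOptionsFor(encryptionKey) {
  return encryptionKey ? { strategy: 'fixed' } : { strategy: 'cdc' };
}
```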
diff --git a/docs/method/backlog/cool-ideas/PERF_native-vs-wasm-roaring-benchmark.md b/docs/method/backlog/cool-ideas/PERF_native-vs-wasm-roaring-benchmark.md new file mode 100644 index 00000000..8ac9e888 --- /dev/null +++ b/docs/method/backlog/cool-ideas/PERF_native-vs-wasm-roaring-benchmark.md @@ -0,0 +1,11 @@ +# Native vs WASM Roaring Benchmark Pack + +**Effort:** M + +## Problem + +Design hot-path workload simulations from actual bitmap usage in `BitmapIndexBuilder`, `LogicalBitmapIndexBuilder`, `StreamingBitmapIndexBuilder`, `BitmapIndexReader`, and `IncrementalIndexUpdater`; benchmark native `roaring` under heavy-load scenarios, rerun the same workloads with `roaring-wasm`, and publish a decision memo with throughput/latency deltas and operational recommendations. + +## Notes + +- Part of P4 Large-Graph Performance tier diff --git a/docs/method/backlog/cool-ideas/PERF_restore-buffer-guard.md b/docs/method/backlog/cool-ideas/PERF_restore-buffer-guard.md new file mode 100644 index 00000000..aea76070 --- /dev/null +++ b/docs/method/backlog/cool-ideas/PERF_restore-buffer-guard.md @@ -0,0 +1,6 @@ +# Restore buffer guard for seek cache + blob adapter + +git-cas 5.3.0 added `maxRestoreBufferSize` (default 512 MiB). +Neither `CasSeekCacheAdapter` nor `CasBlobAdapter` passes this +option. A tighter limit (64 MiB for blobs, 32 MiB for seek cache) +would fail fast instead of OOM. diff --git a/docs/method/backlog/cool-ideas/PERF_streaming-graph-traversal.md b/docs/method/backlog/cool-ideas/PERF_streaming-graph-traversal.md new file mode 100644 index 00000000..7661b570 --- /dev/null +++ b/docs/method/backlog/cool-ideas/PERF_streaming-graph-traversal.md @@ -0,0 +1,16 @@ +# Streaming GraphTraversal — async generators + +Every traversal algorithm could offer a streaming variant: +`bfs()` -> `bfs*()`, `dfs()` -> `dfs*()`, etc. The current API +collects results into arrays, forcing O(V) memory even when the +caller only needs the first match, a count, or a pipeline. 
+ +`AsyncGenerator` return type lets callers break early, +compose with other iterables, or pipe into backpressure-aware sinks. +The array-returning methods become sugar. + +The tricky part is stats: can't return `{ nodes, stats }` from a +generator. Options: stats callback in hooks, generator `.return()` +value, or separate `statsForLastRun()` accessor. + +Start with `transitiveClosure` as proof-of-concept, then generalize. diff --git a/docs/method/backlog/cool-ideas/PROTO_encrypted-trailer-rename.md b/docs/method/backlog/cool-ideas/PROTO_encrypted-trailer-rename.md new file mode 100644 index 00000000..17e740b9 --- /dev/null +++ b/docs/method/backlog/cool-ideas/PROTO_encrypted-trailer-rename.md @@ -0,0 +1,7 @@ +# Rename `encrypted` trailer to `eg-encrypted` + +The Git commit trailer key `encrypted` should be namespaced to avoid +collisions. But renaming is a wire format change — existing commits +use `encrypted`. Needs the same ADR + migration approach as edge +property ops: keep reading old key, start writing new one, eventually +stop reading old. Breaking change, major version bump. diff --git a/docs/method/backlog/cool-ideas/PROTO_safe-context-warp-provenance-layer.md b/docs/method/backlog/cool-ideas/PROTO_safe-context-warp-provenance-layer.md new file mode 100644 index 00000000..1ffe2d3a --- /dev/null +++ b/docs/method/backlog/cool-ideas/PROTO_safe-context-warp-provenance-layer.md @@ -0,0 +1,45 @@ +# WARP provenance layer for safe-context + +Tree-sitter is the parser. WARP graphs are the memory of structural +truth over time. + +## The insight + +Line numbers are trash — they drift constantly. A provenance-aware +structure can track symbol lineage across edits: + +- "this is the same symbol, just transformed" +- "what changed since I last observed this file?" 
+- "read only the delta, in the smallest meaningful unit" + +## What WARP models here + +- File revision worldlines +- Symbol identity across revisions (stable even when lines drift) +- Structural rewrite events (method moved, param list changed, + export surface widened — not line diffs) +- Agent observations of symbols +- Tool outputs as witnesses + +## Concrete features this unlocks + +- `since_last_read` — symbols changed since last observation +- `symbol_diff` — structural delta between worldlines +- `hot_regions` — symbols that churn most under edit-test loops +- `structural_checkpoint` — working state as touched symbol lineage +- Observer-relative views — human sees public API changes, agent + sees exact changed symbols + dependencies + +## The ramp + +1. MVP (Phase 1): tree-sitter, no WARP. Ship safe-context. +2. Provenance model: file version, symbol identities, ranges, + hashes, parent container, export status, observation timestamps. +3. Full: worldlines, structural deltas, observer-relative views. + +## Key distinction + +Track symbol lineage and structural deltas, not raw AST tombstones. +Current parse tree is ephemeral. Structural entities matter. +Provenance of those entities matters. Replayable transformations +matter. diff --git a/docs/method/backlog/cool-ideas/PROTO_writer-isolated-bisect.md b/docs/method/backlog/cool-ideas/PROTO_writer-isolated-bisect.md new file mode 100644 index 00000000..a7f3b07a --- /dev/null +++ b/docs/method/backlog/cool-ideas/PROTO_writer-isolated-bisect.md @@ -0,0 +1,10 @@ +# Writer-isolated bisect mode + +A `--isolated` flag on bisect that materializes only the target +writer's patches up to a given point, ignoring other writers +entirely. Useful for debugging single-writer regressions without +cross-writer interference. Trade-off: faster materialization but +may miss interaction bugs. + +If pursued: add `materializeForWriter(writerId, ceiling)` to +WarpGraph, wire `--isolated` flag in bisect CLI. 
diff --git a/docs/method/backlog/cool-ideas/TRUST_per-writer-kek-wrapping.md b/docs/method/backlog/cool-ideas/TRUST_per-writer-kek-wrapping.md new file mode 100644 index 00000000..da0be1b2 --- /dev/null +++ b/docs/method/backlog/cool-ideas/TRUST_per-writer-kek-wrapping.md @@ -0,0 +1,8 @@ +# Per-writer key envelope encryption (KEK wrapping) + +Each writer gets their own DEK wrapped by a shared KEK. git-cas +already supports envelope encryption — the DEK/KEK split could be +wired at the `CasBlobAdapter` level, with writer ID selecting which +wrapped DEK to use. Lets you revoke a single writer's access by +re-wrapping without re-encrypting all data. Pairs with +`@git-stunts/vault` for KEK storage. diff --git a/docs/method/backlog/cool-ideas/VIZ_graph-diff-transitive-reduction.md b/docs/method/backlog/cool-ideas/VIZ_graph-diff-transitive-reduction.md new file mode 100644 index 00000000..9afbf07e --- /dev/null +++ b/docs/method/backlog/cool-ideas/VIZ_graph-diff-transitive-reduction.md @@ -0,0 +1,9 @@ +# Graph diff via transitive reduction comparison + +Compute `transitiveReduction(graphA)` and +`transitiveReduction(graphB)`, diff those minimal edge sets. Much +more compact structural summary than raw edge-set diff — strips +implied edges, shows only load-bearing structural changes. + +Could feed into time-travel delta engine as +`warp diff --mode=structural`. diff --git a/docs/method/backlog/cool-ideas/VIZ_levels-lightweight-layout.md b/docs/method/backlog/cool-ideas/VIZ_levels-lightweight-layout.md new file mode 100644 index 00000000..bd935124 --- /dev/null +++ b/docs/method/backlog/cool-ideas/VIZ_levels-lightweight-layout.md @@ -0,0 +1,12 @@ +# `levels()` as Lightweight `--view` Layout + +**Effort:** M + +## Problem + +`levels()` is exactly the Y-axis assignment a layered DAG layout needs. For simple DAGs, `levels()` + left-to-right X sweep could produce clean layouts without the 2.5MB ELK import. 
Offer `--view --layout=levels` as an instant rendering mode, reserving ELK for complex graphs. + +## Notes + +- Files: `src/visualization/layouts/`, `bin/cli/commands/view.js` +- Part of P5 Features & Visualization tier diff --git a/docs/method/backlog/cool-ideas/VIZ_structural-diff-transitive-reduction.md b/docs/method/backlog/cool-ideas/VIZ_structural-diff-transitive-reduction.md new file mode 100644 index 00000000..56a03536 --- /dev/null +++ b/docs/method/backlog/cool-ideas/VIZ_structural-diff-transitive-reduction.md @@ -0,0 +1,11 @@ +# Structural Diff via Transitive Reduction + +**Effort:** L + +## Problem + +Compute `transitiveReduction(stateA)` vs `transitiveReduction(stateB)` to produce a compact structural diff that strips implied edges and shows only "load-bearing" changes. Natural fit for H1 (Time-Travel Delta Engine) as `warp diff --mode=structural`. + +## Notes + +- Part of P5 Features & Visualization tier diff --git a/docs/method/backlog/cool-ideas/VIZ_warp-ui-visualizer.md b/docs/method/backlog/cool-ideas/VIZ_warp-ui-visualizer.md new file mode 100644 index 00000000..3f6c2f12 --- /dev/null +++ b/docs/method/backlog/cool-ideas/VIZ_warp-ui-visualizer.md @@ -0,0 +1,12 @@ +# WARP UI Visualizer + +**Effort:** L + +## Problem + +Full UI visualizer for WARP graphs. Scope and UX goals not yet defined. + +## Notes + +- **Trigger:** Promote when RFC filed with scoped UX goals +- B157 (browser compatibility) is complete — unblocks browser-side work diff --git a/docs/method/backlog/inbox/DX_bearing-md.md b/docs/method/backlog/inbox/DX_bearing-md.md new file mode 100644 index 00000000..313a17a8 --- /dev/null +++ b/docs/method/backlog/inbox/DX_bearing-md.md @@ -0,0 +1,7 @@ +# Create BEARING.md + +Single living signpost at `docs/BEARING.md`. Updated at cycle +boundaries. Three questions: Where are we going? What just shipped? +What feels wrong? + +Replaces the role that ROADMAP.md currently half-fills. 
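The transitive-reduction diff items above reduce to a set difference over minimal edge sets. A sketch, assuming `transitiveReduction()` output is encoded as `"from->to"` strings purely for illustration:

```javascript
// Hypothetical structural diff: compute transitive reductions
// elsewhere, then diff the two minimal edge sets. Only load-bearing
// (non-implied) edge changes survive.
function structuralDiff(reducedA, reducedB) {
  const a = new Set(reducedA);
  const b = new Set(reducedB);
  return {
    added: [...b].filter((e) => !a.has(e)),
    removed: [...a].filter((e) => !b.has(e)),
  };
}

const delta = structuralDiff(['a->b', 'b->c'], ['a->b', 'b->d']);
// { added: ['b->d'], removed: ['b->c'] }
```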
diff --git a/docs/method/backlog/inbox/DX_method-cli-tooling.md b/docs/method/backlog/inbox/DX_method-cli-tooling.md new file mode 100644 index 00000000..5f1fd969 --- /dev/null +++ b/docs/method/backlog/inbox/DX_method-cli-tooling.md @@ -0,0 +1,8 @@ +# METHOD CLI tooling + +Small CLI for METHOD workspace operations: `method inbox`, +`method pull`, `method close`, `method status`. Scaffolds files, +numbers cycles, summarizes backlog lanes. + +Open question: TypeScript or plain JavaScript? Bijou dependency +or zero-dep shell scripts? diff --git a/docs/method/backlog/inbox/DX_method-v2-upgrade.md b/docs/method/backlog/inbox/DX_method-v2-upgrade.md new file mode 100644 index 00000000..303275c8 --- /dev/null +++ b/docs/method/backlog/inbox/DX_method-v2-upgrade.md @@ -0,0 +1,16 @@ +# Upgrade METHOD.md to v2 draft + +Incorporate improvements from the pre-0002 draft review: + +- Stances section (agent-human parity, agent-surface-first) +- Design constraints (meaning without decoration, a11y, l10n) +- Playback witness definition +- Disagreement protocol (dual-sponsor consensus) +- BEARING.md coordination mechanism +- Cycle types (feature, design, debt) +- Naming conventions table + +Defer or qualify: +- Localization posture (mark as "when applicable" for dev tools) +- Debt cycle posture sections (make optional) +- "Does not go back" commitment language (align with pivot option) diff --git a/docs/method/backlog/inbox/DX_witness-directory-convention.md b/docs/method/backlog/inbox/DX_witness-directory-convention.md new file mode 100644 index 00000000..d83fa670 --- /dev/null +++ b/docs/method/backlog/inbox/DX_witness-directory-convention.md @@ -0,0 +1,6 @@ +# Witness directory convention for playback + +Each cycle retro gets a `witness/` directory containing the concrete +playback artifacts: test output logs, transcripts, screenshots, +recordings. Currently retros are prose-only — adding structured +evidence makes the playback step auditable. 
diff --git a/BACKLOG/OG-017-modular-type-declarations.md b/docs/method/backlog/up-next/DX_modular-type-declarations.md similarity index 97% rename from BACKLOG/OG-017-modular-type-declarations.md rename to docs/method/backlog/up-next/DX_modular-type-declarations.md index b602c5df..48d32e8d 100644 --- a/BACKLOG/OG-017-modular-type-declarations.md +++ b/docs/method/backlog/up-next/DX_modular-type-declarations.md @@ -1,6 +1,5 @@ -# OG-017 — Break up the `index.d.ts` monolith +# Break up the `index.d.ts` monolith -Status: QUEUED Legend: Observer Geometry diff --git a/docs/method/backlog/up-next/DX_observer-first-guide.md b/docs/method/backlog/up-next/DX_observer-first-guide.md new file mode 100644 index 00000000..da7dd13c --- /dev/null +++ b/docs/method/backlog/up-next/DX_observer-first-guide.md @@ -0,0 +1,29 @@ +# Guide: Observer-First Client Pattern + +**Effort:** M + +## Problem + +The GUIDE and ADVANCED_GUIDE don't convey strongly enough that the primary client interaction model is through Observer APIs. Clients should be reading state through Observers (projections over worldlines through apertures) and letting git-warp manage the underlying graph topology, materialization, and CRDT mechanics. + +The current docs teach low-level graph manipulation (createPatch, addNode, etc.) with equal weight to the Observer read path, which gives the impression that clients should be directly managing graph state. In practice, most consumers should: + +1. Write through `Writer` / `PatchBuilderV2` (thin, scoped mutations) +2. Read through `Observer` (projected, filtered, cached views) +3.
Let git-warp handle materialization, conflict resolution, and indexing + +## Notes + +- Review `docs/GUIDE.md` and `docs/ADVANCED_GUIDE.md` for teaching order +- The Observer API (apertures, worldlines, seek, strand-scoped reads) should be the primary "how to read data" section +- Direct `getNodes()` / `getNodeProps()` / `query()` are escape hatches, not the default path +- This aligns with Paper IV's observer geometry: observers are the projection layer, not an optional feature + +### Redaction and encryption guidance + +The guide should clearly explain the security model for sensitive data: + +- Aperture `redact` is **application-layer filtering** — useful for multi-tenant query isolation, but not a cryptographic boundary. Anyone with filesystem access to `.git/objects/` can read raw patch blobs. +- For actual data protection, enable **graph encryption at rest** via `patchBlobStorage` with an encryption key (B164). This encrypts patch CBOR with AES-256-GCM before writing to Git objects. +- The guide should teach: redact for convenience, encrypt for security. Show how to configure `CasBlobAdapter` with an encryption key and wire it through `WarpGraph.open({ patchBlobStorage })`. +- Also explain that `@git-stunts/vault` manages encryption keys via OS-native keychains — no `.env` files for secrets. diff --git a/docs/method/backlog/up-next/PERF_async-generator-traversal.md b/docs/method/backlog/up-next/PERF_async-generator-traversal.md new file mode 100644 index 00000000..5dc11cdc --- /dev/null +++ b/docs/method/backlog/up-next/PERF_async-generator-traversal.md @@ -0,0 +1,12 @@ +# Async Generator Traversal API + +**Effort:** L + +## Problem + +Streaming variants of the remaining GraphTraversal algorithms (`bfsStream()`, `dfsStream()`, etc.) returning `AsyncGenerator` instead of collected arrays. Array-returning methods become sugar over `collect()`. 
+ +## Notes + +- Prerequisite B151 (transitiveClosure streaming) is complete +- Part of P4 Large-Graph Performance tier diff --git a/BACKLOG/OG-009-playback-head-alignment.md b/docs/method/backlog/up-next/PROTO_playback-head-alignment.md similarity index 79% rename from BACKLOG/OG-009-playback-head-alignment.md rename to docs/method/backlog/up-next/PROTO_playback-head-alignment.md index d425e03a..881eace8 100644 --- a/BACKLOG/OG-009-playback-head-alignment.md +++ b/docs/method/backlog/up-next/PROTO_playback-head-alignment.md @@ -1,6 +1,5 @@ -# OG-009 — Align Playback-Head And TTD Consumers After Read Nouns Stabilize +# Align Playback-Head And TTD Consumers After Read Nouns Stabilize -Status: QUEUED ## Problem diff --git a/docs/method/backlog/up-next/PROTO_wire-format-migration-edgepropset.md b/docs/method/backlog/up-next/PROTO_wire-format-migration-edgepropset.md new file mode 100644 index 00000000..6ec9e081 --- /dev/null +++ b/docs/method/backlog/up-next/PROTO_wire-format-migration-edgepropset.md @@ -0,0 +1,20 @@ +# Persisted Wire-Format Migration (ADR 2) — EdgePropSet + +**Effort:** XL + +## Problem + +Promote `EdgePropSet` to persisted raw op type (schema version 4). Requires graph capability ratchet, mixed v3+v4 materialization, read-path accepting both legacy and new format, and sync emitting raw `EdgePropSet` only after graph capability cutover. + +## Notes + +- **Status:** DEFERRED — governed by ADR 3 readiness gates +- **Risk:** HIGH +- **Depends on:** ADR 3 Gate 1 satisfaction +- ADR 3 Gate 1 prerequisites (not yet met): + - Historical identifier audit complete + - Observability plan exists + - Graph capability design approved + - Rollout playbook exists + - ADR 2 tripwire tests written (beyond current wire gate tests) +- Gate: Mixed-schema materialization deterministic. `WarpGraph.noCoordination.test.js` passes with v3+v4 writers. No regression in existing patch replay. Full test suite green. ADR 3 Gate 1 and Gate 2 both satisfied. 
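The generator/array split proposed in the async-traversal items above could look like this — a hedged sketch where the adjacency-map walk stands in for the repo's provider-backed traversal:

```javascript
// Hypothetical bfsStream(): yields nodes as they are discovered, so a
// caller can break after the first match instead of paying O(V) memory.
async function* bfsStream(adjacency, start) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift();
    yield node;
    for (const next of adjacency.get(node) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
}

// The array-returning method becomes sugar over collecting the stream.
async function bfs(adjacency, start) {
  const nodes = [];
  for await (const node of bfsStream(adjacency, start)) nodes.push(node);
  return nodes;
}
```

Because a generator's `return` value is awkward to surface through `for await...of`, the `{ nodes, stats }` question flagged above still needs one of the listed options (stats callback, `.return()` value, or a separate accessor).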
diff --git a/docs/method/legends/CLEAN_CODE.md b/docs/method/legends/CLEAN_CODE.md new file mode 100644 index 00000000..2936d1ce --- /dev/null +++ b/docs/method/legends/CLEAN_CODE.md @@ -0,0 +1,93 @@ +# CLEAN_CODE + +Un-shittifying the codebase. Systematically. + +## What it covers + +Structural quality work that makes the code honest: god object +decomposition, raw error replacement, type boundary cleanup, +constructor hygiene, redundant data structure elimination, and +enforcing the policies that prevent regression (file size limits, +one-thing-per-file, lint ratchets). + +This is not feature work. This is not performance optimization. +This is making the code say what it means, and meaning what it says. + +## Who cares + +### Sponsor human + +James — maintains this codebase long-term. Wants to open any file +and understand it without scrolling past 500 lines of mixed +concerns. Wants `new Error()` to never appear where a domain error +class exists. Wants the hexagonal boundary to be real, not +aspirational. + +### Sponsor agent + +Claude — reads and modifies this code every session. God objects +force full-file reads. Mixed concerns make targeted edits risky. +Raw errors lose context in stack traces. Type poison cascades +through downstream files. Every structural problem multiplies the +cost of every future task. 
+ +## What success looks like + +- No source file exceeds 500 LOC (test files 800, CLI 300) +- Every thrown error is a domain error class, never raw `Error` +- Each file exports one primary thing +- Port boundaries are honest — domain services don't require I/O + infrastructure +- Constructor parameter lists are legible (config objects, not + positional sprawl) +- No redundant data structures sitting in memory alongside each + other +- The ESLint `max-lines` ratchet enforces the ceiling and the + relaxation list only shrinks + +## How you know + +- `npm run lint` passes with the `max-lines` rule enforced +- `grep -r 'new Error(' src/domain/` returns zero hits +- The relaxation block in `eslint.config.js` has fewer entries than + it did last cycle +- No file in `bad-code/` has been there for more than 3 cycles + without being pulled + +## Current surface + +### bad-code/ + +All 10 items in `docs/method/backlog/bad-code/` fall under this +legend: + +- `PROTO_strand-service-god-object.md` — 2048 LOC, 40+ methods +- `PROTO_audit-receipt-raw-error.md` — 18 raw Error throws +- `PROTO_sync-protocol-raw-error.md` — raw Error with manual code +- `PROTO_patchbuilder-12-param-constructor.md` — config sprawl +- `PROTO_receipt-op-type-redundant.md` — dead mapping table +- `PROTO_warpserve-domain-infra-blur.md` — hex boundary violation +- `DX_trailer-codec-type-poison.md` — untyped boundary infects 6 files +- `DX_exact-optional-conditional-spread.md` — 30 verbose sites +- `PERF_toposort-full-adjacency.md` — wasteful memory allocation +- `PERF_transitive-reduction-redundant-adjlist.md` — redundant structure + +### asap/ + +- `DX_max-file-size-policy.md` — the ratchet that prevents regression +- `DX_restore-dot-notation.md` — lint rule gap from TSC campaign +- `DX_agent-code-audit.md` — audit agent-authored code from TSC blitz +- `DX_trailer-codec-dts.md` — upstream fix that kills type poison +- `PROTO_effectsink-breaking-change.md` — breaking change hygiene +- 
`PROTO_warpkernel-port-cleanup.md` — persistence union types +- `PROTO_warpruntime-god-class.md` — the other god object + +## Legend code + +`CC` — for backlog items that belong to this legend. + +```text +CC_strand-service-decomposition.md +CC_raw-error-purge.md +CC_max-lines-ratchet.md +``` diff --git a/docs/method/legends/NO_DOGS_NO_MASTERS.md b/docs/method/legends/NO_DOGS_NO_MASTERS.md new file mode 100644 index 00000000..a78bec89 --- /dev/null +++ b/docs/method/legends/NO_DOGS_NO_MASTERS.md @@ -0,0 +1,102 @@ +# NO_DOGS_NO_MASTERS + +Break up the gods. Free their vassals. + +## What it covers + +God object decomposition and phantom-type liberation. Two sides +of the same coin: god objects hoard responsibilities behind a +single class, and `@typedef {Object}` phantoms masquerade as data +structures while contributing nothing at runtime. + +**The Gods** — bloated service classes that own everything and +delegate nothing. StrandService (2,048 LOC, 40+ methods). +WarpRuntime (6,613 LOC across warp/ mixins). They know too much, +do too much, and make every edit a full-context-window affair. + +**The Vassals** — `@typedef {Object}` shapes that get constructed, +frozen, serialized, passed around, and queried — doing all the +work of a class without ever becoming one. They exist only at +type-check time. No `instanceof`. No constructors. No methods. +Phantom types serving phantom masters. + +The fix is the same for both: real JavaScript. Classes with +constructors that validate. Methods that live next to their data. +Files you can grep for with `instanceof`. Code that exists at +runtime because it has something to do at runtime. + +## Who cares + +### Sponsor human + +James — wants to open a file and find one thing. Wants `instanceof` +to work. Wants the TypeScript layer to describe reality, not +invent a parallel universe of shapes that vanish when you +`console.log` them. + +### Sponsor agent + +Claude — god objects force full-file reads that burn context window. 
+Phantom types force guessing at runtime shapes. Both multiply the +cost of every edit. A 200-LOC class with a constructor is readable +in one pass. A 2,000-LOC god object with 14 typedef vassals is not. + +## What success looks like + +- No service file exceeds 500 LOC +- Every data entity that gets constructed is a `class`, not a + `@typedef {Object}` +- `@typedef` is reserved for genuinely type-only concepts: unions, + callback signatures, import aliases +- `grep -rn '@typedef {Object}' src/domain/` returns only options + bags, never entities +- `instanceof` works on every domain value object + +## How you know + +- Count of `@typedef {Object}` in `src/domain/` trends toward zero + (options bags excepted) +- God object LOC counts shrink each cycle +- New domain entities are born as classes, never typedefs + +## Current surface + +### The Gods + +| Item | LOC | Location | +|------|-----|----------| +| `PROTO_warpruntime-god-class` (asap/) | 6,613 | WarpRuntime + warp/ mixins | +| `PROTO_strand-service-god-object` (bad-code/) | 2,048 | StrandService.js | + +### The Vassals (typedef → class) + +| Item | Effort | Entity | +|------|--------|--------| +| `PROTO_typedef-dot-to-class` | XS | Dot (CRDT primitive) | +| `PROTO_typedef-eventid-to-class` | XS | EventId (causal ordering) | +| `PROTO_typedef-effectemission-to-class` | XS | EffectEmission (domain event) | +| `PROTO_typedef-deliveryobservation-to-class` | XS | DeliveryObservation (trace record) | +| `PROTO_typedef-lww-to-class` | S | LWWRegister (CRDT) | +| `PROTO_typedef-tickreceipt-to-class` | S | TickReceipt (public API) | +| `PROTO_typedef-patchdiff-to-class` | S | PatchDiff (reduce output) | +| `PROTO_typedef-trustrecord-to-class` | S | TrustRecord (trust chain) | +| `PROTO_typedef-truststate-to-class` | S | TrustState (trust aggregate) | +| `PROTO_typedef-btr-to-class` | S | BTR (tamper-evident package) | +| `PROTO_typedef-statediffresult-to-class` | S | StateDiffResult (subscriber diffs) | +| 
`PROTO_typedef-orset-to-class` | M | ORSet (CRDT, 10+ operations) | +| `PROTO_typedef-patchv2-to-class` | M | PatchV2 (core domain entity) | +| `PROTO_typedef-warpstatev5-to-class` | L | WarpStateV5 (CRDT materialized state) | + +### Already liberated + +- `AuditReceipt` — promoted from typedef to class (this session) + +## Legend code + +`NDNM` — for backlog items that belong to this legend. + +```text +NDNM_warpruntime-decomposition.md +NDNM_typedef-tickreceipt.md +NDNM_typedef-orset.md +``` diff --git a/docs/method/process.md b/docs/method/process.md new file mode 100644 index 00000000..f6f92b51 --- /dev/null +++ b/docs/method/process.md @@ -0,0 +1,41 @@ +# How cycles run + +See [METHOD.md](../../METHOD.md) for the full philosophy. This file is +the quick-reference for operating a cycle. + +## Starting a cycle + +1. Pick work from a lane (`asap/` first, then `up-next/`). +2. Create `docs/design//` with the next sequential + number. +3. Move the backlog file into the cycle directory as the design doc. + Flesh it out: sponsor human, sponsor agent, hill, playback + questions, non-goals. +4. You are now committed. + +## During a cycle + +- RED: write failing tests from playback questions. +- GREEN: make them pass. +- Do not reorganize the backlog mid-cycle. + +## Ending a cycle + +1. **Playback** — produce a witness artifact for each playback + question. Agent answers agent questions. Human answers human + questions. Write it down. +2. **PR** — open, review, merge to main. +3. **Retro** — write `docs/method/retro//`. + - Drift check (mandatory). + - New debt to `bad-code/`. + - Cool ideas to `cool-ideas/`. + - Backlog maintenance: process inbox, re-prioritize, merge + duplicates, kill the dead. +4. **Release** — only when externally meaningful behavior changed. + See [release.md](release.md). + +## Outcomes + +- **Hill met** — merge, close. +- **Partial** — merge what is honest. Retro explains the gap. +- **Not met** — write the retro anyway. 
Every cycle ends with one. diff --git a/docs/release.md b/docs/method/release.md similarity index 100% rename from docs/release.md rename to docs/method/release.md diff --git a/docs/method/retro/0001-method-bootstrap/method-bootstrap.md b/docs/method/retro/0001-method-bootstrap/method-bootstrap.md new file mode 100644 index 00000000..7ecb0440 --- /dev/null +++ b/docs/method/retro/0001-method-bootstrap/method-bootstrap.md @@ -0,0 +1,55 @@ +# Retrospective: 0001-method-bootstrap + +**Date:** 2026-04-01 +**Type:** Design +**Outcome:** Hill met + +## What happened + +Introduced The Method as the development process framework for +git-warp. Created `METHOD.md` signpost, stood up the full directory +structure (`docs/method/backlog/` with 5 lane directories, `legends/`, +`retro/`, `graveyard/`), and migrated all existing backlog items. + +49 B-number and OG items migrated from `BACKLOG/` to named files in +appropriate lanes. 10 tech debt entries from `.claude/bad_code.md` +became individual files in `bad-code/`. 13 cool ideas from +`.claude/cool_ideas.md` became individual files in `cool-ideas/`. +B-number headers stripped from all migrated files. + +## Drift check + +- `docs/release.md` moved to `docs/method/release.md` — CLAUDE.md + reference updated. +- `docs/ROADMAP.md` still references old structure — updated + migration notice. +- `.claude/bad_code.md` and `.claude/cool_ideas.md` replaced with + forwarding notices. +- No code changes. No test impact. No drift. + +## Playback + +### Agent + +- Can I find work by `ls` on a lane? **YES** — each lane is a + directory with descriptive filenames. +- Can I classify a new idea without asking? **YES** — lane + definitions are clear in METHOD.md. +- Do any B-numbers remain? **NO** — all stripped from headers and + filenames. Git history preserves provenance. + +### Human + +- Does `ls docs/method/backlog/asap/` show what matters? **YES** — + 9 high-priority items with legend prefixes. +- Can I understand items from filenames? 
**YES** — + `PROTO_strand-service-god-object.md` beats `B176.md`. +- Is BACKLOG/ gone? **YES** — `git rm -r BACKLOG/` done. + +## New debt + +None introduced. + +## Cool ideas + +None surfaced. diff --git a/docs/method/retro/0002-code-nav-tool/code-nav-tool.md b/docs/method/retro/0002-code-nav-tool/code-nav-tool.md new file mode 100644 index 00000000..4c72cb09 --- /dev/null +++ b/docs/method/retro/0002-code-nav-tool/code-nav-tool.md @@ -0,0 +1,91 @@ +# Retrospective: 0002-code-nav-tool + +**Date:** 2026-04-01 +**Type:** Design +**Outcome:** Partial — pivoted + +## What happened + +Started as a design cycle for "code-nav" — an AST-aware symbol +extraction tool for LLM agents. Wrote a full design doc with hill, +playback questions, phasing, and project structure. Added concrete +before/after scenarios with token cost analysis. + +Then James introduced empirical data from Blacklight (1,091 sessions, +291K messages, 4.5 months). The data reframed the problem: + +- Read burden is 96.2 GB — 6.6x all other tools combined +- The dominant cost is context compounding, not individual reads +- A dynamic read cap alone cuts burden by 54.5% +- Session length caps cut it by 58.9% +- Both combined: 75.1% + +James's Editor's Edition review delivered the verdict: + +- **APPROVE** the insight (AST-aware extraction is right) +- **REJECT** the framing (code-nav alone is too small) +- **ENHANCE** into safe-context — a policy-enforcing read layer + where AST extraction is one capability, not the product + +The design doc was rewritten from scratch as safe-context. The cycle +is closing as a pivot — the design deliverable is complete, but the +product identity changed fundamentally mid-cycle. + +## Hill assessment + +**Original hill:** "An agent can extract any named symbol's source +code, see the structural outline of any file, and find where symbols +are defined — without reading full files." + +**Status:** Not met (pivoted before implementation). The hill was +correct but undersized. 
It was replaced by: + +"An agent can obtain the minimum structurally correct context +required to act — without injecting large raw artifacts into +long-lived conversation state." + +## Drift check + +- Cycle 0002's design directory contains the full evolution: the + original code-nav doc and its rewrite as safe-context. Provenance + is intact. +- No code was written. No tests. No code drift possible. +- The Method structure from cycle 0001 worked as designed — the + design doc lived in `docs/design/0002-code-nav-tool/` throughout. + +## What we learned + +1. **Design before data is design in the dark.** The original + code-nav design was reasonable — correct technology choice + (tree-sitter), correct operations (outline, show, find), correct + phasing. But it was solving a symptom. The Blacklight data + revealed the disease: context compounding. Without that data, we + would have shipped a nice utility that addressed ~25% of the + problem. + +2. **The Editor's Edition pattern works.** James reviewed the design + not as "is this correct?" but as "is this ambitious enough?" The + APPROVE/REJECT/ENHANCE framework forced a clear verdict that + preserved the good work while upgrading the framing. + +3. **Pivoting mid-design is cheap.** No code was written, no tests + to rewrite, no sunk cost. This is exactly why The Method puts + design before RED. The cost of this pivot was one document + rewrite. + +## New debt + +None. + +## Cool ideas + +- **Blacklight as validation harness** — after deploying + safe-context, re-run the Blacklight analysis to measure actual + burden reduction. The before/after data is the ultimate playback + witness. + +## Backlog impact + +Remaining work re-enters the backlog as a new item: +`DX_safe-context-phase-1.md` in `asap/`. The pivot doesn't kill the +work — it sharpens it. 
diff --git a/docs/method/retro/0003-safe-context/safe-context.md b/docs/method/retro/0003-safe-context/safe-context.md new file mode 100644 index 00000000..53a55284 --- /dev/null +++ b/docs/method/retro/0003-safe-context/safe-context.md @@ -0,0 +1,115 @@ +# Retrospective: 0003-safe-context + +**Date:** 2026-04-01 +**Type:** Design +**Outcome:** Hill met (pivoted to new repo) + +## What happened + +Design cycle for Graft — a context governor for coding agents. +Started from the code-nav pivot (cycle 0002) and iterated through +two full review rounds with APPROVE/REJECT/ENHANCE feedback. + +The design doc went through three major evolutions: + +1. **Initial draft** — command contracts, output shapes, test + strategy, project structure. Tree-sitter for parsing, MCP + CLI + for transport. + +2. **Round 1 review** — closed all escape hatches. read_range + bounded (250 lines / 20 KB), state_save capped (8 KB), dual + thresholds (lines + bytes), built-in secret bans, machine-stable + reason codes, project root definition, .graftignore. + +3. **Round 2 review** — error vs refused distinction, broken-file + best-effort outlines (partial: true), run_capture execution + contract (cwd/env/timeout/log size), explicit CLI binary names, + log retention, outline truncation metadata. + +Final additions: enforcement hooks (PreToolUse on Read and Bash), +graft doctor/stats, internal vocabulary (projection, focus, +residual, receipt, witness), and the WARP optics framing. + +Product named "Graft" — grafting semantic eyesight onto Git's +history substrate. Repo created at `flyingrobots/graft`, scaffolded +with METHOD.md, pushed to GitHub. + +## Hill assessment + +**Hill:** "An agent working in a JS/TS codebase can obtain the +minimum structurally correct context required to act — without +injecting large raw artifacts into long-lived conversation state." + +**Status:** Design complete. 
The hill is fully specified with +command contracts, policy rules, enforcement layers, error models, +edge cases (broken files, secrets, symlinks), and test strategy. +Implementation begins as graft cycle 0001. + +## Drift check + +- Design doc lives in `docs/design/0003-safe-context/safe-context.md` +- Cool ideas logged in `docs/method/backlog/cool-ideas/DX_graft-cool-ideas.md` +- WARP provenance layer logged in `cool-ideas/PROTO_safe-context-warp-provenance-layer.md` +- CLEAN_CODE legend declared in `docs/method/legends/CLEAN_CODE.md` +- No code written. No test drift. No architecture drift. +- Cycle directory still named `0003-safe-context` (pre-rename to + graft). Provenance preserved intentionally. + +## Playback + +### Agent + +Design questions answered clearly: +- Command contracts with exact output shapes? **YES** +- All escape hatches bounded? **YES** (read_range, state, outline) +- Broken-file behavior specified? **YES** (best-effort, partial) +- Enforcement architecture defined? **YES** (MCP + hooks) +- Internal vocabulary coherent? **YES** (projection/focus/residual/receipt/witness) + +### Human + +- Does the design feel like a product? **YES** (per review: "first + version that feels like a product instead of a clever utility") +- Are the governor's bounds tight? **YES** (per review: "stops + feeling like a design sketch and starts feeling like a repo that + wants to exist") +- Is the naming right? **YES** — Graft. Git has trees and branches. + +## What we learned + +1. **Data before design.** The Blacklight research transformed a + nice utility into a real product. Without empirical evidence of + 96.2 GB Read burden, we would have built code-nav and missed 75% + of the problem. + +2. **Two review rounds caught real bugs.** Unrestricted read_range + was a policy bypass. Unbounded state_save would recreate the + problem in markdown. Flat line thresholds ignored byte-heavy + files. 
These weren't obvious until someone said "the governor + only works if it's hard to accidentally bypass." + +3. **Internal vocabulary matters.** Naming the concepts (projection, + focus, residual, receipt) gave the architecture coherence that + made the review rounds productive instead of circular. + +4. **Design cycles can spawn repos.** The Method worked across the + boundary — design doc in git-warp, product in flyingrobots/graft. + The cycle closes here; implementation opens there. + +## New debt + +None in git-warp. + +## Cool ideas + +Logged during cycle: +- graft pack, graft since, graft explain, graft init +- focus auto, capture handles, policy profiles (balanced/strict/feral) +- receipt mode, symbol heatmap, changed-since-last-read +- WARP provenance layer as Phase 3+ substrate + +## Backlog impact + +Implementation continues as graft repo cycle 0001. +`asap/DX_safe-context-phase-1.md` was consumed by this cycle's +design doc — no orphan backlog item remains. diff --git a/eslint.config.js b/eslint.config.js index e2d6dff0..b0d9d3a9 100644 --- a/eslint.config.js +++ b/eslint.config.js @@ -170,8 +170,9 @@ export default tseslint.config( }], "no-useless-computed-key": "error", "no-useless-rename": "error", - // dot-notation disabled: conflicts with tsconfig noPropertyAccessFromIndexSignature + // Base dot-notation off; type-aware version below respects noPropertyAccessFromIndexSignature "dot-notation": "off", + "@typescript-eslint/dot-notation": "error", "grouped-accessor-pairs": ["error", "getBeforeSet"], "accessor-pairs": "error", @@ -238,10 +239,10 @@ export default tseslint.config( { files: [ "src/domain/WarpGraph.js", - "src/domain/warp/query.methods.js", - "src/domain/warp/subscribe.methods.js", - "src/domain/warp/provenance.methods.js", - "src/domain/warp/fork.methods.js", + "src/domain/services/QueryController.js", + "src/domain/services/SubscriptionController.js", + "src/domain/services/ProvenanceController.js", + 
"src/domain/services/ForkController.js",
       "src/domain/warp/checkpoint.methods.js",
       "src/domain/warp/patch.methods.js",
       "src/domain/warp/materialize.methods.js",
diff --git a/index.d.ts b/index.d.ts
index 20011749..edecec31 100644
--- a/index.d.ts
+++ b/index.d.ts
@@ -1210,6 +1210,26 @@ export class PatchError extends Error {
   });
 }
 
+/**
+ * Error class for audit receipt validation and persistence failures.
+ */
+export class AuditError extends Error {
+  readonly name: 'AuditError';
+  readonly code: string;
+  readonly context: Record<string, unknown>;
+
+  static readonly E_AUDIT_INVALID: 'E_AUDIT_INVALID';
+  static readonly E_AUDIT_CAS_FAILED: 'E_AUDIT_CAS_FAILED';
+  static readonly E_AUDIT_DEGRADED: 'E_AUDIT_DEGRADED';
+  static readonly E_AUDIT_CHAIN_GAP: 'E_AUDIT_CHAIN_GAP';
+  static readonly E_AUDIT_WRITER_MISMATCH: 'E_AUDIT_WRITER_MISMATCH';
+
+  constructor(message: string, options?: {
+    code?: string;
+    context?: Record<string, unknown>;
+  });
+}
+
 /**
  * Error class for sync transport operations.
  */
diff --git a/index.js b/index.js
index 07c12f5e..ab4c18ba 100644
--- a/index.js
+++ b/index.js
@@ -49,6 +49,7 @@ import NoOpLogger from './src/infrastructure/adapters/NoOpLogger.js';
 import ConsoleLogger, { LogLevel } from './src/infrastructure/adapters/ConsoleLogger.js';
 import ClockAdapter from './src/infrastructure/adapters/ClockAdapter.js';
 import {
+  AuditError,
   EncryptionError,
   ForkError,
   IndexError,
@@ -210,6 +211,7 @@ export {
   DenoHttpAdapter,
 
   // Error types for integrity failure handling
+  AuditError,
   EncryptionError,
   PatchError,
   ForkError,
diff --git a/src/domain/WarpApp.js b/src/domain/WarpApp.js
index 33a3bb19..a0817a62 100644
--- a/src/domain/WarpApp.js
+++ b/src/domain/WarpApp.js
@@ -1,14 +1,5 @@
 import WarpCore from './WarpCore.js';
-import {
-  getContent as _getContent,
-  getContentStream as _getContentStream,
-  getContentOid as _getContentOid,
-  getContentMeta as _getContentMeta,
-  getEdgeContent as _getEdgeContent,
-  getEdgeContentStream as _getEdgeContentStream,
-  getEdgeContentOid as _getEdgeContentOid,
-  getEdgeContentMeta as _getEdgeContentMeta,
-} from './warp/query.methods.js';
+import { callInternalRuntimeMethod } from './utils/callInternalRuntimeMethod.js';
 
 /**
  * Curated product-facing WARP surface.
@@ -182,54 +173,52 @@ export default class WarpApp {
   }
 
   // ── Content attachment reads ──────────────────────────────────────────
-  // Imported from query.methods.js and called with the runtime as this binding.
-
   /** Reads the full content blob attached to a node.
    * @param {string} nodeId @returns {Promise<Uint8Array|null>} */
   async getContent(nodeId) {
-    return await _getContent.call(this._runtime(), nodeId);
+    return /** @type {Uint8Array|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getContent', nodeId));
   }
 
   /** Returns a streaming reader for the content blob attached to a node.
    * @param {string} nodeId @returns {Promise<AsyncIterable<Uint8Array>|null>} */
   async getContentStream(nodeId) {
-    return await _getContentStream.call(this._runtime(), nodeId);
+    return /** @type {AsyncIterable<Uint8Array>|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getContentStream', nodeId));
   }
 
   /** Returns the Git object ID of the content blob attached to a node.
    * @param {string} nodeId @returns {Promise<string|null>} */
   async getContentOid(nodeId) {
-    return await _getContentOid.call(this._runtime(), nodeId);
+    return /** @type {string|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getContentOid', nodeId));
   }
 
   /** Returns structured content metadata (oid, mime, size) for a node.
    * @param {string} nodeId @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} */
   async getContentMeta(nodeId) {
-    return await _getContentMeta.call(this._runtime(), nodeId);
+    return /** @type {{ oid: string, mime: string|null, size: number|null }|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getContentMeta', nodeId));
   }
 
   /** Reads the full content blob attached to an edge.
    * @param {string} from @param {string} to @param {string} label @returns {Promise<Uint8Array|null>} */
   async getEdgeContent(from, to, label) {
-    return await _getEdgeContent.call(this._runtime(), from, to, label);
+    return /** @type {Uint8Array|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getEdgeContent', from, to, label));
   }
 
   /** Returns a streaming reader for the content blob attached to an edge.
    * @param {string} from @param {string} to @param {string} label @returns {Promise<AsyncIterable<Uint8Array>|null>} */
   async getEdgeContentStream(from, to, label) {
-    return await _getEdgeContentStream.call(this._runtime(), from, to, label);
+    return /** @type {AsyncIterable<Uint8Array>|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getEdgeContentStream', from, to, label));
   }
 
   /** Returns the Git object ID of the content blob attached to an edge.
    * @param {string} from @param {string} to @param {string} label @returns {Promise<string|null>} */
   async getEdgeContentOid(from, to, label) {
-    return await _getEdgeContentOid.call(this._runtime(), from, to, label);
+    return /** @type {string|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getEdgeContentOid', from, to, label));
   }
 
   /** Returns structured content metadata (oid, mime, size) for an edge.
    * @param {string} from @param {string} to @param {string} label @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} */
   async getEdgeContentMeta(from, to, label) {
-    return await _getEdgeContentMeta.call(this._runtime(), from, to, label);
+    return /** @type {{ oid: string, mime: string|null, size: number|null }|null} */ (await callInternalRuntimeMethod(this._runtime(), 'getEdgeContentMeta', from, to, label));
   }
 
   // ── Strands ─────────────────────────────────────────────────────────
diff --git a/src/domain/WarpCore.js b/src/domain/WarpCore.js
index 84563725..35078c9b 100644
--- a/src/domain/WarpCore.js
+++ b/src/domain/WarpCore.js
@@ -1,14 +1,5 @@
 import WarpRuntime from './WarpRuntime.js';
-import {
-  getContent as _getContent,
-  getContentStream as _getContentStream,
-  getContentOid as _getContentOid,
-  getContentMeta as _getContentMeta,
-  getEdgeContent as _getEdgeContent,
-  getEdgeContentStream as _getEdgeContentStream,
-  getEdgeContentOid as _getEdgeContentOid,
-  getEdgeContentMeta as _getEdgeContentMeta,
-} from './warp/query.methods.js';
+import { callInternalRuntimeMethod } from './utils/callInternalRuntimeMethod.js';
 import { toInternalStrandShape, toPublicStrandShape } from './utils/strandPublicShape.js';
 import {
   buildCoordinateComparisonFact,
@@ -16,6 +7,8 @@ import {
 } from './services/CoordinateFactExport.js';
 import { computeChecksum } from './utils/checksumUtils.js';
 
+
+/** @import { CoordinateComparisonSelectorV1, CoordinateComparisonV1, CoordinateTransferPlanSelectorV1, CoordinateTransferPlanV1, CryptoPort, StrandBraidOptions, StrandCreateOptions, StrandDescriptor, StrandIntentDescriptor, StrandTickRecord, VisibleStateScopeV1 } from '../../index.js' */
 /** @typedef {Parameters[1]} InternalBraidStrandOptions */
 /** @typedef {Parameters[1]} InternalMaterializeStrandOptions */
 /** @typedef {Parameters[1]} InternalCompareStrandOptions */
@@ -23,17 +16,7 @@ import { computeChecksum } from './utils/checksumUtils.js';
 /**
@typedef {Parameters[0]} InternalCompareCoordinatesOptions */
 /** @typedef {Parameters[0]} InternalPlanCoordinateTransferOptions */
 /** @typedef {Parameters[0]} InternalConflictAnalyzeOptions */
-/** @typedef {import('../../index.js').CoordinateComparisonV1} CoordinateComparisonV1 */
-/** @typedef {import('../../index.js').CoordinateTransferPlanV1} CoordinateTransferPlanV1 */
-/** @typedef {import('../../index.js').CoordinateComparisonSelectorV1} CoordinateComparisonSelectorV1 */
-/** @typedef {import('../../index.js').CoordinateTransferPlanSelectorV1} CoordinateTransferPlanSelectorV1 */
-/** @typedef {import('../../index.js').VisibleStateScopeV1} VisibleStateScopeV1 */
-/** @typedef {import('../../index.js').CryptoPort} CryptoPort */
-/** @typedef {import('../../index.js').StrandCreateOptions} StrandCreateOptions */
-/** @typedef {import('../../index.js').StrandBraidOptions} StrandBraidOptions */
-/** @typedef {import('../../index.js').StrandDescriptor} StrandDescriptor */
-/** @typedef {import('../../index.js').StrandIntentDescriptor} StrandIntentDescriptor */
-/** @typedef {import('../../index.js').StrandTickRecord} StrandTickRecord */
+
 
 /**
  * Refreshes the comparison digest for a coordinate comparison result.
@@ -164,8 +147,7 @@ export default class WarpCore {
   }
 
   // ── Content attachment reads ──────────────────────────────────────────
-  // Imported from query.methods.js and called with WarpRuntime-typed this.
-  // WarpCore is a WarpRuntime at runtime (via Object.setPrototypeOf in _adopt).
+  // Delegated to the runtime's QueryController via prototype methods.
 
   /**
    * Returns the internal WarpRuntime instance.
@@ -177,101 +159,29 @@
     return /** @type {WarpRuntime} */ (/** @type {unknown} */ (this));
   }
 
-  /**
-   * Returns a content attachment by node ID.
-   *
-   * @param {string} nodeId
-   * @returns {Promise<Uint8Array|null>}
-   */
-  async getContent(nodeId) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, nodeId: string) => Promise<Uint8Array|null> }} */ (/** @type {unknown} */ (_getContent));
-    return await fn.call(this._asRuntime(), nodeId);
-  }
+  /** Returns a content attachment by node ID. @param {string} nodeId @returns {Promise<Uint8Array|null>} */
+  async getContent(nodeId) { return /** @type {Uint8Array|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getContent', nodeId)); }
 
-  /**
-   * Returns a content attachment stream by node ID.
-   *
-   * @param {string} nodeId
-   * @returns {Promise<AsyncIterable<Uint8Array>|null>}
-   */
-  async getContentStream(nodeId) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, nodeId: string) => Promise<AsyncIterable<Uint8Array>|null> }} */ (/** @type {unknown} */ (_getContentStream));
-    return await fn.call(this._asRuntime(), nodeId);
-  }
+  /** Returns a content stream by node ID. @param {string} nodeId @returns {Promise<AsyncIterable<Uint8Array>|null>} */
+  async getContentStream(nodeId) { return /** @type {AsyncIterable<Uint8Array>|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getContentStream', nodeId)); }
 
-  /**
-   * Returns the storage OID for a content attachment.
-   *
-   * @param {string} nodeId
-   * @returns {Promise<string|null>}
-   */
-  async getContentOid(nodeId) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, nodeId: string) => Promise<string|null> }} */ (/** @type {unknown} */ (_getContentOid));
-    return await fn.call(this._asRuntime(), nodeId);
-  }
+  /** Returns the OID for a content attachment. @param {string} nodeId @returns {Promise<string|null>} */
+  async getContentOid(nodeId) { return /** @type {string|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getContentOid', nodeId)); }
 
-  /**
-   * Returns metadata for a content attachment.
-   *
-   * @param {string} nodeId
-   * @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>}
-   */
-  async getContentMeta(nodeId) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, nodeId: string) => Promise<{ oid: string, mime: string|null, size: number|null }|null> }} */ (/** @type {unknown} */ (_getContentMeta));
-    return await fn.call(this._asRuntime(), nodeId);
-  }
+  /** Returns content metadata. @param {string} nodeId @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} */
+  async getContentMeta(nodeId) { return /** @type {{ oid: string, mime: string|null, size: number|null }|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getContentMeta', nodeId)); }
 
-  /**
-   * Returns a content attachment for an edge.
-   *
-   * @param {string} from
-   * @param {string} to
-   * @param {string} label
-   * @returns {Promise<Uint8Array|null>}
-   */
-  async getEdgeContent(from, to, label) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, from: string, to: string, label: string) => Promise<Uint8Array|null> }} */ (/** @type {unknown} */ (_getEdgeContent));
-    return await fn.call(this._asRuntime(), from, to, label);
-  }
+  /** Returns a content attachment for an edge. @param {string} from @param {string} to @param {string} label @returns {Promise<Uint8Array|null>} */
+  async getEdgeContent(from, to, label) { return /** @type {Uint8Array|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getEdgeContent', from, to, label)); }
 
-  /**
-   * Returns a content attachment stream for an edge.
-   *
-   * @param {string} from
-   * @param {string} to
-   * @param {string} label
-   * @returns {Promise<AsyncIterable<Uint8Array>|null>}
-   */
-  async getEdgeContentStream(from, to, label) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, from: string, to: string, label: string) => Promise<AsyncIterable<Uint8Array>|null> }} */ (/** @type {unknown} */ (_getEdgeContentStream));
-    return await fn.call(this._asRuntime(), from, to, label);
-  }
+  /** Returns a content stream for an edge. @param {string} from @param {string} to @param {string} label @returns {Promise<AsyncIterable<Uint8Array>|null>} */
+  async getEdgeContentStream(from, to, label) { return /** @type {AsyncIterable<Uint8Array>|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getEdgeContentStream', from, to, label)); }
 
-  /**
-   * Returns the storage OID for an edge content attachment.
-   *
-   * @param {string} from
-   * @param {string} to
-   * @param {string} label
-   * @returns {Promise<string|null>}
-   */
-  async getEdgeContentOid(from, to, label) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, from: string, to: string, label: string) => Promise<string|null> }} */ (/** @type {unknown} */ (_getEdgeContentOid));
-    return await fn.call(this._asRuntime(), from, to, label);
-  }
+  /** Returns the OID for an edge content attachment. @param {string} from @param {string} to @param {string} label @returns {Promise<string|null>} */
+  async getEdgeContentOid(from, to, label) { return /** @type {string|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getEdgeContentOid', from, to, label)); }
 
-  /**
-   * Returns metadata for an edge content attachment.
-   *
-   * @param {string} from
-   * @param {string} to
-   * @param {string} label
-   * @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>}
-   */
-  async getEdgeContentMeta(from, to, label) {
-    const fn = /** @type {{ call: (thisArg: WarpRuntime, from: string, to: string, label: string) => Promise<{ oid: string, mime: string|null, size: number|null }|null> }} */ (/** @type {unknown} */ (_getEdgeContentMeta));
-    return await fn.call(this._asRuntime(), from, to, label);
-  }
+  /** Returns metadata for an edge content attachment. @param {string} from @param {string} to @param {string} label @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} */
+  async getEdgeContentMeta(from, to, label) { return /** @type {{ oid: string, mime: string|null, size: number|null }|null} */ (await callInternalRuntimeMethod(this._asRuntime(), 'getEdgeContentMeta', from, to, label)); }
 
   // ── Strands ─────────────────────────────────────────────────────────
diff --git a/src/domain/WarpRuntime.js b/src/domain/WarpRuntime.js
index 87e50b72..7b3add17 100644
--- a/src/domain/WarpRuntime.js
+++ b/src/domain/WarpRuntime.js
@@ -19,22 +19,21 @@ import defaultClock from './utils/defaultClock.js';
 import LogicalTraversal from './services/LogicalTraversal.js';
 import LRUCache from './utils/LRUCache.js';
 import SyncController from './services/SyncController.js';
+import StrandController from './services/StrandController.js';
+import ComparisonController from './services/ComparisonController.js';
+import SubscriptionController from './services/SubscriptionController.js';
+import ProvenanceController from './services/ProvenanceController.js';
+import ForkController from './services/ForkController.js';
+import QueryController from './services/QueryController.js';
 import SyncTrustGate from './services/SyncTrustGate.js';
 import { AuditVerifierService } from './services/AuditVerifierService.js';
 import MaterializedViewService from './services/MaterializedViewService.js';
 import InMemoryBlobStorageAdapter from './utils/defaultBlobStorage.js';
 import { wireWarpMethods } from './warp/_wire.js';
-import * as queryMethods from './warp/query.methods.js';
-import * as subscribeMethods from './warp/subscribe.methods.js';
-import * as provenanceMethods from './warp/provenance.methods.js';
-import * as forkMethods from './warp/fork.methods.js';
 import * as checkpointMethods from './warp/checkpoint.methods.js';
 import * as patchMethods from './warp/patch.methods.js';
 import * as materializeMethods from
'./warp/materialize.methods.js';
 import * as materializeAdvancedMethods from './warp/materializeAdvanced.methods.js';
-import * as strandMethods from './warp/strand.methods.js';
-import * as conflictMethods from './warp/conflict.methods.js';
-import * as comparisonMethods from './warp/comparison.methods.js';
 
 /** @typedef {import('./types/WarpPersistence.js').CorePersistence} CorePersistence */
 
@@ -310,6 +309,24 @@
       ...(trustGate !== undefined ? { trustGate } : {}),
     });
 
+    /** @type {StrandController} */
+    this._strandController = new StrandController(this);
+
+    /** @type {ComparisonController} */
+    this._comparisonController = new ComparisonController(this);
+
+    /** @type {SubscriptionController} */
+    this._subscriptionController = new SubscriptionController(this);
+
+    /** @type {ProvenanceController} */
+    this._provenanceController = new ProvenanceController(/** @type {import('./warp/_internal.js').WarpGraphWithMixins} */ (/** @type {unknown} */ (this)));
+
+    /** @type {ForkController} */
+    this._forkController = new ForkController(this);
+
+    /** @type {QueryController} */
+    this._queryController = new QueryController(/** @type {import('./warp/_internal.js').WarpGraphWithMixins} */ (/** @type {unknown} */ (this)));
+
     /** @type {MaterializedViewService} */
     this._viewService = new MaterializedViewService({
       codec: this._codec,
@@ -646,19 +663,150 @@
 // ── Wire extracted method groups onto WarpRuntime.prototype ───────────────────
 wireWarpMethods(WarpRuntime, [
-  queryMethods,
-  subscribeMethods,
-  provenanceMethods,
-  forkMethods,
   checkpointMethods,
   patchMethods,
   materializeMethods,
   materializeAdvancedMethods,
-  strandMethods,
-  conflictMethods,
-  comparisonMethods,
 ]);
 
+// ── Strand + conflict methods: direct delegation to StrandController ────────
+const strandDelegates = /** @type {const} */ ([
+  'createStrand', 'braidStrand', 'getStrand', 'listStrands', 'dropStrand',
+  'materializeStrand', 'getStrandPatches', 'patchesForStrand',
+  'createStrandPatch', 'patchStrand',
+  'queueStrandIntent', 'listStrandIntents', 'tickStrand',
+  'analyzeConflicts',
+]);
+for (const method of strandDelegates) {
+  Object.defineProperty(WarpRuntime.prototype, method, {
+    // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding
+    value: /** Delegates to StrandController. @param {unknown[]} args @returns {unknown} */ function (...args) {
+      /** @type {unknown} */
+      const raw = this;
+      const self = /** @type {WarpRuntime} */ (raw);
+      const ctrl = /** @type {Record<string, unknown>} */ (/** @type {unknown} */ (self._strandController));
+      const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]);
+      return fn.call(ctrl, ...args);
+    },
+    writable: true,
+    configurable: true,
+    enumerable: false,
+  });
+}
+
+// ── Query methods: direct delegation to QueryController ──────────────────────
+const queryDelegates = /** @type {const} */ ([
+  'hasNode', 'getNodeProps', 'getEdgeProps', 'neighbors',
+  'getStateSnapshot', 'getNodes', 'getEdges', 'getPropertyCount',
+  'query', 'worldline', 'observer', 'translationCost',
+  'getContentOid', 'getContentMeta', 'getContent',
+  'getEdgeContentOid', 'getEdgeContentMeta', 'getEdgeContent',
+  'getContentStream', 'getEdgeContentStream',
+]);
+for (const method of queryDelegates) {
+  Object.defineProperty(WarpRuntime.prototype, method, {
+    // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding
+    value: /** Delegates to QueryController. @param {unknown[]} args @returns {unknown} */ function (...args) {
+      /** @type {unknown} */
+      const raw = this;
+      const self = /** @type {WarpRuntime} */ (raw);
+      const ctrl = /** @type {Record<string, unknown>} */ (/** @type {unknown} */ (self._queryController));
+      const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]);
+      return fn.call(ctrl, ...args);
+    },
+    writable: true,
+    configurable: true,
+    enumerable: false,
+  });
+}
+
+// ── Fork methods: direct delegation to ForkController ────────────────────────
+const forkDelegates = /** @type {const} */ ([
+  'fork', 'createWormhole',
+  '_isAncestor', '_relationToCheckpointHead', '_validatePatchAgainstCheckpoint',
+]);
+for (const method of forkDelegates) {
+  Object.defineProperty(WarpRuntime.prototype, method, {
+    // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding
+    value: /** Delegates to ForkController. @param {unknown[]} args @returns {unknown} */ function (...args) {
+      /** @type {unknown} */
+      const raw = this;
+      const self = /** @type {WarpRuntime} */ (raw);
+      const ctrl = /** @type {Record<string, unknown>} */ (/** @type {unknown} */ (self._forkController));
+      const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]);
+      return fn.call(ctrl, ...args);
+    },
+    writable: true,
+    configurable: true,
+    enumerable: false,
+  });
+}
+
+// ── Provenance methods: direct delegation to ProvenanceController ────────────
+const provenanceDelegates = /** @type {const} */ ([
+  'patchesFor', 'materializeSlice', '_computeBackwardCone',
+  'loadPatchBySha', '_loadPatchBySha', '_loadPatchesBySha', '_sortPatchesCausally',
+]);
+for (const method of provenanceDelegates) {
+  Object.defineProperty(WarpRuntime.prototype, method, {
+    // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding
+    value: /** Delegates to ProvenanceController.
@param {unknown[]} args @returns {unknown} */ function (...args) { + /** @type {unknown} */ + const raw = this; + const self = /** @type {WarpRuntime} */ (raw); + const ctrl = /** @type {Record} */ (/** @type {unknown} */ (self._provenanceController)); + const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]); + return fn.call(ctrl, ...args); + }, + writable: true, + configurable: true, + enumerable: false, + }); +} + +// ── Subscription methods: direct delegation to SubscriptionController ──────── +const subscriptionDelegates = /** @type {const} */ ([ + 'subscribe', 'watch', '_notifySubscribers', +]); +for (const method of subscriptionDelegates) { + Object.defineProperty(WarpRuntime.prototype, method, { + // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding + value: /** Delegates to SubscriptionController. @param {unknown[]} args @returns {unknown} */ function (...args) { + /** @type {unknown} */ + const raw = this; + const self = /** @type {WarpRuntime} */ (raw); + const ctrl = /** @type {Record} */ (/** @type {unknown} */ (self._subscriptionController)); + const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]); + return fn.call(ctrl, ...args); + }, + writable: true, + configurable: true, + enumerable: false, + }); +} + +// ── Comparison methods: direct delegation to ComparisonController ──────────── +const comparisonDelegates = /** @type {const} */ ([ + 'buildPatchDivergence', 'compareStrand', 'planStrandTransfer', + 'planCoordinateTransfer', 'compareCoordinates', +]); +for (const method of comparisonDelegates) { + Object.defineProperty(WarpRuntime.prototype, method, { + // eslint-disable-next-line object-shorthand -- function keyword needed for `this` binding + value: /** Delegates to ComparisonController. 
@param {unknown[]} args @returns {unknown} */ function (...args) { + /** @type {unknown} */ + const raw = this; + const self = /** @type {WarpRuntime} */ (raw); + const ctrl = /** @type {Record} */ (/** @type {unknown} */ (self._comparisonController)); + const fn = /** @type {(...a: unknown[]) => unknown} */ (ctrl[method]); + return fn.call(ctrl, ...args); + }, + writable: true, + configurable: true, + enumerable: false, + }); +} + // ── Sync methods: direct delegation to SyncController (no stub file) ──────── const syncDelegates = /** @type {const} */ ([ 'getFrontier', 'hasFrontierChanged', 'status', diff --git a/src/domain/crdt/Dot.js b/src/domain/crdt/Dot.js index 5774deda..2d1734f9 100644 --- a/src/domain/crdt/Dot.js +++ b/src/domain/crdt/Dot.js @@ -56,13 +56,36 @@ */ /** - * Dot - Unique operation identifier for CRDT operations. - * A dot is a (writerId, counter) pair that uniquely identifies an operation. - * - * @typedef {Object} Dot - * @property {string} writerId - Writer identifier (non-empty string) - * @property {number} counter - Monotonic counter (positive integer) + * Dot — unique operation identity for CRDT semantics. + * A (writerId, counter) pair that serves as a "birth certificate" + * for each CRDT operation. */ +export class Dot { + /** @type {string} Writer identifier (non-empty string) */ + writerId; + + /** @type {number} Monotonic counter (positive integer) */ + counter; + + /** + * Creates a validated Dot. + * + * @param {string} writerId - Must be non-empty string + * @param {number} counter - Must be positive integer (> 0) + */ + constructor(writerId, counter) { + if (typeof writerId !== 'string' || writerId.length === 0) { + throw new Error('writerId must be a non-empty string'); + } + + if (!Number.isInteger(counter) || counter <= 0) { + throw new Error('counter must be a positive integer'); + } + + this.writerId = writerId; + this.counter = counter; + } +} /** * Creates a validated Dot. 
@@ -70,18 +93,9 @@ * @param {string} writerId - Must be non-empty string * @param {number} counter - Must be positive integer (> 0) * @returns {Dot} - * @throws {Error} If validation fails */ export function createDot(writerId, counter) { - if (typeof writerId !== 'string' || writerId.length === 0) { - throw new Error('writerId must be a non-empty string'); - } - - if (!Number.isInteger(counter) || counter <= 0) { - throw new Error('counter must be a positive integer'); - } - - return { writerId, counter }; + return new Dot(writerId, counter); } /** @@ -136,7 +150,7 @@ export function decodeDot(encoded) { throw new Error('Invalid encoded dot format: invalid counter'); } - return { writerId, counter }; + return new Dot(writerId, counter); } /** diff --git a/src/domain/crdt/LWW.js b/src/domain/crdt/LWW.js index 3612fae6..df95d4b6 100644 --- a/src/domain/crdt/LWW.js +++ b/src/domain/crdt/LWW.js @@ -69,12 +69,26 @@ import { compareEventIds } from '../utils/EventId.js'; */ /** - * LWW Register - stores value with EventId for conflict resolution + * LWW Register — stores value with EventId for conflict resolution. * @template T - * @typedef {Object} LWWRegister - * @property {import('../utils/EventId.js').EventId} eventId - * @property {T} value */ +export class LWWRegister { + /** @type {import('../utils/EventId.js').EventId} */ + eventId; + + /** @type {T} */ + value; + + /** + * Creates an LWW register. + * @param {import('../utils/EventId.js').EventId} eventId + * @param {T} value + */ + constructor(eventId, value) { + this.eventId = eventId; + this.value = value; + } +} /** * Creates an LWW register with the given EventId and value. 
@@ -84,7 +98,7 @@ import { compareEventIds } from '../utils/EventId.js'; * @returns {LWWRegister} */ export function lwwSet(eventId, value) { - return { eventId, value }; + return new LWWRegister(eventId, value); } /** diff --git a/src/domain/errors/AuditError.js b/src/domain/errors/AuditError.js new file mode 100644 index 00000000..ca1f0a71 --- /dev/null +++ b/src/domain/errors/AuditError.js @@ -0,0 +1,47 @@ +import WarpError from './WarpError.js'; + +/** + * Error class for audit receipt validation and persistence failures. + * + * ## Error Codes + * + * | Code | Description | + * |------|-------------| + * | `E_AUDIT_INVALID` | Receipt field validation failed (version, OIDs, ticks, etc.) | + * | `E_AUDIT_CAS_FAILED` | Compare-and-swap failed during audit commit | + * | `E_AUDIT_DEGRADED` | Audit service degraded after exhausting retries | + * | `E_AUDIT_CHAIN_GAP` | Audit chain has a gap (missing commit in ancestry) | + * | `E_AUDIT_WRITER_MISMATCH` | TickReceipt writer does not match the service's writerId | + * + * @class AuditError + * @extends WarpError + * + * @property {string} name - Always 'AuditError' for instanceof checks + * @property {string} code - Machine-readable error code for programmatic handling + * @property {Record} context - Serializable context object with error details + */ +export default class AuditError extends WarpError { + /** Receipt field validation failed. */ + static E_AUDIT_INVALID = 'E_AUDIT_INVALID'; + + /** Compare-and-swap failed during audit commit. */ + static E_AUDIT_CAS_FAILED = 'E_AUDIT_CAS_FAILED'; + + /** Audit service degraded after exhausting retries. */ + static E_AUDIT_DEGRADED = 'E_AUDIT_DEGRADED'; + + /** Audit chain has a gap (missing commit in ancestry). */ + static E_AUDIT_CHAIN_GAP = 'E_AUDIT_CHAIN_GAP'; + + /** TickReceipt writer does not match the service's writerId. */ + static E_AUDIT_WRITER_MISMATCH = 'E_AUDIT_WRITER_MISMATCH'; + + /** + * Creates an AuditError with the given message and error code. 
+ * @param {string} message - Human-readable error description + * @param {{ code?: string, context?: Record }} [options={}] - Error options + */ + constructor(message, options = {}) { + super(message, options.code ?? 'E_AUDIT_INVALID', options); + } +} diff --git a/src/domain/errors/SyncError.js b/src/domain/errors/SyncError.js index 7c94c1e5..e233bd70 100644 --- a/src/domain/errors/SyncError.js +++ b/src/domain/errors/SyncError.js @@ -15,6 +15,7 @@ import WarpError from './WarpError.js'; * | `E_SYNC_REMOTE` | Remote server returned a 5xx error | * | `E_SYNC_PROTOCOL` | Protocol violation: 4xx, invalid JSON, or malformed response | * | `E_SYNC_PAYLOAD_INVALID` | Sync payload failed shape/resource-limit validation (B64) | + * | `E_SYNC_DIVERGENCE` | Writer chains have diverged (no common ancestor) | * | `SYNC_ERROR` | Generic/default sync error | * * @class SyncError diff --git a/src/domain/errors/index.js b/src/domain/errors/index.js index 592af009..3c919190 100644 --- a/src/domain/errors/index.js +++ b/src/domain/errors/index.js @@ -4,6 +4,7 @@ * @module domain/errors */ +export { default as AuditError } from './AuditError.js'; export { default as EmptyMessageError } from './EmptyMessageError.js'; export { default as EncryptionError } from './EncryptionError.js'; export { default as PersistenceError } from './PersistenceError.js'; diff --git a/src/domain/services/AuditReceiptService.js b/src/domain/services/AuditReceiptService.js index 3d9cae25..968707c4 100644 --- a/src/domain/services/AuditReceiptService.js +++ b/src/domain/services/AuditReceiptService.js @@ -10,6 +10,7 @@ * @see docs/specs/AUDIT_RECEIPT.md */ +import AuditError from '../errors/AuditError.js'; import { buildAuditRef } from '../utils/RefLayout.js'; import { encodeAuditMessage } from './AuditMessageCodec.js'; @@ -80,6 +81,46 @@ export async function computeOpsDigest(ops, crypto) { return await crypto.hash('sha256', combined); } +// 
============================================================================ +// Receipt Value Object +// ============================================================================ + +/** + * Immutable audit receipt value object. + * + * Instances are frozen after construction. Keys are stored in sorted + * order for deterministic CBOR serialization. + */ +export class AuditReceipt { + /** @type {string} */ dataCommit; + /** @type {string} */ graphName; + /** @type {string} */ opsDigest; + /** @type {string} */ prevAuditCommit; + /** @type {number} */ tickEnd; + /** @type {number} */ tickStart; + /** @type {number} */ timestamp; + /** @type {number} */ version; + /** @type {string} */ writerId; + + /** + * Creates an immutable audit receipt from validated fields. + * @param {{ version: number, graphName: string, writerId: string, dataCommit: string, tickStart: number, tickEnd: number, opsDigest: string, prevAuditCommit: string, timestamp: number }} fields + */ + constructor({ version, graphName, writerId, dataCommit, tickStart, tickEnd, opsDigest, prevAuditCommit, timestamp }) { + // Alphabetical key order for canonical CBOR + this.dataCommit = dataCommit; + this.graphName = graphName; + this.opsDigest = opsDigest; + this.prevAuditCommit = prevAuditCommit; + this.tickEnd = tickEnd; + this.tickStart = tickStart; + this.timestamp = timestamp; + this.version = version; + this.writerId = writerId; + Object.freeze(this); + } +} + // ============================================================================ // Receipt Construction // ============================================================================ @@ -91,8 +132,8 @@ const OID_HEX_PATTERN = /^[0-9a-f]{40}([0-9a-f]{24})?$/; * Validates and builds a frozen receipt record with keys in sorted order. 
* * @param {{ version: number, graphName: string, writerId: string, dataCommit: string, tickStart: number, tickEnd: number, opsDigest: string, prevAuditCommit: string, timestamp: number }} fields - * @returns {Readonly>} - * @throws {Error} If any field is invalid + * @returns {AuditReceipt} + * @throws {AuditError} If any field is invalid (code: E_AUDIT_INVALID) */ export function buildReceiptRecord(fields) { const { @@ -102,79 +143,78 @@ export function buildReceiptRecord(fields) { // version if (version !== 1) { - throw new Error(`Invalid version: must be 1, got ${version}`); + throw new AuditError(`Invalid version: must be 1, got ${version}`, { context: { version } }); } // graphName — validated by RefLayout if (typeof graphName !== 'string' || graphName.length === 0) { - throw new Error('Invalid graphName: must be a non-empty string'); + throw new AuditError('Invalid graphName: must be a non-empty string', { context: { graphName } }); } // writerId — validated by RefLayout if (typeof writerId !== 'string' || writerId.length === 0) { - throw new Error('Invalid writerId: must be a non-empty string'); + throw new AuditError('Invalid writerId: must be a non-empty string', { context: { writerId } }); } // dataCommit const dc = dataCommit.toLowerCase(); if (!OID_HEX_PATTERN.test(dc)) { - throw new Error(`Invalid dataCommit OID: ${dataCommit}`); + throw new AuditError(`Invalid dataCommit OID: ${dataCommit}`, { context: { dataCommit } }); } // opsDigest const od = opsDigest.toLowerCase(); if (!/^[0-9a-f]{64}$/.test(od)) { - throw new Error(`Invalid opsDigest: must be 64-char lowercase hex, got ${opsDigest}`); + throw new AuditError(`Invalid opsDigest: must be 64-char lowercase hex, got ${opsDigest}`, { context: { opsDigest } }); } // prevAuditCommit const pac = prevAuditCommit.toLowerCase(); if (!OID_HEX_PATTERN.test(pac)) { - throw new Error(`Invalid prevAuditCommit OID: ${prevAuditCommit}`); + throw new AuditError(`Invalid prevAuditCommit OID: ${prevAuditCommit}`, { 
context: { prevAuditCommit } }); } // OID length consistency const oidLen = dc.length; if (pac.length !== oidLen) { - throw new Error(`OID length mismatch: dataCommit=${dc.length}, prevAuditCommit=${pac.length}`); + throw new AuditError(`OID length mismatch: dataCommit=${dc.length}, prevAuditCommit=${pac.length}`, { context: { dataCommitLen: dc.length, prevAuditCommitLen: pac.length } }); } // tick constraints if (!Number.isInteger(tickStart) || tickStart < 1) { - throw new Error(`Invalid tickStart: must be integer >= 1, got ${tickStart}`); + throw new AuditError(`Invalid tickStart: must be integer >= 1, got ${tickStart}`, { context: { tickStart } }); } if (!Number.isInteger(tickEnd) || tickEnd < tickStart) { - throw new Error(`Invalid tickEnd: must be integer >= tickStart, got ${tickEnd}`); + throw new AuditError(`Invalid tickEnd: must be integer >= tickStart, got ${tickEnd}`, { context: { tickEnd, tickStart } }); } if (version === 1 && tickStart !== tickEnd) { - throw new Error(`v1 requires tickStart === tickEnd, got ${tickStart} !== ${tickEnd}`); + throw new AuditError(`v1 requires tickStart === tickEnd, got ${tickStart} !== ${tickEnd}`, { context: { tickStart, tickEnd } }); } // Zero-hash sentinel only for genesis (tickStart === 1) const zeroHash = '0'.repeat(oidLen); if (pac === zeroHash && tickStart > 1) { - throw new Error('Non-genesis receipt cannot use zero-hash sentinel'); + throw new AuditError('Non-genesis receipt cannot use zero-hash sentinel', { context: { tickStart, prevAuditCommit: pac } }); } // timestamp if (!Number.isInteger(timestamp) || timestamp < 0) { - throw new Error(`Invalid timestamp: must be non-negative safe integer, got ${timestamp}`); + throw new AuditError(`Invalid timestamp: must be non-negative safe integer, got ${timestamp}`, { context: { timestamp } }); } if (!Number.isSafeInteger(timestamp)) { - throw new Error(`Invalid timestamp: exceeds Number.MAX_SAFE_INTEGER: ${timestamp}`); + throw new AuditError(`Invalid timestamp: exceeds 
Number.MAX_SAFE_INTEGER: ${timestamp}`, { context: { timestamp } }); } - // Build with keys in sorted order (canonical for CBOR) - return Object.freeze({ - dataCommit: dc, + return new AuditReceipt({ + version, graphName, + writerId, + dataCommit: dc, + tickStart, + tickEnd, opsDigest: od, prevAuditCommit: pac, - tickEnd, - tickStart, timestamp, - version, - writerId, }); } @@ -317,8 +357,9 @@ export class AuditReceiptService { actual: writer, patchSha, }); - throw new Error( + throw new AuditError( `Audit writer mismatch: expected '${this._writerId}', got '${writer}'`, + { code: AuditError.E_AUDIT_WRITER_MISMATCH, context: { expected: this._writerId, actual: writer, patchSha } }, ); } @@ -405,7 +446,7 @@ export class AuditReceiptService { } catch { if (this._retrying) { // Second CAS failure during retry → degrade - throw new Error('CAS failed during retry'); + throw new AuditError('CAS failed during retry', { code: AuditError.E_AUDIT_CAS_FAILED, context: { writerId: this._writerId, ref: this._auditRef } }); } // CAS mismatch — retry once with refreshed tip return await this._retryAfterCasConflict(commitSha, tickReceipt); @@ -450,7 +491,7 @@ export class AuditReceiptService { writerId: this._writerId, reason: 'second CAS failure', }); - throw new Error('Audit service degraded after second CAS failure'); + throw new AuditError('Audit service degraded after second CAS failure', { code: AuditError.E_AUDIT_DEGRADED, context: { writerId: this._writerId } }); } finally { this._retrying = false; } diff --git a/src/domain/services/AuditVerifierService.js b/src/domain/services/AuditVerifierService.js index 432edfb1..9a53f331 100644 --- a/src/domain/services/AuditVerifierService.js +++ b/src/domain/services/AuditVerifierService.js @@ -16,18 +16,7 @@ * @see docs/specs/AUDIT_RECEIPT.md Section 8 */ -/** - * @typedef {Object} AuditReceipt - * @property {number} version - * @property {string} graphName - * @property {string} writerId - * @property {string} dataCommit - * 
@property {string} opsDigest - * @property {string} prevAuditCommit - * @property {number} tickStart - * @property {number} tickEnd - * @property {number} timestamp - */ +/** @typedef {import('./AuditReceiptService.js').AuditReceipt} AuditReceipt */ import { buildAuditPrefix, buildAuditRef } from '../utils/RefLayout.js'; import { decodeAuditMessage } from './AuditMessageCodec.js'; @@ -92,7 +81,7 @@ function validateReceiptSchema(receipt) { if (receipt === null || receipt === undefined || typeof receipt !== 'object') { return 'receipt is not an object'; } - const rec = /** @type {{ version?: unknown, graphName?: unknown, writerId?: unknown, dataCommit?: unknown, opsDigest?: unknown, prevAuditCommit?: unknown, tickStart?: unknown, tickEnd?: unknown, timestamp?: unknown }} */ (receipt); + const rec = /** @type {Record} */ (receipt); const keys = Object.keys(rec); if (keys.length !== 9) { return `expected 9 fields, got ${keys.length}`; @@ -758,7 +747,7 @@ export class AuditVerifierService { status: 'error', source, sourceDetail, - reasonCode: TRUST_REASON_CODES['TRUST_RECORD_CHAIN_INVALID'], + reasonCode: TRUST_REASON_CODES.TRUST_RECORD_CHAIN_INVALID, reason: `Trust chain read failed: ${recordsResult.error.message}`, }); } @@ -795,7 +784,7 @@ export class AuditVerifierService { sourceDetail, writerIds: options.writerIds || [], recordsScanned: records.length, - reasonCode: TRUST_REASON_CODES['TRUST_RECORD_CHAIN_INVALID'], + reasonCode: TRUST_REASON_CODES.TRUST_RECORD_CHAIN_INVALID, reason: `Trust chain invalid: ${(typeof chainResult.errors[0]?.error === 'string' && chainResult.errors[0].error.length > 0) ? 
chainResult.errors[0].error : 'unknown chain error'}`, }); } diff --git a/src/domain/services/BitmapIndexReader.js b/src/domain/services/BitmapIndexReader.js index 3060ecdc..b026244d 100644 --- a/src/domain/services/BitmapIndexReader.js +++ b/src/domain/services/BitmapIndexReader.js @@ -7,12 +7,14 @@ import { canonicalStringify } from '../utils/canonicalStringify.js'; import { isValidShardOid } from '../utils/validateShardOid.js'; import { base64Decode } from '../utils/bytes.js'; + +/** @import { RoaringBitmapSubset as BitmapShard } from '../utils/roaring.js' */ /** @typedef {import('../../ports/IndexStoragePort.js').default} IndexStoragePort */ /** @typedef {import('../types/WarpPersistence.js').IndexStorage} IndexStorage */ -/** @typedef {import('../../ports/LoggerPort.js').default} LoggerPort */ -/** @typedef {import('../../ports/CryptoPort.js').default} CryptoPort */ + + /** @typedef {Record} JsonShard */ -/** @typedef {import('../utils/roaring.js').RoaringBitmapSubset} BitmapShard */ + /** @typedef {JsonShard | BitmapShard} LoadedShard */ /** diff --git a/src/domain/services/BoundaryTransitionRecord.js b/src/domain/services/BoundaryTransitionRecord.js index d88585dc..fdfafcd5 100644 --- a/src/domain/services/BoundaryTransitionRecord.js +++ b/src/domain/services/BoundaryTransitionRecord.js @@ -126,21 +126,69 @@ async function computeHmac(fields, key, { crypto, codec }) { } /** - * @typedef {Object} BTR - * @property {number} version - BTR format version - * @property {string} h_in - Hash of input state (hex SHA-256) - * @property {string} h_out - Hash of output state (hex SHA-256) - * @property {Uint8Array} U_0 - Serialized initial state (CBOR) - * @property {Array} P - Serialized provenance payload - * @property {string} t - ISO 8601 timestamp - * @property {string} kappa - Authentication tag (hex HMAC-SHA256) + * BTR — Boundary Transition Record. Tamper-evident package binding + * initial state, provenance payload, and output state hash. 
*/ +export class BTR { + /** @type {string} Hash of input state (hex SHA-256) */ + h_in; + + /** @type {string} Hash of output state (hex SHA-256) */ + h_out; + + /** @type {string} Authentication tag (hex HMAC-SHA256) */ + kappa; + + /** @type {Array} Serialized provenance payload */ + P; + + /** @type {string} ISO 8601 timestamp */ + t; + + /** @type {Uint8Array} Serialized initial state (CBOR) */ + U_0; + + /** @type {number} BTR format version */ + version; + + /** + * Creates a BTR from field values. + * @param {{ version: number, h_in: string, h_out: string, U_0: Uint8Array, P: Array, t: string, kappa: string }} fields + */ + constructor({ version, h_in, h_out, U_0, P, t, kappa }) { + this.version = version; + this.h_in = h_in; + this.h_out = h_out; + this.U_0 = U_0; + this.P = P; + this.t = t; + this.kappa = kappa; + Object.freeze(this); + } +} /** - * @typedef {Object} VerificationResult - * @property {boolean} valid - Whether the BTR is valid - * @property {string} [reason] - Reason for failure (if invalid) + * VerificationResult — outcome of BTR HMAC/replay verification. */ +export class VerificationResult { + /** @type {boolean} */ + valid; + + /** @type {string|undefined} Reason for failure (if invalid) */ + reason; + + /** + * Creates a VerificationResult. + * @param {boolean} valid + * @param {string} [reason] + */ + constructor(valid, reason) { + this.valid = valid; + if (reason !== undefined) { + this.reason = reason; + } + } +} /** * Creates a Boundary Transition Record from an initial state and payload. 
@@ -197,7 +245,7 @@ export async function createBTR(initialState, payload, options) { const fields = { version: BTR_VERSION, h_in, h_out, U_0, P, t: timestamp }; const kappa = await computeHmac(fields, key, /** @type {{ crypto: import('../../ports/CryptoPort.js').default, codec?: import('../../ports/CodecPort.js').default }} */ (deps)); - return { ...fields, kappa }; + return new BTR({ ...fields, kappa }); } /** @@ -345,7 +393,7 @@ async function verifyReplayHash(btr, deps = {}) { export async function verifyBTR(btr, key, options = {}) { const structureError = validateBTRStructure(btr); if (structureError !== null) { - return { valid: false, reason: structureError }; + return new VerificationResult(false, structureError); } const hmacDeps = /** @type {{ crypto: import('../../ports/CryptoPort.js').default, codec?: import('../../ports/CodecPort.js').default }} */ (buildDeps({ crypto: options.crypto, codec: options.codec })); @@ -369,10 +417,10 @@ async function verifyReplayIfRequested(btr, options) { const replayDeps = buildDeps({ crypto: options.crypto, codec: options.codec }); const replayError = await verifyReplayHash(btr, replayDeps); if (replayError !== null) { - return { valid: false, reason: replayError }; + return new VerificationResult(false, replayError); } } - return { valid: true }; + return new VerificationResult(true); } /** @@ -389,12 +437,12 @@ async function verifyHmacSafe(btr, key, deps) { hmacValid = await verifyHmac(btr, key, deps); } catch (err) { if (err instanceof RangeError) { - return { valid: false, reason: `Invalid hex in authentication tag: ${err.message}` }; + return new VerificationResult(false, `Invalid hex in authentication tag: ${err.message}`); } throw err; } if (!hmacValid) { - return { valid: false, reason: 'Authentication tag mismatch' }; + return new VerificationResult(false, 'Authentication tag mismatch'); } return null; } @@ -491,15 +539,7 @@ export function deserializeBTR(bytes, { codec } = {}) { } const typed = /** @type {{ 
version: number, h_in: string, h_out: string, U_0: Uint8Array, P: Array, t: string, kappa: string }} */ (obj); - return /** @type {BTR} */ ({ - version: typed.version, - h_in: typed.h_in, - h_out: typed.h_out, - U_0: typed.U_0, - P: typed.P, - t: typed.t, - kappa: typed.kappa, - }); + return new BTR(typed); } /** diff --git a/src/domain/services/CheckpointSerializerV5.js b/src/domain/services/CheckpointSerializerV5.js index fe7802c3..09227d63 100644 --- a/src/domain/services/CheckpointSerializerV5.js +++ b/src/domain/services/CheckpointSerializerV5.js @@ -17,6 +17,7 @@ import { orsetSerialize, orsetDeserialize } from '../crdt/ORSet.js'; import { vvSerialize, vvDeserialize } from '../crdt/VersionVector.js'; import { decodeDot } from '../crdt/Dot.js'; import { createEmptyStateV5 } from './JoinReducer.js'; +import WarpStateV5 from './WarpStateV5.js'; // ============================================================================ // Full State Serialization (for Checkpoints) @@ -108,13 +109,13 @@ export function deserializeFullStateV5(buffer, { codec: codecOpt } = {}) { if (obj['version'] !== undefined && obj['version'] !== 'full-v5') { throw new Error(`Unsupported full state version: expected 'full-v5', got '${JSON.stringify(obj['version'])}'`); } - return { + return new WarpStateV5({ nodeAlive: orsetDeserialize(obj['nodeAlive'] ?? {}), edgeAlive: orsetDeserialize(obj['edgeAlive'] ?? {}), prop: deserializeProps(/** @type {[string, unknown][]} */ (obj['prop'])), observedFrontier: vvDeserialize(/** @type {{[x: string]: number}} */ (obj['observedFrontier'] ?? 
{})),
    edgeBirthEvent: /** @type {Map} */ (deserializeEdgeBirthEvent(obj)),
-  };
+  });
 }
 
 // ============================================================================
diff --git a/src/domain/services/CheckpointService.js b/src/domain/services/CheckpointService.js
index 61370498..28be85f2 100644
--- a/src/domain/services/CheckpointService.js
+++ b/src/domain/services/CheckpointService.js
@@ -25,6 +25,7 @@ import { createORSet, orsetAdd, orsetCompact } from '../crdt/ORSet.js';
 import { createDot } from '../crdt/Dot.js';
 import { createVersionVector } from '../crdt/VersionVector.js';
 import { cloneStateV5, reduceV5 } from './JoinReducer.js';
+import WarpStateV5 from './WarpStateV5.js';
 import { encodeEdgeKey, encodePropKey, CONTENT_PROPERTY_KEY, decodePropKey, isEdgePropKey, decodeEdgePropKey } from './KeyCodec.js';
 import { ProvenanceIndex } from './ProvenanceIndex.js';
 
@@ -563,5 +564,5 @@ export function reconstructStateV5FromCheckpoint(visibleProjection) {
     edgeBirthEvent.set(edgeKey, { lamport: 0, writerId: '', patchSha: '0000', opIndex: 0 });
   }
 
-  return { nodeAlive, edgeAlive, prop, observedFrontier, edgeBirthEvent };
+  return new WarpStateV5({ nodeAlive, edgeAlive, prop, observedFrontier, edgeBirthEvent });
 }
diff --git a/src/domain/warp/comparison.methods.js b/src/domain/services/ComparisonController.js
similarity index 66%
rename from src/domain/warp/comparison.methods.js
rename to src/domain/services/ComparisonController.js
index dae500d9..a5d43f21 100644
--- a/src/domain/warp/comparison.methods.js
+++ b/src/domain/services/ComparisonController.js
@@ -1,69 +1,229 @@
 /**
- * Comparison methods for substrate-visible coordinate and strand reads.
+ * ComparisonController — substrate-visible coordinate and strand comparison.
  *
- * These helpers compare only deterministic substrate facts:
- * - visible patch-universe divergence
- * - visible node / edge / property deltas
- * - optional node-local target diffs
+ * Extracted from comparison.methods.js. Compares only deterministic
+ * substrate facts: visible patch-universe divergence, visible node/edge/
+ * property deltas, and optional node-local target diffs.
  *
- * They do not introduce application semantics.
- *
- * @module domain/warp/comparison.methods
+ * @module domain/services/ComparisonController
  */
 
 import QueryError from '../errors/QueryError.js';
 import {
   buildCoordinateComparisonFact,
   buildCoordinateTransferPlanFact,
-} from '../services/CoordinateFactExport.js';
-import { createStateReaderV5 } from '../services/StateReaderV5.js';
-import { computeStateHashV5 } from '../services/StateSerializerV5.js';
+} from './CoordinateFactExport.js';
+import { createStateReaderV5 } from './StateReaderV5.js';
+import { computeStateHashV5 } from './StateSerializerV5.js';
 import {
   normalizeVisibleStateScopeV1,
   scopeMaterializedStateV5,
   scopePatchEntriesV1,
-} from '../services/VisibleStateScopeV1.js';
-import { compareVisibleStateV5 } from '../services/VisibleStateComparisonV5.js';
-import { planVisibleStateTransferV5 } from '../services/VisibleStateTransferPlannerV5.js';
-import StrandService from '../services/StrandService.js';
+} from './VisibleStateScopeV1.js';
+import { compareVisibleStateV5 } from './VisibleStateComparisonV5.js';
+import { planVisibleStateTransferV5 } from './VisibleStateTransferPlannerV5.js';
+import StrandService from './StrandService.js';
 import { computeChecksum } from '../utils/checksumUtils.js';
 import { callInternalRuntimeMethod } from '../utils/callInternalRuntimeMethod.js';
+
+/** @import { default as ComparisonHost } from '../WarpRuntime.js' */
 
 const COORDINATE_COMPARISON_VERSION = 'coordinate-compare/v1';
 const COORDINATE_TRANSFER_PLAN_VERSION = 'coordinate-transfer-plan/v1';
 
+/** @import { VisibleStateScopeV1, VisibleStateReaderV5, CoordinateComparisonSelectorV1, CoordinateTransferPlanSelectorV1, CoordinateComparisonV1, CoordinateTransferPlanV1, StrandDescriptor as StrandDescriptorV1 } from '../../../index.js' */
+/** @import { WarpStateV5 } from './JoinReducer.js' */
+
 /**
- * @typedef {import('../../../index.js').VisibleStateScopePrefixFilterV1} VisibleStateScopePrefixFilterV1
- * @typedef {import('../../../index.js').VisibleStateScopeV1} VisibleStateScopeV1
- * @typedef {import('../../../index.js').VisibleStateReaderV5} VisibleStateReaderV5
- * @typedef {import('../../../index.js').CoordinateComparisonSelectorV1} CoordinateComparisonSelectorV1
- * @typedef {import('../../../index.js').CoordinateTransferPlanSelectorV1} CoordinateTransferPlanSelectorV1
- * @typedef {import('../../../index.js').StrandDescriptor} StrandDescriptorV1
- * @typedef {import('../../../index.js').CoordinateComparisonV1} CoordinateComparisonV1
- * @typedef {import('../../../index.js').CoordinateTransferPlanV1} CoordinateTransferPlanV1
  * @typedef {{ left: Record, right: Record, targetId?: string|null, scope?: VisibleStateScopeV1|null }} InternalCompareCoordinatesOptions
 * @typedef {{ source: Record, target: Record, scope?: VisibleStateScopeV1|null }} InternalPlanCoordinateTransferOptions
 */
 
-/**
- * Internal normalized selector shape after validation.
- *
- * @typedef {Object} NormalizedSelector
- * @property {string} kind - Selector kind (live, strand, strand_base, coordinate)
- * @property {number|null} [ceiling] - Optional lamport ceiling
- * @property {string} [strandId] - Strand identifier (for strand/strand_base kinds)
- * @property {Record} [frontier] - Frontier record (for coordinate kind)
+/**
+ * NormalizedSelector — base class for validated comparison selectors.
+ * Each subclass implements `resolve()` with the resolution logic for
+ * its kind, eliminating dispatch switches.
  */
+class NormalizedSelector {
+  /** @type {string} */
+  kind;
+
+  /** @type {number|null} */
+  ceiling;
+
+  /**
+   * Creates a NormalizedSelector.
+ * @param {string} kind + * @param {number|null} ceiling + */ + constructor(kind, ceiling) { + this.kind = kind; + this.ceiling = ceiling; + } + + /** + * Resolves this selector into a ResolvedComparisonSide. + * @param {import('../WarpRuntime.js').default} _graph + * @param {VisibleStateScopeV1|null} _scope + * @param {Map|null} _liveFrontier + * @returns {Promise} + */ + resolve(_graph, _scope, _liveFrontier) { + throw new QueryError(`NormalizedSelector.resolve() must be overridden by ${this.kind} subclass`, { code: 'invalid_coordinate' }); + } +} + +/** Live frontier selector. */ +class LiveSelector extends NormalizedSelector { + /** Creates a LiveSelector. + * @param {number|null} ceiling + */ + constructor(ceiling) { + super('live', ceiling); + } + + /** Resolves live frontier to a comparison side. @param {import('../WarpRuntime.js').default} graph @param {VisibleStateScopeV1|null} scope @param {Map|null} liveFrontier @returns {Promise} */ + async resolve(graph, scope, liveFrontier) { + const requestedFrontier = liveFrontier ?? /** @type {Map} */ (await graph.getFrontier()); + const requestedRecord = normalizeFrontierRecord(requestedFrontier, 'live.frontier'); + const state = await graph.materializeCoordinate({ + frontier: frontierRecordToMap(requestedRecord), + ...optionalCeiling(this.ceiling), + }); + const patchEntries = await collectPatchEntriesForFrontier(graph, requestedRecord, this.ceiling); + return await finalizeSide(graph, { + requested: { kind: 'live', ...optionalCeiling(this.ceiling) }, + state, patchEntries, coordinateKind: 'frontier', lamportCeiling: this.ceiling, + }, scope); + } +} + +/** Explicit coordinate (frontier) selector. */ +class CoordinateSelector extends NormalizedSelector { + /** @type {Record} */ + frontier; + + /** Creates a CoordinateSelector. 
+   * @param {Record} frontier
+   * @param {number|null} ceiling
+   */
+  constructor(frontier, ceiling) {
+    super('coordinate', ceiling);
+    this.frontier = frontier;
+  }
+
+  /** Resolves explicit coordinate frontier to a comparison side. @param {import('../WarpRuntime.js').default} graph @param {VisibleStateScopeV1|null} scope @returns {Promise} */
+  async resolve(graph, scope) {
+    const state = await graph.materializeCoordinate({
+      frontier: frontierRecordToMap(this.frontier),
+      ...optionalCeiling(this.ceiling),
+    });
+    const patchEntries = await collectPatchEntriesForFrontier(graph, this.frontier, this.ceiling);
+    return await finalizeSide(graph, {
+      requested: { ...buildCoordinateRequest(this.frontier, this.ceiling), kind: 'coordinate' },
+      state, patchEntries, coordinateKind: 'frontier', lamportCeiling: this.ceiling,
+    }, scope);
+  }
+}
+
+/** Strand overlay selector. */
+class StrandSelector extends NormalizedSelector {
+  /** @type {string} */
+  strandId;
+
+  /** Creates a StrandSelector.
+   * @param {string} strandId
+   * @param {number|null} ceiling
+   */
+  constructor(strandId, ceiling) {
+    super('strand', ceiling);
+    this.strandId = strandId;
+  }
+
+  /** Resolves strand overlay to a comparison side. @param {import('../WarpRuntime.js').default} graph @param {VisibleStateScopeV1|null} scope @returns {Promise} */
+  async resolve(graph, scope) {
+    const strands = new StrandService({ graph });
+    const descriptor = await strands.getOrThrow(this.strandId);
+    const state = /** @type {WarpStateV5} */ (await callInternalRuntimeMethod(
+      graph, 'materializeStrand', this.strandId,
+      this.ceiling === null ? undefined : { ceiling: this.ceiling },
+    ));
+    const patchEntries = await strands.getPatchEntries(
+      this.strandId, this.ceiling === null ? undefined : { ceiling: this.ceiling },
+    );
+    return await finalizeSide(graph, {
+      requested: { kind: 'strand', strandId: this.strandId, ...optionalCeiling(this.ceiling) },
+      state, patchEntries, coordinateKind: 'strand', lamportCeiling: this.ceiling,
+      strand: buildStrandMetadata(this.strandId, descriptor),
+    }, scope);
+  }
+}
+
+/** Strand base observation selector. */
+class StrandBaseSelector extends NormalizedSelector {
+  /** @type {string} */
+  strandId;
+
+  /** Creates a StrandBaseSelector.
+   * @param {string} strandId
+   * @param {number|null} ceiling
+   */
+  constructor(strandId, ceiling) {
+    super('strand_base', ceiling);
+    this.strandId = strandId;
+  }
+
+  /** Resolves strand base observation to a comparison side. @param {import('../WarpRuntime.js').default} graph @param {VisibleStateScopeV1|null} scope @returns {Promise} */
+  async resolve(graph, scope) {
+    const strands = new StrandService({ graph });
+    const descriptor = await strands.getOrThrow(this.strandId);
+    const effectiveCeiling = combineCeilings(descriptor.baseObservation.lamportCeiling, this.ceiling);
+    const state = await graph.materializeCoordinate({
+      frontier: descriptor.baseObservation.frontier,
+      ...optionalCeiling(effectiveCeiling),
+    });
+    const patchEntries = await collectPatchEntriesForFrontier(graph, descriptor.baseObservation.frontier, effectiveCeiling);
+    return await finalizeSide(graph, {
+      requested: {
+        kind: 'strand_base', strandId: this.strandId,
+        frontier: { ...descriptor.baseObservation.frontier },
+        baseLamportCeiling: descriptor.baseObservation.lamportCeiling,
+        ...optionalCeiling(this.ceiling),
+      },
+      state, patchEntries, coordinateKind: 'strand_base', lamportCeiling: effectiveCeiling,
+      strand: buildStrandMetadata(this.strandId, /** @type {StrandDescriptorV1} */ (descriptor)),
+    }, scope);
+  }
+}
 
 /**
- * Resolved comparison side with state, entries, and metadata.
- * - * @typedef {Object} ResolvedComparisonSide - * @property {Record} requested - Original requested selector - * @property {import('../services/JoinReducer.js').WarpStateV5} state - Materialized state - * @property {Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>} patchEntries - Patch entries - * @property {Record} resolved - Resolved metadata with digests + * ResolvedComparisonSide — materialized state + metadata for one side of a comparison. */ +class ResolvedComparisonSide { + /** @type {Record} Original requested selector */ + requested; + + /** @type {Record} Resolved metadata with digests */ + resolved; + + /** @type {WarpStateV5} Materialized state */ + state; + + /** @type {Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>} */ + patchEntries; + + /** + * Creates a ResolvedComparisonSide. + * @param {{ requested: Record, state: WarpStateV5, patchEntries: Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>, resolved: Record }} fields + */ + constructor({ requested, state, patchEntries, resolved }) { + this.requested = requested; + this.resolved = resolved; + this.state = state; + this.patchEntries = patchEntries; + } +} /** * Deterministically compares two strings. @@ -422,7 +582,7 @@ function buildTargetDivergence(leftEntries, rightEntries, targetId) { * @param {string|null} targetId * @returns {Record} */ -export function buildPatchDivergence(leftEntries, rightEntries, targetId) { +function buildPatchDivergenceImpl(leftEntries, rightEntries, targetId) { const leftShas = uniqueSortedPatchShas(leftEntries); const rightShas = uniqueSortedPatchShas(rightEntries); const rightSet = new Set(rightShas); @@ -491,22 +651,19 @@ async function collectPatchEntriesForFrontier(graph, frontierRecord, ceiling) { * @param {string} field - Field name for error context * @returns {Record} */ +/** + * Normalizes a raw selector into a NormalizedSelector. 
+ * @param {Record} selector + * @param {string} field + * @returns {NormalizedSelector} + */ function normalizeSelector(selector, field) { const raw = /** @type {Record} */ (selector); const kind = extractSelectorKind(raw); - /** @type {Record, f: string) => Record>} */ - const handlers = { - live: normalizeLiveSelector, - coordinate: normalizeCoordinateSelector, - }; - const handler = handlers[kind]; - if (handler !== undefined) { - return handler(raw, field); - } - if (kind === 'strand' || kind === 'strand_base') { - return normalizeStrandSelector(raw, kind, field); - } + if (kind === 'live') { return normalizeLiveSelector(raw, field); } + if (kind === 'coordinate') { return normalizeCoordinateSelector(raw, field); } + if (kind === 'strand' || kind === 'strand_base') { return normalizeStrandSelector(raw, kind, field); } throw new QueryError(`${field}.kind is unsupported`, { code: 'invalid_coordinate', context: { field, kind } }); } @@ -528,43 +685,46 @@ function extractSelectorKind(raw) { * @param {string} field - Field name for error context * @returns {Record} */ +/** + * Normalizes a 'live' selector. + * @param {Record} raw + * @param {string} field + * @returns {LiveSelector} + */ function normalizeLiveSelector(raw, field) { const r = /** @type {{ ceiling?: unknown }} */ (raw); - return { kind: 'live', ceiling: normalizeLamportCeiling(r.ceiling, `${field}.ceiling`) }; + return new LiveSelector(normalizeLamportCeiling(r.ceiling, `${field}.ceiling`)); } /** - * Normalizes a 'strand' or 'strand_base' kind selector. - * - * @param {Record} raw - Parsed selector record - * @param {string} kind - The selector kind - * @param {string} field - Field name for error context - * @returns {Record} + * Normalizes a 'strand' or 'strand_base' selector. 
+ * @param {Record} raw
+ * @param {string} kind
+ * @param {string} field
+ * @returns {StrandSelector|StrandBaseSelector}
  */
 function normalizeStrandSelector(raw, kind, field) {
   const r = /** @type {{ strandId?: unknown, ceiling?: unknown }} */ (raw);
-  return {
-    kind,
-    strandId: normalizeRequiredString(r.strandId, `${field}.strandId`),
-    ceiling: normalizeLamportCeiling(r.ceiling, `${field}.ceiling`),
-  };
+  const strandId = normalizeRequiredString(r.strandId, `${field}.strandId`);
+  const ceiling = normalizeLamportCeiling(r.ceiling, `${field}.ceiling`);
+  return kind === 'strand_base'
+    ? new StrandBaseSelector(strandId, ceiling)
+    : new StrandSelector(strandId, ceiling);
 }
 
 /**
- * Normalizes a 'coordinate' kind selector.
- *
- * @param {Record} raw - Parsed selector record
- * @param {string} field - Field name for error context
- * @returns {Record}
+ * Normalizes a 'coordinate' selector.
+ * @param {Record} raw
+ * @param {string} field
+ * @returns {CoordinateSelector}
  */
 function normalizeCoordinateSelector(raw, field) {
   const r = /** @type {{ frontier?: unknown, ceiling?: unknown }} */ (raw);
   const f = /** @type {Map|Record} */ (r.frontier);
-  return {
-    kind: 'coordinate',
-    frontier: normalizeFrontierRecord(f, `${field}.frontier`),
-    ceiling: normalizeLamportCeiling(r.ceiling, `${field}.ceiling`),
-  };
+  return new CoordinateSelector(
+    normalizeFrontierRecord(f, `${field}.frontier`),
+    normalizeLamportCeiling(r.ceiling, `${field}.ceiling`),
+  );
 }
 
 /**
@@ -607,16 +767,16 @@ function buildStrandMetadata(strandId, descriptor) {
  * @param {import('../WarpRuntime.js').default} graph
  * @param {{
  *   requested: Record,
- *   state: import('../services/JoinReducer.js').WarpStateV5,
+ *   state: WarpStateV5,
  *   patchEntries: Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>,
 *   coordinateKind: 'frontier'|'strand'|'strand_base',
 *   lamportCeiling: number|null,
 *   strand?: Record
 * }} params
 * @param {VisibleStateScopeV1|null} scope
- * @returns {Promise>}
+ * @returns {Promise}
  */
-async function finalizeComparisonSide(graph, params, scope) {
+async function finalizeSide(graph, params, scope) {
   const { requested, state, patchEntries, coordinateKind, lamportCeiling, strand } = params;
   const scopedState = scopeMaterializedStateV5(state, scope);
   const scopedPatchEntries = scopePatchEntriesV1(patchEntries, scope);
@@ -627,7 +787,7 @@ async function finalizeComparisonSide(graph, params, scope) {
   const stateHash = await computeStateHashV5(scopedState, { crypto: graph._crypto, codec: graph._codec });
   const patchShas = uniqueSortedPatchShas(scopedPatchEntries);
 
-  return {
+  return new ResolvedComparisonSide({
     requested,
     state: scopedState,
     patchEntries: scopedPatchEntries,
@@ -643,156 +803,9 @@ async function finalizeComparisonSide(graph, params, scope) {
     summary: summarizeVisibleState(reader, scopedPatchEntries.length),
     ...(strand !== undefined ? { strand } : {}),
-  };
-}
-
-/**
- * Resolves the 'live' coordinate side.
- *
- * @param {import('../WarpRuntime.js').default} graph
- * @param {NormalizedSelector} selector
- * @param {VisibleStateScopeV1|null} scope
- * @returns {Promise>}
- * @private
- */
-async function resolveLiveComparisonSide(graph, selector, scope) {
-  const ceiling = selector.ceiling ?? null;
-  const requestedFrontier = /** @type {Map} */ (await graph.getFrontier());
-  const requestedRecord = normalizeFrontierRecord(requestedFrontier, 'live.frontier');
-  const state = await graph.materializeCoordinate({
-    frontier: frontierRecordToMap(requestedRecord),
-    ...optionalCeiling(ceiling),
-  });
-  const patchEntries = await collectPatchEntriesForFrontier(graph, requestedRecord, ceiling);
-  return await finalizeComparisonSide(graph, {
-    requested: { kind: 'live', ...optionalCeiling(ceiling) },
-    state,
-    patchEntries,
-    coordinateKind: 'frontier',
-    lamportCeiling: ceiling,
-  }, scope);
-}
-
-/**
- * Resolves an explicit 'coordinate' side.
- * - * @param {import('../WarpRuntime.js').default} graph - * @param {NormalizedSelector} selector - * @param {VisibleStateScopeV1|null} scope - * @returns {Promise>} - * @private - */ -async function resolveCoordinateComparisonSide(graph, selector, scope) { - const ceiling = selector.ceiling ?? null; - const frontier = /** @type {Record} */ (selector.frontier ?? {}); - const state = await graph.materializeCoordinate({ - frontier: frontierRecordToMap(frontier), - ...optionalCeiling(ceiling), }); - const patchEntries = await collectPatchEntriesForFrontier(graph, frontier, ceiling); - return await finalizeComparisonSide(graph, { - requested: { ...buildCoordinateRequest(frontier, ceiling), kind: 'coordinate' }, - state, - patchEntries, - coordinateKind: 'frontier', - lamportCeiling: ceiling, - }, scope); } -/** - * Resolves a 'strand' coordinate side. - * - * @param {import('../WarpRuntime.js').default} graph - * @param {NormalizedSelector} selector - * @param {VisibleStateScopeV1|null} scope - * @returns {Promise>} - * @private - */ -async function resolveStrandComparisonSide(graph, selector, scope) { - const ceiling = selector.ceiling ?? null; - const strandId = /** @type {string} */ (selector.strandId ?? ''); - const strands = new StrandService({ graph }); - const descriptor = await strands.getOrThrow(strandId); - const state = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (await callInternalRuntimeMethod( - graph, - 'materializeStrand', - strandId, - ceiling === null ? undefined : { ceiling }, - )); - const patchEntries = await strands.getPatchEntries( - strandId, - ceiling === null ? undefined : { ceiling }, - ); - return await finalizeComparisonSide(graph, { - requested: { kind: 'strand', strandId, ...optionalCeiling(ceiling) }, - state, - patchEntries, - coordinateKind: 'strand', - lamportCeiling: ceiling, - strand: buildStrandMetadata(strandId, descriptor), - }, scope); -} - -/** - * Resolves a 'strand_base' coordinate side. 
- * - * @param {import('../WarpRuntime.js').default} graph - * @param {NormalizedSelector} selector - * @param {VisibleStateScopeV1|null} scope - * @returns {Promise>} - * @private - */ -async function resolveStrandBaseComparisonSide(graph, selector, scope) { - const ceiling = selector.ceiling ?? null; - const strandId = /** @type {string} */ (selector.strandId ?? ''); - const strands = new StrandService({ graph }); - const descriptor = await strands.getOrThrow(strandId); - const effectiveCeiling = combineCeilings(descriptor.baseObservation.lamportCeiling, ceiling); - const state = await graph.materializeCoordinate({ - frontier: descriptor.baseObservation.frontier, - ...optionalCeiling(effectiveCeiling), - }); - const patchEntries = await collectPatchEntriesForFrontier(graph, descriptor.baseObservation.frontier, effectiveCeiling); - return await finalizeComparisonSide(graph, { - requested: { - kind: 'strand_base', - strandId, - frontier: { ...descriptor.baseObservation.frontier }, - baseLamportCeiling: descriptor.baseObservation.lamportCeiling, - ...optionalCeiling(ceiling), - }, - state, - patchEntries, - coordinateKind: 'strand_base', - lamportCeiling: effectiveCeiling, - strand: buildStrandMetadata(strandId, /** @type {StrandDescriptorV1} */ (descriptor)), - }, scope); -} - -/** - * Dispatches coordinate side resolution based on selector kind. 
- *
- * @this {import('../WarpRuntime.js').default}
- * @param {NormalizedSelector} selector
- * @param {VisibleStateScopeV1|null} scope
- * @returns {Promise>}
- * @private
- */
-async function resolveComparisonSide(selector, scope = null) {
-  if (selector.kind === 'live') {
-    return await resolveLiveComparisonSide(this, selector, scope);
-  }
-
-  if (selector.kind === 'coordinate') {
-    return await resolveCoordinateComparisonSide(this, selector, scope);
-  }
-
-  if (selector.kind === 'strand') {
-    return await resolveStrandComparisonSide(this, selector, scope);
-  }
-
-  return await resolveStrandBaseComparisonSide(this, selector, scope);
-}
 
 /**
  * Checks whether a value is a strand-shaped object with kind 'strand'.
@@ -832,7 +845,7 @@ function normalizeAgainstSelector(normalizedStrandId, against, againstCeiling) {
  * Compares a strand against its base observation, the live frontier, or
  * another strand.
  *
- * @this {import('../WarpRuntime.js').default}
+ * @param {import('../WarpRuntime.js').default} graph
 * @param {string} strandId
 * @param {{
 *   against?: 'base'|'live'|{ kind: 'strand', strandId: string },
 *   ceiling?: number|null,
 *   againstCeiling?: number|null,
 *   targetId?: string|null,
 *   scope?: VisibleStateScopeV1|null
 * }} [options]
 * @returns {Promise}
 */
-export async function compareStrand(strandId, options = {}) {
+async function compareStrandImpl(graph, strandId, options = {}) {
+  assertOptionsObject(options, 'compareStrand()');
   const normalizedStrandId = normalizeRequiredString(strandId, 'strandId');
   const ceiling = normalizeLamportCeiling(options.ceiling, 'ceiling');
   const againstCeiling = normalizeLamportCeiling(options.againstCeiling, 'againstCeiling');
@@ -853,7 +867,7 @@
   const left = { kind: 'strand', strandId: normalizedStrandId, ceiling };
   const right = normalizeAgainstSelector(normalizedStrandId, options.against ?? 'base', againstCeiling);
 
-  return await this.compareCoordinates({
+  return await compareCoordinatesImpl(graph, {
     left: /** @type {CoordinateComparisonSelectorV1} */ (left),
     right: /** @type {CoordinateComparisonSelectorV1} */ (right),
     targetId,
@@ -906,7 +920,7 @@ function normalizeIntoSelector(normalizedStrandId, into, intoCeiling) {
  * Plans a deterministic transfer from one strand into live truth, its
  * pinned base observation, or another strand.
  *
- * @this {import('../WarpRuntime.js').default}
+ * @param {import('../WarpRuntime.js').default} graph
 * @param {string} strandId
 * @param {{
 *   into?: 'base'|'live'|{ kind: 'strand', strandId: string },
 *   ceiling?: number|null,
 *   intoCeiling?: number|null,
 *   scope?: VisibleStateScopeV1|null
 * }} [options]
 * @returns {Promise}
 */
-export async function planStrandTransfer(strandId, options = {}) {
+async function planStrandTransferImpl(graph, strandId, options = {}) {
+  assertOptionsObject(options, 'planStrandTransfer()');
   const normalizedStrandId = normalizeRequiredString(strandId, 'strandId');
   const ceiling = normalizeLamportCeiling(options.ceiling, 'ceiling');
   const intoCeiling = normalizeLamportCeiling(options.intoCeiling, 'intoCeiling');
@@ -925,7 +940,7 @@
   const source = { kind: 'strand', strandId: normalizedStrandId, ceiling };
   const target = normalizeIntoSelector(normalizedStrandId, options.into ?? 'live', intoCeiling);
 
-  return await this.planCoordinateTransfer({
+  return await planCoordinateTransferImpl(graph, {
     source: /** @type {CoordinateTransferPlanSelectorV1} */ (source),
     target: /** @type {CoordinateTransferPlanSelectorV1} */ (target),
     ...(scope ? { scope } : {}),
@@ -938,6 +953,23 @@
  * @param {unknown} options
  * @returns {void}
  */
+/**
+ * Asserts that an options argument is a plain object (not null, array, or primitive).
+ * @param {unknown} options
+ * @param {string} callerName
+ * @returns {void}
+ */
+function assertOptionsObject(options, callerName) {
+  if (options !== undefined && (options === null || typeof options !== 'object' || Array.isArray(options))) {
+    throw new QueryError(`${callerName} options must be an object`, { code: 'invalid_coordinate' });
+  }
+}
+
+/**
+ * Asserts that transfer options are valid.
+ * @param {unknown} options
+ * @returns {void}
+ */
 function assertTransferOptions(options) {
   const isInvalid = options === null || options === undefined || typeof options !== 'object' || Array.isArray(options);
   if (isInvalid) {
@@ -988,7 +1020,7 @@ async function finalizeTransferPlan(params) {
 /**
  * Plans a deterministic transfer between two substrate observation selectors.
  *
- * @this {import('../WarpRuntime.js').default}
+ * @param {import('../WarpRuntime.js').default} graph
 * @param {{
 *   source: Record,
 *   target: Record,
 *   scope?: VisibleStateScopeV1|null
 * }} options
 * @returns {Promise}
 */
-export async function planCoordinateTransfer(options) {
+async function planCoordinateTransferImpl(graph, options) {
   assertTransferOptions(options);
   const normalizedSource = /** @type {NormalizedSelector} */ (normalizeSelector(options.source, 'source'));
   const normalizedTarget = /** @type {NormalizedSelector} */ (normalizeSelector(options.target, 'target'));
   const scope = normalizeVisibleStateScopeV1(options.scope, 'scope');
 
-  const comp = await this.compareCoordinates({
+  // Capture frontier once for consistency across comparison + transfer plan
+  const liveFrontier = (normalizedSource.kind === 'live' || normalizedTarget.kind === 'live')
+    ? /** @type {Map} */ (await graph.getFrontier())
+    : null;
+  const comp = await compareCoordinatesImpl(graph, {
     left: /** @type {CoordinateComparisonSelectorV1} */ (/** @type {unknown} */ (normalizedSource)),
     right: /** @type {CoordinateComparisonSelectorV1} */ (/** @type {unknown} */ (normalizedTarget)),
     ...(scope !== null && scope !== undefined ? { scope } : {}),
   });
-  const sourceSide = /** @type {ResolvedComparisonSide} */ (await resolveComparisonSide.call(this, normalizedSource, scope));
-  const targetSide = /** @type {ResolvedComparisonSide} */ (await resolveComparisonSide.call(this, normalizedTarget, scope));
+  const sourceSide = await normalizedSource.resolve(graph, scope, liveFrontier);
+  const targetSide = await normalizedTarget.resolve(graph, scope, liveFrontier);
 
   /** Loads node content blob by OID. @type {(nodeId: string, meta: { oid: string }) => Promise} */
-  const loadNodeContent = async (_nodeId, meta) => await readContentBlobByOid(this, meta.oid);
+  const loadNodeContent = async (_nodeId, meta) => await readContentBlobByOid(graph, meta.oid);
   /** Loads edge content blob by OID. @type {(edge: unknown, meta: { oid: string }) => Promise} */
-  const loadEdgeContent = async (_edge, meta) => await readContentBlobByOid(this, meta.oid);
+  const loadEdgeContent = async (_edge, meta) => await readContentBlobByOid(graph, meta.oid);
 
   const transfer = await planVisibleStateTransferV5(createStateReaderV5(sourceSide.state), createStateReaderV5(targetSide.state), {
     loadNodeContent,
     loadEdgeContent,
   });
 
-  return await finalizeTransferPlan({ graph: this, sourceSide, targetSide, transfer, comparisonDigest: comp.comparisonDigest, scope });
+  return await finalizeTransferPlan({ graph, sourceSide, targetSide, transfer, comparisonDigest: comp.comparisonDigest, scope });
 }
 
 /**
@@ -1057,7 +1093,7 @@ function assertComparisonOptions(options) {
 /**
  * Compares two substrate observation selectors.
* - * @this {import('../WarpRuntime.js').default} + * @param {import('../WarpRuntime.js').default} graph * @param {{ * left: Record, * right: Record, @@ -1066,12 +1102,16 @@ function assertComparisonOptions(options) { * }} options * @returns {Promise} */ -export async function compareCoordinates(options) { +async function compareCoordinatesImpl(graph, options) { const { normalizedLeft, normalizedRight, targetId, scope } = extractComparisonInputs(options); - const left = /** @type {ResolvedComparisonSide} */ (await resolveComparisonSide.call(this, normalizedLeft, scope)); - const right = /** @type {ResolvedComparisonSide} */ (await resolveComparisonSide.call(this, normalizedRight, scope)); - const visiblePatchDivergence = buildPatchDivergence(left.patchEntries, right.patchEntries, targetId); + // Capture the live frontier ONCE so both sides see the same snapshot + const liveFrontier = (normalizedLeft.kind === 'live' || normalizedRight.kind === 'live') + ? /** @type {Map} */ (await graph.getFrontier()) + : null; + const left = await normalizedLeft.resolve(graph, scope, liveFrontier); + const right = await normalizedRight.resolve(graph, scope, liveFrontier); + const visiblePatchDivergence = buildPatchDivergenceImpl(left.patchEntries, right.patchEntries, targetId); const visibleState = compareVisibleStateV5(left.state, right.state, { targetId }); const fact = buildCoordinateComparisonFact({ @@ -1082,7 +1122,77 @@ export async function compareCoordinates(options) { visiblePatchDivergence, visibleState, }); - const digest = await computeChecksum(/** @type {Record} */ (/** @type {unknown} */ (fact)), this._crypto); + const digest = await computeChecksum(/** @type {Record} */ (/** @type {unknown} */ (fact)), graph._crypto); return /** @type {CoordinateComparisonV1} */ ({ ...fact, comparisonDigest: digest }); } + +// ── Controller class ────────────────────────────────────────────────────────── + +/** + * The host interface that ComparisonController depends on. 
+ * + + */ + +export default class ComparisonController { + /** @type {ComparisonHost} */ + _host; + + /** + * Creates a ComparisonController bound to a WarpRuntime host. + * @param {ComparisonHost} host + */ + constructor(host) { + this._host = host; + } + + /** + * Builds a deterministic patch divergence analysis between two sets of patch entries. + * @param {Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>} leftEntries + * @param {Array<{ patch: import('../types/WarpTypesV2.js').PatchV2, sha: string }>} rightEntries + * @param {string|null} [targetId] + * @returns {Record} + */ + buildPatchDivergence(leftEntries, rightEntries, targetId) { + return buildPatchDivergenceImpl(leftEntries, rightEntries, targetId ?? null); + } + + /** + * Compares a strand against its base, live truth, or another strand. + * @param {string} strandId + * @param {Record} [options] + * @returns {Promise} + */ + async compareStrand(strandId, options = {}) { + return await compareStrandImpl(this._host, strandId, options); + } + + /** + * Plans a transfer from one strand into another observation point. + * @param {string} strandId + * @param {Record} [options] + * @returns {Promise} + */ + async planStrandTransfer(strandId, options = {}) { + return await planStrandTransferImpl(this._host, strandId, options); + } + + /** + * Plans a deterministic transfer between two substrate observation selectors. + * @param {{ source: Record, target: Record, scope?: VisibleStateScopeV1|null }} options + * @returns {Promise} + */ + async planCoordinateTransfer(options) { + return await planCoordinateTransferImpl(this._host, options); + } + + /** + * Compares two substrate observation selectors. 
+ * @param {{ left: Record, right: Record, targetId?: string|null, scope?: VisibleStateScopeV1|null }} options + * @returns {Promise} + */ + async compareCoordinates(options) { + return await compareCoordinatesImpl(this._host, options); + } +} diff --git a/src/domain/services/ConflictAnalyzerService.js b/src/domain/services/ConflictAnalyzerService.js index 5e9a91ee..5da87409 100644 --- a/src/domain/services/ConflictAnalyzerService.js +++ b/src/domain/services/ConflictAnalyzerService.js @@ -9,14 +9,16 @@ */ import QueryError from '../errors/QueryError.js'; -import { reduceV5, normalizeRawOp } from './JoinReducer.js'; +import { reduceV5, normalizeRawOp, OP_STRATEGIES } from './JoinReducer.js'; import { canonicalStringify } from '../utils/canonicalStringify.js'; import { createEventId } from '../utils/EventId.js'; import { decodeEdgeKey } from './KeyCodec.js'; import StrandService from './StrandService.js'; + +/** @import { PatchV2 } from '../types/WarpTypesV2.js' */ /** @typedef {import('../WarpRuntime.js').default} WarpRuntime */ -/** @typedef {import('../types/WarpTypesV2.js').PatchV2} PatchV2 */ + /** @typedef {import('../types/TickReceipt.js').TickReceipt} TickReceipt */ /** @typedef {import('../utils/EventId.js').EventId} EventId */ @@ -32,20 +34,15 @@ const VALID_TARGET_KINDS = new Set(['node', 'edge', 'node_property', 'edge_prope const TARGET_SELECTOR_FIELDS = ['entityId', 'propertyKey', 'from', 'to', 'label']; /** - * Receipt op type mapping. Kept local so the analyzer can interpret canonical ops - * without depending on JoinReducer internals that are not part of the public API. + * Resolves a canonical op type to its TickReceipt-compatible name via OP_STRATEGIES. + * Returns undefined for unknown/forward-compatible op types. 
+ * @param {string} opType
+ * @returns {string|undefined}
  */
-/** @type {Readonly>} */
-const RECEIPT_OP_TYPE = Object.freeze({
-  NodeAdd: 'NodeAdd',
-  NodeRemove: 'NodeTombstone',
-  EdgeAdd: 'EdgeAdd',
-  EdgeRemove: 'EdgeTombstone',
-  PropSet: 'PropSet',
-  NodePropSet: 'NodePropSet',
-  EdgePropSet: 'EdgePropSet',
-  BlobValue: 'BlobValue',
-});
+function receiptNameForOp(opType) {
+  const strategy = OP_STRATEGIES.get(opType);
+  return strategy !== undefined ? strategy.receiptName : undefined;
+}
 
 const CLASSIFICATION_NOTES = Object.freeze({
   RECEIPT_SUPERSEDED: 'receipt_superseded',
@@ -950,6 +947,8 @@ function normalizeEffectPayload(_target, opType, canonOp) {
     EdgeAdd: () => ({ dot: canonOp['dot'] ?? null }),
     /** Extracts observed dots from an EdgeTombstone operation. */
     EdgeTombstone: () => ({ observedDots: normalizeObservedDots(canonOp['observedDots']) }),
+    /** Extracts the value from a PropSet operation (legacy raw type). */
+    PropSet: () => ({ value: canonOp['value'] ?? null }),
     /** Extracts the value from a NodePropSet operation. */
     NodePropSet: () => ({ value: canonOp['value'] ?? null }),
     /** Extracts the value from an EdgePropSet operation. */
@@ -1517,7 +1516,7 @@
 async function analyzeOneOp(service, { frame, opIndex, receiptOpIndex, receipt, diagnostics }) {
   const rawOp = /** @type {import('../types/WarpTypesV2.js').RawOpV2 | {type: string}} */ (frame.patch.ops[opIndex]);
   const canonOp = cloneObject(/** @type {Record} */ (normalizeRawOp(rawOp)));
-  const receiptOpType = RECEIPT_OP_TYPE[/** @type {string} */ (canonOp['type'])];
+  const receiptOpType = receiptNameForOp(/** @type {string} */ (canonOp['type']));
   if (typeof receiptOpType !== 'string' || receiptOpType.length === 0) {
     return null;
   }
diff --git a/src/domain/services/ForkController.js b/src/domain/services/ForkController.js
new file mode 100644
index 00000000..a3bb0256
--- /dev/null
+++ b/src/domain/services/ForkController.js
@@ -0,0 +1,292 @@
+/**
+ * ForkController — fork creation, wormhole compression, and
+ * backfill-rejection helpers.
+ *
+ * Extracted from fork.methods.js.
+ *
+ * @module domain/services/ForkController
+ */
+
+import ForkError from '../errors/ForkError.js';
+import { validateGraphName, validateWriterId, buildWriterRef, buildWritersPrefix } from '../utils/RefLayout.js';
+import { generateWriterId } from '../utils/WriterId.js';
+import { createWormhole as createWormholeImpl } from './WormholeService.js';
+
+
+/** @import { default as ForkHost } from '../WarpRuntime.js' */
+const DEFAULT_ADJACENCY_CACHE_SIZE = 3;
+
+/**
+ * The host interface that ForkController depends on.
+ *
+
+ */
+
+export default class ForkController {
+  /** @type {ForkHost} */
+  _host;
+
+  /**
+   * Creates a ForkController bound to a WarpRuntime host.
+   * @param {ForkHost} host
+   */
+  constructor(host) {
+    this._host = host;
+  }
+
+  /**
+   * Creates a fork of this graph at a specific point in a writer's history.
+ * + * @param {{ from: string, at: string, forkName?: string, forkWriterId?: string }} options + * @returns {Promise} + */ + async fork({ from, at, forkName, forkWriterId }) { + const host = this._host; + const t0 = host._clock.now(); + + try { + if (!from || typeof from !== 'string') { + throw new ForkError("Required parameter 'from' is missing or not a string", { + code: 'E_FORK_INVALID_ARGS', + context: { from }, + }); + } + + if (!at || typeof at !== 'string') { + throw new ForkError("Required parameter 'at' is missing or not a string", { + code: 'E_FORK_INVALID_ARGS', + context: { at }, + }); + } + + const writers = await host.discoverWriters(); + if (!writers.includes(from)) { + throw new ForkError(`Writer '${from}' does not exist in graph '${host._graphName}'`, { + code: 'E_FORK_WRITER_NOT_FOUND', + context: { writerId: from, graphName: host._graphName, existingWriters: writers }, + }); + } + + const nodeExists = await host._persistence.nodeExists(at); + if (!nodeExists) { + throw new ForkError(`Patch SHA '${at}' does not exist`, { + code: 'E_FORK_PATCH_NOT_FOUND', + context: { patchSha: at, writerId: from }, + }); + } + + const writerRef = buildWriterRef(host._graphName, from); + const tipSha = await host._persistence.readRef(writerRef); + + if (tipSha === null || tipSha === undefined || tipSha === '') { + throw new ForkError(`Writer '${from}' has no commits`, { + code: 'E_FORK_WRITER_NOT_FOUND', + context: { writerId: from }, + }); + } + + const isInChain = await this._isAncestor(at, tipSha); + if (!isInChain) { + throw new ForkError(`Patch SHA '${at}' is not in writer '${from}' chain`, { + code: 'E_FORK_PATCH_NOT_IN_CHAIN', + context: { patchSha: at, writerId: from, tipSha }, + }); + } + + const resolvedForkName = + forkName ?? 
`${host._graphName}-fork-${Math.random().toString(36).slice(2, 10).padEnd(8, '0')}`; + try { + validateGraphName(resolvedForkName); + } catch (err) { + throw new ForkError(`Invalid fork name: ${/** @type {Error} */ (err).message}`, { + code: 'E_FORK_NAME_INVALID', + context: { forkName: resolvedForkName, originalError: /** @type {Error} */ (err).message }, + }); + } + + const forkWritersPrefix = buildWritersPrefix(resolvedForkName); + const existingForkRefs = await host._persistence.listRefs(forkWritersPrefix); + if (existingForkRefs.length > 0) { + throw new ForkError(`Graph '${resolvedForkName}' already exists`, { + code: 'E_FORK_ALREADY_EXISTS', + context: { forkName: resolvedForkName, existingRefs: existingForkRefs }, + }); + } + + const resolvedForkWriterId = (forkWriterId !== undefined && forkWriterId !== null && forkWriterId !== '') ? forkWriterId : generateWriterId(); + try { + validateWriterId(resolvedForkWriterId); + } catch (err) { + throw new ForkError(`Invalid fork writer ID: ${/** @type {Error} */ (err).message}`, { + code: 'E_FORK_WRITER_ID_INVALID', + context: { forkWriterId: resolvedForkWriterId, originalError: /** @type {Error} */ (err).message }, + }); + } + + const forkWriterRef = buildWriterRef(resolvedForkName, resolvedForkWriterId); + await host._persistence.updateRef(forkWriterRef, at); + + // Dynamic import to avoid circular dependency + const { default: WarpRuntime } = await import('../WarpRuntime.js'); + + /** @type {import('../WarpRuntime.js').default} */ + let forkGraph; + try { + forkGraph = await WarpRuntime.open({ + persistence: host._persistence, + graphName: resolvedForkName, + writerId: resolvedForkWriterId, + gcPolicy: host._gcPolicy, + adjacencyCacheSize: host._adjacencyCache?.maxSize ?? DEFAULT_ADJACENCY_CACHE_SIZE, + ...(host._checkpointPolicy ? { checkpointPolicy: host._checkpointPolicy } : {}), + autoMaterialize: host._autoMaterialize, + onDeleteWithData: host._onDeleteWithData, + ...(host._logger ? 
{ logger: host._logger } : {}), + clock: host._clock, + crypto: host._crypto, + codec: host._codec, + }); + } catch (openErr) { + // Rollback: delete the ref we just created to avoid a dangling fork + try { + await host._persistence.deleteRef(forkWriterRef); + } catch { + // Best-effort rollback — log but don't mask the original error + } + throw openErr; + } + + host._logTiming('fork', t0, { + metrics: `from=${from} at=${at.slice(0, 7)} name=${resolvedForkName}`, + }); + + return forkGraph; + } catch (err) { + host._logTiming('fork', t0, { error: /** @type {Error} */ (err) }); + throw err; + } + } + + /** + * Creates a wormhole compressing a range of patches. + * + * @param {string} fromSha + * @param {string} toSha + * @returns {Promise<{fromSha: string, toSha: string, writerId: string, payload: import('./ProvenancePayload.js').default, patchCount: number}>} + */ + async createWormhole(fromSha, toSha) { + const host = this._host; + const t0 = host._clock.now(); + + try { + const wormhole = await createWormholeImpl({ + persistence: host._persistence, + graphName: host._graphName, + fromSha, + toSha, + codec: host._codec, + }); + + host._logTiming('createWormhole', t0, { + metrics: `${wormhole.patchCount} patches from=${fromSha.slice(0, 7)} to=${toSha.slice(0, 7)}`, + }); + + return wormhole; + } catch (err) { + host._logTiming('createWormhole', t0, { error: /** @type {Error} */ (err) }); + throw err; + } + } + + /** + * Checks if ancestorSha is an ancestor of descendantSha. 
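The `fork()` flow above falls back to a generated name when `forkName` is omitted. A sketch of just that fallback, extracted as a hypothetical standalone helper (`defaultForkName` is not a real export):

```javascript
// Generates the fallback fork name: graph name plus a fixed-width
// 8-character base-36 suffix. padEnd guards against short random
// draws (e.g. Math.random() returning exactly 0).
function defaultForkName(graphName) {
  const suffix = Math.random().toString(36).slice(2, 10).padEnd(8, '0');
  return `${graphName}-fork-${suffix}`;
}

const name = defaultForkName('main');
console.log(name); // e.g. 'main-fork-k3x9q2mz'
```

The suffix is always exactly 8 characters, so downstream name validation sees a predictable shape.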
+ * + * @param {string} ancestorSha + * @param {string} descendantSha + * @returns {Promise} + */ + async _isAncestor(ancestorSha, descendantSha) { + if (!ancestorSha || !descendantSha) { + return false; + } + if (ancestorSha === descendantSha) { + return true; + } + + /** @type {string | null} */ + let cur = descendantSha; + /** @type {Set} */ + const visited = new Set(); + while (cur !== null) { + if (visited.has(cur)) { + throw new ForkError('Cycle detected in commit graph', { + code: 'E_FORK_CYCLE_DETECTED', + context: { sha: cur }, + }); + } + visited.add(cur); + const nodeInfo = await this._host._persistence.getNodeInfo(cur); + const parent = nodeInfo.parents?.[0] ?? null; + if (parent === ancestorSha) { + return true; + } + cur = parent; + } + return false; + } + + /** + * Determines relationship between incoming patch and checkpoint head. + * + * @param {string} ckHead + * @param {string} incomingSha + * @returns {Promise<'same' | 'ahead' | 'behind' | 'diverged'>} + */ + async _relationToCheckpointHead(ckHead, incomingSha) { + if (incomingSha === ckHead) { + return 'same'; + } + if (await this._isAncestor(ckHead, incomingSha)) { + return 'ahead'; + } + if (await this._isAncestor(incomingSha, ckHead)) { + return 'behind'; + } + return 'diverged'; + } + + /** + * Validates an incoming patch against checkpoint frontier. 
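The ancestry walk and the frontier-relation check above can be sketched together over an in-memory first-parent map (a stand-in for the persistence port's `getNodeInfo`). The cycle guard mirrors `E_FORK_CYCLE_DETECTED`; `same`/`behind` incoming patches are the ones `_validatePatchAgainstCheckpoint` rejects as backfill, and `diverged` as a writer fork:

```javascript
// First-parent ancestry walk with cycle detection.
function isAncestor(parents, ancestorSha, descendantSha) {
  if (!ancestorSha || !descendantSha) return false;
  if (ancestorSha === descendantSha) return true;
  const visited = new Set();
  let cur = descendantSha;
  while (cur !== null) {
    if (visited.has(cur)) throw new Error('Cycle detected in commit graph');
    visited.add(cur);
    const parent = parents.get(cur) ?? null; // first parent only
    if (parent === ancestorSha) return true;
    cur = parent;
  }
  return false;
}

// Classifies an incoming patch against a checkpoint head.
function relationToHead(parents, ckHead, incomingSha) {
  if (incomingSha === ckHead) return 'same';
  if (isAncestor(parents, ckHead, incomingSha)) return 'ahead';
  if (isAncestor(parents, incomingSha, ckHead)) return 'behind';
  return 'diverged';
}

// Linear chain c1 <- c2 <- c3, plus a divergent tip d1 off c1.
const parents = new Map([['c1', null], ['c2', 'c1'], ['c3', 'c2'], ['d1', 'c1']]);
console.log(relationToHead(parents, 'c2', 'c3')); // 'ahead'  (only acceptable case)
console.log(relationToHead(parents, 'c2', 'c1')); // 'behind' (rejected: backfill)
console.log(relationToHead(parents, 'c2', 'd1')); // 'diverged' (rejected: writer fork)
```

Only `ahead` extends the checkpoint frontier; everything else is an error by construction.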
+ * + * @param {string} writerId + * @param {string} incomingSha + * @param {{state: import('./JoinReducer.js').WarpStateV5, frontier: Map, stateHash: string, schema: number}} checkpoint + * @returns {Promise} + */ + async _validatePatchAgainstCheckpoint(writerId, incomingSha, checkpoint) { + if (checkpoint === null || checkpoint === undefined || (checkpoint.schema !== 2 && checkpoint.schema !== 3)) { + return; + } + + const ckHead = checkpoint.frontier?.get(writerId); + if (ckHead === undefined || ckHead === null || ckHead === '') { + return; + } + + const relation = await this._relationToCheckpointHead(ckHead, incomingSha); + + if (relation === 'same' || relation === 'behind') { + throw new ForkError( + `Backfill rejected for writer ${writerId}: incoming patch is ${relation} checkpoint frontier`, + { code: 'E_FORK_BACKFILL_REJECTED', context: { writerId, incomingSha, relation, ckHead } }, + ); + } + + if (relation === 'diverged') { + throw new ForkError( + `Writer fork detected for ${writerId}: incoming patch does not extend checkpoint head`, + { code: 'E_FORK_WRITER_DIVERGED', context: { writerId, incomingSha, ckHead } }, + ); + } + } +} diff --git a/src/domain/services/GraphTraversal.js b/src/domain/services/GraphTraversal.js index 7140cda3..227ba455 100644 --- a/src/domain/services/GraphTraversal.js +++ b/src/domain/services/GraphTraversal.js @@ -42,11 +42,8 @@ import MinHeap from '../utils/MinHeap.js'; import LRUCache from '../utils/LRUCache.js'; import { checkAborted } from '../utils/cancellation.js'; -/** @typedef {import('../../ports/NeighborProviderPort.js').default} NeighborProviderPort */ -/** @typedef {import('../../ports/NeighborProviderPort.js').Direction} Direction */ -/** @typedef {import('../../ports/NeighborProviderPort.js').NeighborEdge} NeighborEdge */ -/** @typedef {import('../../ports/NeighborProviderPort.js').NeighborOptions} NeighborOptions */ +/** @import { Direction, NeighborEdge, NeighborOptions, default as NeighborProviderPort } from 
'../../ports/NeighborProviderPort.js' */ /** * @typedef {Object} TraversalStats * @property {number} nodesVisited diff --git a/src/domain/services/JoinReducer.js b/src/domain/services/JoinReducer.js index 4b127f30..661356e5 100644 --- a/src/domain/services/JoinReducer.js +++ b/src/domain/services/JoinReducer.js @@ -9,8 +9,8 @@ * } */ -import { createORSet, orsetAdd, orsetRemove, orsetJoin, orsetContains, orsetClone } from '../crdt/ORSet.js'; -import { createVersionVector, vvMerge, vvClone, vvDeserialize } from '../crdt/VersionVector.js'; +import { orsetAdd, orsetRemove, orsetJoin, orsetContains, orsetClone } from '../crdt/ORSet.js'; +import { vvMerge, vvClone, vvDeserialize } from '../crdt/VersionVector.js'; import { lwwSet, lwwMax } from '../crdt/LWW.js'; import { createEventId, compareEventIds } from '../utils/EventId.js'; import { createTickReceipt, OP_TYPES } from '../types/TickReceipt.js'; @@ -19,6 +19,9 @@ import { encodeEdgeKey, decodeEdgeKey, encodePropKey, encodeEdgePropKey, EDGE_PR import { normalizeRawOp } from './OpNormalizer.js'; import { createEmptyDiff, mergeDiffs } from '../types/PatchDiff.js'; import PatchError from '../errors/PatchError.js'; +import WarpStateV5 from './WarpStateV5.js'; + +export { default as WarpStateV5 } from './WarpStateV5.js'; // Re-export key codec functions for backward compatibility export { @@ -31,18 +34,7 @@ export { // Re-export op normalization for consumers that operate on raw patches export { normalizeRawOp, lowerCanonicalOp } from './OpNormalizer.js'; -/** - * @typedef {Object} WarpStateV5 - * @property {import('../crdt/ORSet.js').ORSet} nodeAlive - ORSet of alive nodes - * @property {import('../crdt/ORSet.js').ORSet} edgeAlive - ORSet of alive edges - * @property {Map>} prop - Properties with LWW - * @property {import('../crdt/VersionVector.js').VersionVector} observedFrontier - Observed version vector - * @property {Map} edgeBirthEvent - EdgeKey → EventId of most recent EdgeAdd (for clean-slate prop visibility). 
- * Always present at runtime (initialized to empty Map by createEmptyStateV5 and - * deserializeFullStateV5). Edge birth events were introduced in a later schema - * version; older checkpoints serialize without this field, but the deserializer - * always produces an empty Map for them. - */ +// WarpStateV5 class imported from ./WarpStateV5.js (re-exported above) /** * @typedef {Object} OpLike @@ -79,13 +71,7 @@ export { normalizeRawOp, lowerCanonicalOp } from './OpNormalizer.js'; * @returns {WarpStateV5} A fresh, empty WARP state ready for patch application */ export function createEmptyStateV5() { - return { - nodeAlive: createORSet(), - edgeAlive: createORSet(), - prop: new Map(), - observedFrontier: createVersionVector(), - edgeBirthEvent: new Map(), - }; + return WarpStateV5.empty(); } /** @@ -236,14 +222,69 @@ function requireDot(op) { // ============================================================================ /** - * @typedef {Object} OpOutcomeResult - * @property {string} target - The entity ID or key affected - * @property {'applied'|'superseded'|'redundant'} result - Outcome - * @property {string} [reason] - Explanation when superseded + * OpOutcomeResult — base class for CRDT operation outcomes. + * Subclasses carry outcome-specific data instead of fragile reason strings. */ +export class OpOutcomeResult { + /** @type {string} The entity ID or key affected */ + target; + + /** @type {'applied'|'superseded'|'redundant'} */ + result; + + /** + * Creates an OpOutcomeResult. + * @param {string} target + * @param {'applied'|'superseded'|'redundant'} result + */ + constructor(target, result) { + this.target = target; + this.result = result; + } +} + +/** The operation was applied to the state. */ +export class OpApplied extends OpOutcomeResult { + /** Creates an OpApplied. + * @param {string} target + */ + constructor(target) { + super(target, 'applied'); + } +} + +/** The operation was overridden by a concurrent write with a higher EventId. 
*/ +export class OpSuperseded extends OpOutcomeResult { + /** @type {import('../utils/EventId.js').EventId} The winning EventId */ + winner; + + /** @type {string} Human-readable explanation */ + reason; + + /** Creates an OpSuperseded. + * @param {string} target + * @param {import('../utils/EventId.js').EventId} winner + */ + constructor(target, winner) { + super(target, 'superseded'); + this.winner = winner; + this.reason = `LWW: writer ${winner.writerId} at lamport ${winner.lamport} wins`; + } +} + +/** The operation had no effect (already present in state). */ +export class OpRedundant extends OpOutcomeResult { + /** Creates an OpRedundant. + * @param {string} target + */ + constructor(target) { + super(target, 'redundant'); + } +} /** * @typedef {Object} OpStrategy + * @property {string} receiptName - The TickReceipt-compatible operation type name (e.g. 'NodeTombstone' for NodeRemove) * @property {(state: WarpStateV5, op: OpLike, eventId: import('../utils/EventId.js').EventId) => void} mutate * @property {(state: WarpStateV5, op: OpLike, eventId: import('../utils/EventId.js').EventId) => OpOutcomeResult} outcome * @property {(state: WarpStateV5, op: OpLike) => SnapshotBeforeOp} snapshot @@ -253,6 +294,7 @@ function requireDot(op) { /** @type {OpStrategy} */ const nodeAddStrategy = { + receiptName: 'NodeAdd', validate(op) { requireString(op, 'node'); requireDot(op); }, mutate(state, op) { orsetAdd(state.nodeAlive, /** @type {string} */ (op.node), /** @type {import('../crdt/Dot.js').Dot} */ (op.dot)); @@ -272,6 +314,7 @@ const nodeAddStrategy = { /** @type {OpStrategy} */ const nodeRemoveStrategy = { + receiptName: 'NodeTombstone', validate(op) { requireIterable(op, 'observedDots'); }, mutate(state, op) { orsetRemove(state.nodeAlive, /** @type {Set} */ (/** @type {unknown} */ (op.observedDots))); @@ -292,6 +335,7 @@ const nodeRemoveStrategy = { /** @type {OpStrategy} */ const edgeAddStrategy = { + receiptName: 'EdgeAdd', validate(op) { requireString(op, 'from'); 
requireString(op, 'to'); requireString(op, 'label'); requireDot(op); }, mutate(state, op, eventId) { const edgeKey = encodeEdgeKey(/** @type {string} */ (op.from), /** @type {string} */ (op.to), /** @type {string} */ (op.label)); @@ -320,6 +364,7 @@ const edgeAddStrategy = { /** @type {OpStrategy} */ const edgeRemoveStrategy = { + receiptName: 'EdgeTombstone', validate(op) { requireIterable(op, 'observedDots'); }, mutate(state, op) { orsetRemove(state.edgeAlive, /** @type {Set} */ (/** @type {unknown} */ (op.observedDots))); @@ -379,6 +424,7 @@ function accumulatePropDiff(diff, state, nodeId, key, before) { /** @type {OpStrategy} */ const nodePropSetStrategy = { + receiptName: 'NodePropSet', validate(op) { requireString(op, 'node'); requireString(op, 'key'); }, mutate(state, op, eventId) { mutateProp(state, encodePropKey(/** @type {string} */ (op.node), /** @type {string} */ (op.key)), eventId, op.value); @@ -396,6 +442,7 @@ const nodePropSetStrategy = { /** @type {OpStrategy} */ const edgePropSetStrategy = { + receiptName: 'EdgePropSet', validate(op) { requireString(op, 'from'); requireString(op, 'to'); requireString(op, 'label'); requireString(op, 'key'); }, mutate(state, op, eventId) { mutateProp(state, encodeEdgePropKey(/** @type {string} */ (op.from), /** @type {string} */ (op.to), /** @type {string} */ (op.label), /** @type {string} */ (op.key)), eventId, op.value); @@ -413,6 +460,7 @@ const edgePropSetStrategy = { /** @type {OpStrategy} */ const propSetStrategy = { + receiptName: 'PropSet', validate(op) { requireString(op, 'node'); requireString(op, 'key'); }, mutate(state, op, eventId) { // Legacy raw PropSet — must NOT carry edge-property encoding at this point. 
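The outcome-class hierarchy that replaces the old `{ target, result, reason }` literals can be sketched in isolation. EventIds are simplified here to `{ writerId, lamport }`; the real comparison lives in `compareEventIds`:

```javascript
// Base class: every outcome carries the affected target and a result tag.
class OpOutcomeResult {
  constructor(target, result) { this.target = target; this.result = result; }
}
class OpApplied extends OpOutcomeResult {
  constructor(target) { super(target, 'applied'); }
}
// Superseded outcomes carry the winning EventId instead of a fragile
// free-form string; the reason is derived from it.
class OpSuperseded extends OpOutcomeResult {
  constructor(target, winner) {
    super(target, 'superseded');
    this.winner = winner;
    this.reason = `LWW: writer ${winner.writerId} at lamport ${winner.lamport} wins`;
  }
}
class OpRedundant extends OpOutcomeResult {
  constructor(target) { super(target, 'redundant'); }
}

const out = new OpSuperseded('n1\u0000color', { writerId: 'w2', lamport: 7 });
console.log(out.result); // 'superseded'
console.log(out.reason); // 'LWW: writer w2 at lamport 7 wins'
```

Consumers can now branch on `instanceof OpSuperseded` and read `winner` structurally, rather than parsing a reason string.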
@@ -438,13 +486,14 @@ const propSetStrategy = { /** @type {OpStrategy} */ const blobValueStrategy = { + receiptName: 'BlobValue', validate() { /* no-op: forward-compat */ }, mutate() { /* no-op: BlobValue has no state effect */ }, outcome(_state, op) { const blobOp = /** @type {{ oid?: string }} */ (op); const blobOid = blobOp.oid; const blobTarget = (typeof blobOid === 'string' && blobOid.length > 0) ? blobOid : '*'; - return { target: blobTarget, result: /** @type {'applied'} */ ('applied') }; + return new OpApplied(blobTarget); }, snapshot() { return {}; }, accumulate() { /* no-op */ }, @@ -473,6 +522,12 @@ for (const [type, strategy] of OP_STRATEGIES) { throw new Error(`OpStrategy '${type}' missing required method '${method}'`); } } + if (typeof strategy.receiptName !== 'string' || strategy.receiptName.length === 0) { + throw new Error(`OpStrategy '${type}' missing required property 'receiptName'`); + } + if (!OP_TYPES.includes(strategy.receiptName)) { + throw new Error(`OpStrategy '${type}' receiptName '${strategy.receiptName}' is not in TickReceipt OP_TYPES`); + } } /** @@ -495,30 +550,6 @@ export function applyOpV2(state, op, eventId) { strategy.mutate(state, op, eventId); } -/** - * Maps internal operation type names to TickReceipt-compatible operation type names. - * - * The internal representation uses "Remove" for tombstone operations, but the - * TickReceipt API uses "Tombstone" to be more explicit about CRDT semantics. - * This mapping ensures receipt consumers see the canonical operation names. 
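The module-load validation added after the strategy table fails fast if any strategy's `receiptName` is missing or not a legal receipt op. A self-contained sketch of that invariant check — `OP_TYPES` here is a stand-in for the real TickReceipt list:

```javascript
// Stand-in for TickReceipt's OP_TYPES.
const OP_TYPES = ['NodeAdd', 'NodeTombstone', 'EdgeAdd', 'EdgeTombstone',
  'PropSet', 'NodePropSet', 'EdgePropSet', 'BlobValue'];

// Mirrors the for-of check over OP_STRATEGIES: every strategy must
// declare a receiptName that TickReceipt recognizes.
function validateStrategies(strategies) {
  for (const [type, s] of strategies) {
    if (typeof s.receiptName !== 'string' || s.receiptName.length === 0) {
      throw new Error(`OpStrategy '${type}' missing required property 'receiptName'`);
    }
    if (!OP_TYPES.includes(s.receiptName)) {
      throw new Error(`OpStrategy '${type}' receiptName '${s.receiptName}' is not in TickReceipt OP_TYPES`);
    }
  }
}

validateStrategies(new Map([['NodeRemove', { receiptName: 'NodeTombstone' }]])); // passes

let failure = null;
try {
  validateStrategies(new Map([['Bogus', { receiptName: 'Nope' }]]));
} catch (e) {
  failure = e.message;
}
console.log(failure);
```

Moving this to module-load time turns a silent receipt-mapping bug into an immediate startup error.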
- * - * Mappings: - * - NodeRemove -> NodeTombstone (CRDT tombstone semantics) - * - EdgeRemove -> EdgeTombstone (CRDT tombstone semantics) - * - All others pass through unchanged - * - * @const {Record} - */ -const RECEIPT_OP_TYPE = { - NodeAdd: 'NodeAdd', - NodeRemove: 'NodeTombstone', - EdgeAdd: 'EdgeAdd', - EdgeRemove: 'EdgeTombstone', - PropSet: 'PropSet', - NodePropSet: 'NodePropSet', - EdgePropSet: 'EdgePropSet', - BlobValue: 'BlobValue', -}; /** * Set of valid receipt op types (from TickReceipt) for fast membership checks. @@ -535,15 +566,15 @@ const VALID_RECEIPT_OPS = new Set(OP_TYPES); * * @param {import('../crdt/ORSet.js').ORSet} orset - The node OR-Set containing alive nodes * @param {{node: string, dot: import('../crdt/Dot.js').Dot}} op - The NodeAdd operation - * @returns {{target: string, result: 'applied'|'redundant'}} Outcome with node ID as target + * @returns {OpApplied|OpRedundant} Outcome with node ID as target */ function nodeAddOutcome(orset, op) { const encoded = encodeDot(op.dot); const existingDots = orset.entries.get(op.node); if (existingDots && existingDots.has(encoded)) { - return { target: op.node, result: 'redundant' }; + return new OpRedundant(op.node); } - return { target: op.node, result: 'applied' }; + return new OpApplied(op.node); } /** @@ -556,7 +587,7 @@ function nodeAddOutcome(orset, op) { * * @param {import('../crdt/ORSet.js').ORSet} orset - The node OR-Set containing alive nodes * @param {{node?: string, observedDots: string[] | Set}} op - The NodeRemove operation - * @returns {{target: string, result: 'applied'|'redundant'}} Outcome with node ID (or '*') as target + * @returns {OpApplied|OpRedundant} Outcome with node ID (or '*') as target */ function nodeRemoveOutcome(orset, op) { // Build a reverse index (dot → elementId) for the observed dots to avoid @@ -587,15 +618,15 @@ function nodeRemoveOutcome(orset, op) { * @param {import('../crdt/ORSet.js').ORSet} orset - The edge OR-Set containing alive edges * @param {{from: 
string, to: string, label: string, dot: import('../crdt/Dot.js').Dot}} op - The EdgeAdd operation * @param {string} edgeKey - Pre-encoded edge key (from\0to\0label format) - * @returns {{target: string, result: 'applied'|'redundant'}} Outcome with encoded edge key as target + * @returns {OpApplied|OpRedundant} Outcome with encoded edge key as target */ function edgeAddOutcome(orset, op, edgeKey) { const encoded = encodeDot(op.dot); const existingDots = orset.entries.get(edgeKey); if (existingDots && existingDots.has(encoded)) { - return { target: edgeKey, result: 'redundant' }; + return new OpRedundant(edgeKey); } - return { target: edgeKey, result: 'applied' }; + return new OpApplied(edgeKey); } /** @@ -611,7 +642,7 @@ function edgeAddOutcome(orset, op, edgeKey) { * * @param {import('../crdt/ORSet.js').ORSet} orset - The edge OR-Set containing alive edges * @param {{from?: string, to?: string, label?: string, observedDots: string[] | Set}} op - The EdgeRemove operation - * @returns {{target: string, result: 'applied'|'redundant'}} Outcome with encoded edge key (or '*') as target + * @returns {OpApplied|OpRedundant} Outcome with encoded edge key (or '*') as target */ function edgeRemoveOutcome(orset, op) { // Build a reverse index (dot → elementId) for the observed dots to avoid @@ -655,29 +686,25 @@ function edgeRemoveOutcome(orset, op) { * @param {Map>} propMap - The properties map keyed by encoded prop keys * @param {string} key - Pre-encoded property key (node or edge) * @param {import('../utils/EventId.js').EventId} eventId - The event ID for this operation, used for LWW comparison - * @returns {{target: string, result: 'applied'|'superseded'|'redundant', reason?: string}} + * @returns {OpOutcomeResult} * Outcome with encoded prop key as target; includes reason when superseded */ function propOutcomeForKey(propMap, key, eventId) { const current = propMap.get(key); if (!current) { - return { target: key, result: 'applied' }; + return new OpApplied(key); } const 
cmp = compareEventIds(eventId, current.eventId); if (cmp > 0) { - return { target: key, result: 'applied' }; + return new OpApplied(key); } if (cmp < 0) { const winner = current.eventId; - return { - target: key, - result: 'superseded', - reason: `LWW: writer ${winner.writerId} at lamport ${winner.lamport} wins`, - }; + return new OpSuperseded(key, winner); } - return { target: key, result: 'redundant' }; + return new OpRedundant(key); } /** @@ -686,7 +713,7 @@ function propOutcomeForKey(propMap, key, eventId) { * @param {Map>} propMap * @param {{node: string, key: string}} op - The PropSet or NodePropSet operation * @param {import('../utils/EventId.js').EventId} eventId - * @returns {{target: string, result: 'applied'|'superseded'|'redundant', reason?: string}} + * @returns {OpOutcomeResult} */ function propSetOutcome(propMap, op, eventId) { return propOutcomeForKey(propMap, encodePropKey(op.node, op.key), eventId); @@ -698,7 +725,7 @@ function propSetOutcome(propMap, op, eventId) { * @param {Map>} propMap * @param {{from: string, to: string, label: string, key: string}} op - The EdgePropSet operation * @param {import('../utils/EventId.js').EventId} eventId - * @returns {{target: string, result: 'applied'|'superseded'|'redundant', reason?: string}} + * @returns {OpOutcomeResult} */ function edgePropSetOutcome(propMap, op, eventId) { return propOutcomeForKey(propMap, encodeEdgePropKey(op.from, op.to, op.label, op.key), eventId); @@ -908,15 +935,14 @@ export function applyWithReceipt(state, patch, patchSha) { // Apply the op (mutates state) strategy.mutate(state, canonOp, eventId); - const mappedOp = /** @type {Record} */ (RECEIPT_OP_TYPE)[canonOp.type]; - const receiptOp = (typeof mappedOp === 'string' && mappedOp.length > 0) ? 
mappedOp : canonOp.type; + const receiptOp = strategy.receiptName; // Skip unknown/forward-compatible op types that aren't valid receipt ops if (!VALID_RECEIPT_OPS.has(receiptOp)) { continue; } /** @type {import('../types/TickReceipt.js').OpOutcome} */ const entry = { op: receiptOp, target: outcome.target, result: /** @type {'applied'|'superseded'|'redundant'} */ (outcome.result) }; - if (typeof outcome.reason === 'string' && outcome.reason.length > 0) { + if (outcome instanceof OpSuperseded && outcome.reason.length > 0) { entry.reason = outcome.reason; } opResults.push(entry); @@ -981,13 +1007,13 @@ export function join(state, patch, patchSha, collectReceipts) { * @returns {WarpStateV5} New state representing the join of a and b */ export function joinStates(a, b) { - return { + return new WarpStateV5({ nodeAlive: orsetJoin(a.nodeAlive, b.nodeAlive), edgeAlive: orsetJoin(a.edgeAlive, b.edgeAlive), prop: mergeProps(a.prop, b.prop), observedFrontier: vvMerge(a.observedFrontier, b.observedFrontier), edgeBirthEvent: mergeEdgeBirthEvent(a.edgeBirthEvent, b.edgeBirthEvent), - }; + }); } /** @@ -1115,11 +1141,18 @@ export function reduceV5(patches, initialState, options) { * @returns {WarpStateV5} A new state with identical contents but independent data structures */ export function cloneStateV5(state) { - return { - nodeAlive: orsetClone(state.nodeAlive), - edgeAlive: orsetClone(state.edgeAlive), - prop: new Map(state.prop), - observedFrontier: vvClone(state.observedFrontier), - edgeBirthEvent: new Map(state.edgeBirthEvent ?? []), - }; + if (state instanceof WarpStateV5) { + return state.clone(); + } + // Structural fallback: normalize plain/deserialized objects into WarpStateV5. + // This handles checkpoint deserialization and test fixtures that construct + // state as plain objects. 
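The clone dispatch in `cloneStateV5` above handles both class instances and plain structural objects. A reduced sketch of that pattern — `WarpStateV5` here is a minimal stand-in with only a `prop` map, not the real class:

```javascript
// Minimal stand-in: the real class also carries nodeAlive, edgeAlive,
// observedFrontier, and edgeBirthEvent.
class WarpStateV5 {
  constructor({ prop = new Map() } = {}) { this.prop = prop; }
  clone() { return new WarpStateV5({ prop: new Map(this.prop) }); }
}

// Class instances clone themselves; plain objects (checkpoint
// deserialization, test fixtures) are normalized into the class.
function cloneState(state) {
  if (state instanceof WarpStateV5) return state.clone();
  return new WarpStateV5({ prop: new Map(state.prop ?? []) });
}

const plain = { prop: new Map([['k', 1]]) };
const cloned = cloneState(plain);
console.log(cloned instanceof WarpStateV5); // true
console.log(cloned.prop.get('k'));          // 1
```

The structural fallback is what lets callers keep feeding deserialized plain objects through the same API while the class migration is in flight.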
+ const s = /** @type {Record} */ (/** @type {unknown} */ (state)); + return new WarpStateV5({ + nodeAlive: orsetClone(/** @type {import('../crdt/ORSet.js').ORSet} */ (s['nodeAlive'])), + edgeAlive: orsetClone(/** @type {import('../crdt/ORSet.js').ORSet} */ (s['edgeAlive'])), + prop: new Map(/** @type {Map>} */ (s['prop'])), + observedFrontier: vvClone(/** @type {import('../crdt/VersionVector.js').VersionVector} */ (s['observedFrontier'])), + edgeBirthEvent: new Map(/** @type {Map} */ (s['edgeBirthEvent'] ?? [])), + }); } diff --git a/src/domain/services/MaterializedViewService.js b/src/domain/services/MaterializedViewService.js index 9507ef0c..b3ba815b 100644 --- a/src/domain/services/MaterializedViewService.js +++ b/src/domain/services/MaterializedViewService.js @@ -155,7 +155,7 @@ function sampleNodes(allNodes, sampleRate, seed) { /** * Builds adjacency maps from state for ground-truth verification. * - * @param {import('../services/JoinReducer.js').WarpStateV5} state + * @param {import('./JoinReducer.js').WarpStateV5} state * @returns {{ outgoing: Map>, incoming: Map> }} */ function buildGroundTruthAdjacency(state) { diff --git a/src/domain/services/Observer.js b/src/domain/services/Observer.js index 01b16e6c..65c05018 100644 --- a/src/domain/services/Observer.js +++ b/src/domain/services/Observer.js @@ -15,8 +15,9 @@ import { createStateReaderV5 } from './StateReaderV5.js'; import { orsetContains, orsetElements } from '../crdt/ORSet.js'; import { decodeEdgeKey } from './KeyCodec.js'; import { matchGlob } from '../utils/matchGlob.js'; -/** @typedef {import('../../../index.js').WorldlineSource} WorldlineSource */ + +/** @import { WorldlineSource } from '../../../index.js' */ /** * Clones an observer worldline source descriptor, producing an independent copy. 
* @param {{ diff --git a/src/domain/services/PatchBuilderV2.js b/src/domain/services/PatchBuilderV2.js index 370316b1..02063cba 100644 --- a/src/domain/services/PatchBuilderV2.js +++ b/src/domain/services/PatchBuilderV2.js @@ -164,7 +164,7 @@ export class PatchBuilderV2 { /** * Creates a new PatchBuilderV2. * - * @param {{ persistence: import('../../ports/CommitPort.js').default & import('../../ports/BlobPort.js').default & import('../../ports/TreePort.js').default & import('../../ports/RefPort.js').default, graphName: string, writerId: string, lamport: number, versionVector: import('../crdt/VersionVector.js').VersionVector, getCurrentState: () => import('../services/JoinReducer.js').WarpStateV5 | null, expectedParentSha?: string|null, targetRefPath?: string, onCommitSuccess?: ((result: {patch: import('../types/WarpTypesV2.js').PatchV2, sha: string}) => void | Promise)|null, onDeleteWithData?: 'reject'|'cascade'|'warn', codec?: import('../../ports/CodecPort.js').default, logger?: import('../../ports/LoggerPort.js').default, blobStorage?: import('../../ports/BlobStoragePort.js').default, patchBlobStorage?: import('../../ports/BlobStoragePort.js').default }} options + * @param {{ persistence: import('../../ports/CommitPort.js').default & import('../../ports/BlobPort.js').default & import('../../ports/TreePort.js').default & import('../../ports/RefPort.js').default, graphName: string, writerId: string, lamport: number, versionVector: import('../crdt/VersionVector.js').VersionVector, getCurrentState: () => import('./JoinReducer.js').WarpStateV5 | null, expectedParentSha?: string|null, targetRefPath?: string, onCommitSuccess?: ((result: {patch: import('../types/WarpTypesV2.js').PatchV2, sha: string}) => void | Promise)|null, onDeleteWithData?: 'reject'|'cascade'|'warn', codec?: import('../../ports/CodecPort.js').default, logger?: import('../../ports/LoggerPort.js').default, blobStorage?: import('../../ports/BlobStoragePort.js').default, patchBlobStorage?: 
import('../../ports/BlobStoragePort.js').default }} options */ constructor({ persistence, graphName, writerId, lamport, versionVector, getCurrentState, expectedParentSha = null, targetRefPath, onCommitSuccess = null, onDeleteWithData = 'warn', codec, logger, blobStorage, patchBlobStorage }) { /** @type {import('../../ports/CommitPort.js').default & import('../../ports/BlobPort.js').default & import('../../ports/TreePort.js').default & import('../../ports/RefPort.js').default} */ @@ -187,7 +187,7 @@ export class PatchBuilderV2 { /** @type {import('../crdt/VersionVector.js').VersionVector} */ this._vv = vvClone(versionVector); // Clone to track local increments - /** @type {() => import('../services/JoinReducer.js').WarpStateV5 | null} */ + /** @type {() => import('./JoinReducer.js').WarpStateV5 | null} */ this._getCurrentState = getCurrentState; /** diff --git a/src/domain/services/ProvenanceController.js b/src/domain/services/ProvenanceController.js new file mode 100644 index 00000000..9767ea2b --- /dev/null +++ b/src/domain/services/ProvenanceController.js @@ -0,0 +1,243 @@ +/** + * ProvenanceController — patch lookups, slice materialization, + * backward causal cone computation, and causal sorting. + * + * Extracted from provenance.methods.js. + * + * @module domain/services/ProvenanceController + */ + +import QueryError from '../errors/QueryError.js'; +import { createEmptyStateV5, reduceV5 } from './JoinReducer.js'; +import { ProvenancePayload } from './ProvenancePayload.js'; +import { decodePatchMessage, detectMessageKind } from './WarpMessageCodec.js'; + +/** @import { WarpStateV5 } from './JoinReducer.js' */ +/** @import { PatchV2 } from '../types/WarpTypesV2.js' */ + +/** + * The host interface that ProvenanceController depends on. + * + * Uses WarpRuntime directly because several required methods + * (_readPatchBlob, _ensureFreshState) are wired onto the prototype + * by other mixin files and not visible to TSC as class members. 
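ProvenanceController's query methods apply the same two guards in the same order: degraded provenance is reported before a missing index, so callers get the more specific remediation first. A sketch of that ordering with a hypothetical `checkProvenance` helper (not a real export):

```javascript
// Guard ordering mirrored from patchesFor/materializeSlice:
// E_PROVENANCE_DEGRADED takes precedence over E_NO_STATE.
function checkProvenance(host) {
  if (host._provenanceDegraded) {
    throw Object.assign(new Error('Provenance unavailable for cached seek.'),
      { code: 'E_PROVENANCE_DEGRADED' });
  }
  if (!host._provenanceIndex) {
    throw Object.assign(new Error('No provenance index. Call materialize() first.'),
      { code: 'E_NO_STATE' });
  }
}

let degraded = null;
try {
  checkProvenance({ _provenanceDegraded: true, _provenanceIndex: null });
} catch (e) { degraded = e; }

let noIndex = null;
try {
  checkProvenance({ _provenanceDegraded: false, _provenanceIndex: null });
} catch (e) { noIndex = e; }

console.log(degraded.code); // 'E_PROVENANCE_DEGRADED'
console.log(noIndex.code);  // 'E_NO_STATE'
```

A degraded host also has no usable index, so testing the degraded flag first is what keeps the error message actionable.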
+ * + * @typedef {import('../warp/_internal.js').WarpGraphWithMixins} ProvenanceHost + */ + +export default class ProvenanceController { + /** @type {ProvenanceHost} */ + _host; + + /** + * Creates a ProvenanceController bound to a WarpRuntime host. + * @param {ProvenanceHost} host + */ + constructor(host) { + this._host = host; + } + + /** + * Returns all patch SHAs that affected a given node or edge. + * + * @param {string} entityId + * @returns {Promise<string[]>} + */ + async patchesFor(entityId) { + await this._host._ensureFreshState(); + + if (this._host._provenanceDegraded) { + throw new QueryError('Provenance unavailable for cached seek. Re-seek with --no-persistent-cache or call materialize({ ceiling }) directly.', { + code: 'E_PROVENANCE_DEGRADED', + }); + } + + if (!this._host._provenanceIndex) { + throw new QueryError('No provenance index. Call materialize() first.', { + code: 'E_NO_STATE', + }); + } + return this._host._provenanceIndex.patchesFor(entityId); + } + + /** + * Materializes only the backward causal cone for a specific node. + * + * @param {string} nodeId + * @param {{receipts?: boolean}} [options] + * @returns {Promise<{state: WarpStateV5, patchCount: number, receipts?: import('../types/TickReceipt.js').TickReceipt[]}>} + */ + async materializeSlice(nodeId, options) { + const host = this._host; + const t0 = host._clock.now(); + const collectReceipts = options?.receipts === true; + + try { + await host._ensureFreshState(); + + if (host._provenanceDegraded) { + throw new QueryError('Provenance unavailable for cached seek. Re-seek with --no-persistent-cache or call materialize({ ceiling }) directly.', { + code: 'E_PROVENANCE_DEGRADED', + }); + } + + if (!host._provenanceIndex) { + throw new QueryError('No provenance index. 
Call materialize() first.', { + code: 'E_NO_STATE', + }); + } + + const conePatchMap = await this._computeBackwardCone(nodeId); + + if (conePatchMap.size === 0) { + const emptyState = createEmptyStateV5(); + host._logTiming('materializeSlice', t0, { metrics: '0 patches (empty cone)' }); + return { + state: emptyState, + patchCount: 0, + ...(collectReceipts ? { receipts: [] } : {}), + }; + } + + const patchEntries = []; + for (const [sha, patch] of conePatchMap) { + patchEntries.push({ patch, sha }); + } + + const sortedPatches = this._sortPatchesCausally(patchEntries); + host._logTiming('materializeSlice', t0, { metrics: `${sortedPatches.length} patches` }); + + if (collectReceipts) { + const result = /** @type {{state: WarpStateV5, receipts: import('../types/TickReceipt.js').TickReceipt[]}} */ (reduceV5(sortedPatches, undefined, { receipts: true })); + return { + state: result.state, + patchCount: sortedPatches.length, + receipts: result.receipts, + }; + } + + const payload = new ProvenancePayload(sortedPatches); + return { + state: payload.replay(), + patchCount: sortedPatches.length, + }; + } catch (err) { + host._logTiming('materializeSlice', t0, { error: /** @type {Error} */ (err) }); + throw err; + } + } + + /** + * Computes the backward causal cone for a node via BFS over the provenance index. + * + * @param {string} nodeId + * @returns {Promise<Map<string, PatchV2>>} + */ + async _computeBackwardCone(nodeId) { + const host = this._host; + if (!host._provenanceIndex) { + throw new QueryError('No provenance index. 
Call materialize() first.', { + code: 'E_NO_STATE', + }); + } + /** @type {Map<string, PatchV2>} */ + const cone = new Map(); + /** @type {Set<string>} */ + const visited = new Set(); + const queue = [nodeId]; + let qi = 0; + + while (qi < queue.length) { + const entityId = /** @type {string} */ (queue[qi++]); + if (visited.has(entityId)) { + continue; + } + visited.add(entityId); + + const patchShas = host._provenanceIndex.patchesFor(entityId); + for (const sha of patchShas) { + if (cone.has(sha)) { + continue; + } + const patch = await this._loadPatchBySha(sha); + cone.set(sha, patch); + + const patchReads = /** @type {{reads?: string[]}} */ (patch).reads; + if (patchReads) { + for (const readEntity of patchReads) { + if (!visited.has(readEntity)) { + queue.push(readEntity); + } + } + } + } + } + + return cone; + } + + /** + * Loads a single patch by its SHA (public API for CLI/debug tooling). + * + * @param {string} sha + * @returns {Promise<PatchV2>} + */ + async loadPatchBySha(sha) { + return await this._loadPatchBySha(sha); + } + + /** + * Loads a single patch by its SHA. + * + * @param {string} sha + * @returns {Promise<PatchV2>} + */ + async _loadPatchBySha(sha) { + const host = this._host; + const nodeInfo = await host._persistence.getNodeInfo(sha); + const kind = detectMessageKind(nodeInfo.message); + + if (kind !== 'patch') { + throw new Error(`Commit ${sha} is not a patch`); + } + + const patchMeta = decodePatchMessage(nodeInfo.message); + const patchBuffer = await host._readPatchBlob(patchMeta); + return /** @type {PatchV2} */ (host._codec.decode(patchBuffer)); + } + + /** + * Loads multiple patches by their SHAs. + * + * @param {string[]} shas + * @returns {Promise<Array<{patch: PatchV2, sha: string}>>} + */ + async _loadPatchesBySha(shas) { + const entries = []; + for (const sha of shas) { + const patch = await this._loadPatchBySha(sha); + entries.push({ patch, sha }); + } + return entries; + } + + /** + * Sorts patches in causal order for deterministic replay. 
+ * + * @param {Array<{patch: PatchV2, sha: string}>} patches + * @returns {Array<{patch: PatchV2, sha: string}>} + */ + _sortPatchesCausally(patches) { + return [...patches].sort((a, b) => { + const lamportDiff = (a.patch.lamport || 0) - (b.patch.lamport || 0); + if (lamportDiff !== 0) { + return lamportDiff; + } + const writerCmp = (a.patch.writer || '').localeCompare(b.patch.writer || ''); + if (writerCmp !== 0) { + return writerCmp; + } + return a.sha.localeCompare(b.sha); + }); + } +} diff --git a/src/domain/warp/query.methods.js b/src/domain/services/QueryController.js similarity index 75% rename from src/domain/warp/query.methods.js rename to src/domain/services/QueryController.js index 535502bf..199a451b 100644 --- a/src/domain/warp/query.methods.js +++ b/src/domain/services/QueryController.js @@ -1,10 +1,10 @@ /** - * Query methods for WarpRuntime — pure reads on materialized state. + * QueryController — pure reads on materialized state. * - * Every function uses `this` bound to a WarpRuntime instance at runtime - * via wireWarpMethods(). + * Extracted from query.methods.js. All methods are read-only queries + * against cached CRDT state, indexes, and blob storage. 
* - * @module domain/warp/query.methods + * @module domain/services/QueryController */ import { orsetContains, orsetElements } from '../crdt/ORSet.js'; @@ -19,18 +19,24 @@ import { CONTENT_PROPERTY_KEY, CONTENT_MIME_PROPERTY_KEY, CONTENT_SIZE_PROPERTY_KEY, -} from '../services/KeyCodec.js'; +} from './KeyCodec.js'; import { compareEventIds } from '../utils/EventId.js'; -import { cloneStateV5 } from '../services/JoinReducer.js'; -import { createImmutableWarpStateV5 } from '../services/ImmutableSnapshot.js'; -import QueryBuilder from '../services/QueryBuilder.js'; -import Observer from '../services/Observer.js'; -import Worldline from '../services/Worldline.js'; -import { computeTranslationCost } from '../services/TranslationCost.js'; -import { computeStateHashV5 } from '../services/StateSerializerV5.js'; +import { cloneStateV5 } from './JoinReducer.js'; +import { createImmutableWarpStateV5 } from './ImmutableSnapshot.js'; +import QueryBuilder from './QueryBuilder.js'; +import Observer from './Observer.js'; +import Worldline from './Worldline.js'; +import { computeTranslationCost } from './TranslationCost.js'; +import { computeStateHashV5 } from './StateSerializerV5.js'; import { toInternalStrandShape } from '../utils/strandPublicShape.js'; import { callInternalRuntimeMethod } from '../utils/callInternalRuntimeMethod.js'; +/** + * The host interface that QueryController depends on. + * + * @typedef {import('../warp/_internal.js').WarpGraphWithMixins} QueryHost + */ + /** * @typedef {{ * source?: { @@ -118,10 +124,10 @@ async function openDetachedObserverGraph(graph) { * Snapshots the current materialized state with a cloned copy and hash. 
* * @param {import('../WarpRuntime.js').default} graph - * @returns {Promise<{ state: import('../services/JoinReducer.js').WarpStateV5, stateHash: string }>} + * @returns {Promise<{ state: import('./WarpStateV5.js').default, stateHash: string }>} */ async function snapshotCurrentMaterialized(graph) { - const materialized = await /** @type {{ _materializeGraph: () => Promise<{state: import('../services/JoinReducer.js').WarpStateV5, stateHash: string|null}> }} */ (graph)._materializeGraph(); + const materialized = await /** @type {{ _materializeGraph: () => Promise<{state: import('./WarpStateV5.js').default, stateHash: string|null}> }} */ (graph)._materializeGraph(); return { state: cloneStateV5(materialized.state), stateHash: /** @type {string} */ (materialized.stateHash), @@ -132,8 +138,8 @@ async function snapshotCurrentMaterialized(graph) { * Clones and hashes a returned state for snapshot isolation. * * @param {import('../WarpRuntime.js').default} graph - * @param {import('../services/JoinReducer.js').WarpStateV5} state - * @returns {Promise<{ state: import('../services/JoinReducer.js').WarpStateV5, stateHash: string }>} + * @param {import('./WarpStateV5.js').default} state + * @returns {Promise<{ state: import('./WarpStateV5.js').default, stateHash: string }>} */ async function snapshotReturnedState(graph, state) { const stateHash = await computeStateHashV5(state, { @@ -151,7 +157,7 @@ async function snapshotReturnedState(graph, state) { * * @param {import('../WarpRuntime.js').default} graph * @param {ObserverOptions|undefined} options - * @returns {Promise<{ state: import('../services/JoinReducer.js').WarpStateV5, stateHash: string }>} + * @returns {Promise<{ state: import('./WarpStateV5.js').default, stateHash: string }>} */ async function resolveObserverSnapshot(graph, options) { const source = cloneObserverSource(options?.source); @@ -162,7 +168,7 @@ async function resolveObserverSnapshot(graph, options) { if (source.kind === 'live') { const detached = 
await openDetachedObserverGraph(graph); - const state = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (await detached.materialize({ + const state = /** @type {import('./WarpStateV5.js').default} */ (await detached.materialize({ ceiling: source.ceiling ?? null, })); return await snapshotReturnedState(detached, state); @@ -170,7 +176,7 @@ async function resolveObserverSnapshot(graph, options) { if (source.kind === 'coordinate') { const detached = await openDetachedObserverGraph(graph); - const state = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (await detached.materializeCoordinate({ + const state = /** @type {import('./WarpStateV5.js').default} */ (await detached.materializeCoordinate({ frontier: source.frontier, ceiling: source.ceiling ?? null, })); @@ -182,7 +188,7 @@ async function resolveObserverSnapshot(graph, options) { const internalSource = /** @type {{ strandId: string, ceiling?: number|null }} */ ( /** @type {unknown} */ (toInternalStrandShape(source)) ); - const state = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ ( + const state = /** @type {import('./WarpStateV5.js').default} */ ( await callInternalRuntimeMethod(detached, 'materializeStrand', internalSource.strandId, { ceiling: internalSource.ceiling ?? null, }) @@ -198,33 +204,33 @@ async function resolveObserverSnapshot(graph, options) { * * **Requires a cached state.** Call materialize() first if not already cached. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to check * @returns {Promise<boolean>} True if the node exists in the materialized state * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) * @throws {import('../errors/QueryError.js').default} If cached state is dirty (code: `E_STALE_STATE`) + * @this {QueryController} */ -export async function hasNode(nodeId) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function hasNode(nodeId) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); return orsetContains(s.nodeAlive, nodeId); } /** * Gets all properties for a node from the materialized state. * - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to get properties for * @returns {Promise<Record<string, unknown>|null>} Object of property key → value, or null if node doesn't exist * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getNodeProps(nodeId) { - await this._ensureFreshState(); +async function getNodeProps(nodeId) { + await this._host._ensureFreshState(); // ── Indexed fast path (positive results only; stale index falls through) ── - if (this._propertyReader !== null && this._propertyReader !== undefined && this._logicalIndex?.isAlive(nodeId) === true) { + if (this._host._propertyReader !== null && this._host._propertyReader !== undefined && this._host._logicalIndex?.isAlive(nodeId) === true) { try { - const record = await this._propertyReader.getNodeProps(nodeId); + const record = await this._host._propertyReader.getNodeProps(nodeId); if (record !== null) { return record; } @@ -235,7 +241,7 @@ export async function getNodeProps(nodeId) { } // ── Linear scan fallback 
───────────────────────────────────────────── - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); if (!orsetContains(s.nodeAlive, nodeId)) { return null; @@ -256,16 +262,16 @@ export async function getNodeProps(nodeId) { /** * Gets all properties for an edge from the materialized state. * - * @this {import('../WarpRuntime.js').default} * @param {string} from - Source node ID * @param {string} to - Target node ID * @param {string} label - Edge label * @returns {Promise<Record<string, unknown>|null>} Object of property key → value, or null if edge doesn't exist * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getEdgeProps(from, to, label) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdgeProps(from, to, label) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const edgeKey = encodeEdgeKey(from, to, label); if (!orsetContains(s.edgeAlive, edgeKey)) { @@ -311,19 +317,19 @@ function tagDirection(edges, dir) { /** * Gets neighbors of a node from the materialized state. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to get neighbors for * @param {'outgoing' | 'incoming' | 'both'} [direction='both'] - Edge direction to follow * @param {string} [edgeLabel] - Optional edge label filter * @returns {Promise<Array<{nodeId: string, label: string, direction: 'outgoing' | 'incoming'}>>} Array of neighbor info * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function neighbors(nodeId, direction = 'both', edgeLabel = undefined) { - await this._ensureFreshState(); +async function neighbors(nodeId, direction = 'both', edgeLabel = undefined) { + await this._host._ensureFreshState(); // ── Indexed fast path (only when node is in index; stale falls through) ── - const provider = this._materializedGraph?.provider; - if (provider !== null && provider !== undefined && this._logicalIndex?.isAlive(nodeId) === true) { + const provider = this._host._materializedGraph?.provider; + if (provider !== null && provider !== undefined && this._host._logicalIndex?.isAlive(nodeId) === true) { try { const opts = typeof edgeLabel === 'string' && edgeLabel.length > 0 ? { labels: new Set([edgeLabel]) } : undefined; return await _indexedNeighbors(provider, nodeId, direction, opts); @@ -333,7 +339,7 @@ export async function neighbors(nodeId, direction = 'both', edgeLabel = undefine } // ── Linear scan fallback ───────────────────────────────────────────── - return _linearNeighbors(/** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState), nodeId, direction, edgeLabel); + return _linearNeighbors(/** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState), nodeId, direction, edgeLabel); } /** @@ -362,14 +368,14 @@ async function _indexedNeighbors(provider, nodeId, direction, opts) { /** * Linear-scan neighbor lookup from raw CRDT state. 
* - * @param {import('../services/JoinReducer.js').WarpStateV5} cachedState + * @param {import('./WarpStateV5.js').default} cachedState * @param {string} nodeId * @param {'outgoing' | 'incoming' | 'both'} direction * @param {string} [edgeLabel] * @returns {Array<{nodeId: string, label: string, direction: 'outgoing' | 'incoming'}>} */ function _linearNeighbors(cachedState, nodeId, direction, edgeLabel) { - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (cachedState); + const s = /** @type {import('./WarpStateV5.js').default} */ (cachedState); /** @type {Array<{nodeId: string, label: string, direction: 'outgoing' | 'incoming'}>} */ const result = []; const checkOut = direction === 'outgoing' || direction === 'both'; @@ -394,43 +400,43 @@ function _linearNeighbors(cachedState, nodeId, direction, edgeLabel) { /** * Returns a defensive copy of the current materialized state. * - * @this {import('../WarpRuntime.js').default} - * @returns {Promise<import('../services/JoinReducer.js').WarpStateV5|null>} + * @returns {Promise<import('./WarpStateV5.js').default|null>} + * @this {QueryController} */ -export async function getStateSnapshot() { - if (!this._cachedState && !this._autoMaterialize) { +async function getStateSnapshot() { + if (!this._host._cachedState && !this._host._autoMaterialize) { return null; } - await this._ensureFreshState(); - if (!this._cachedState) { + await this._host._ensureFreshState(); + if (!this._host._cachedState) { return null; } - return createImmutableWarpStateV5(/** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState)); + return createImmutableWarpStateV5(/** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState)); } /** * Gets all visible nodes in the materialized state. 
* - * @this {import('../WarpRuntime.js').default} * @returns {Promise<string[]>} Array of node IDs * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getNodes() { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getNodes() { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); return [...orsetElements(s.nodeAlive)]; } /** * Gets all visible edges in the materialized state. * - * @this {import('../WarpRuntime.js').default} * @returns {Promise<Array<{from: string, to: string, label: string, props?: Record<string, unknown>}>>} Array of edge info * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getEdges() { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdges() { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); /** @type {Map<string, Record<string, unknown>>} */ const edgePropsByKey = new Map(); @@ -471,36 +477,36 @@ export async function getEdges() { /** * Returns the number of property entries in the materialized state. 
* - * @this {import('../WarpRuntime.js').default} * @returns {Promise<number>} Number of property entries * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getPropertyCount() { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getPropertyCount() { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); return s.prop.size; } /** * Creates a fluent query builder for the logical graph. * - * @this {import('../WarpRuntime.js').default} - * @returns {import('../services/QueryBuilder.js').default} A fluent query builder + * @returns {import('./QueryBuilder.js').default} A fluent query builder + * @this {QueryController} */ -export function query() { - return new QueryBuilder(this); +function query() { + return new QueryBuilder(this._host); } /** * Creates a first-class worldline handle over a pinned read source. * - * @this {import('../WarpRuntime.js').default} * @param {ObserverOptions} [options] - * @returns {import('../services/Worldline.js').default} + * @returns {import('./Worldline.js').default} + * @this {QueryController} */ -export function worldline(options = undefined) { +function worldline(options = undefined) { return new Worldline({ - graph: this, + graph: this._host, source: cloneObserverSource(options?.source) || { kind: 'live' }, }); } @@ -534,26 +540,26 @@ function normalizeObserverArgs(nameOrConfig, configOrOptions, maybeOptions) { /** * Creates a read-only observer over the current materialized state. 
* - * @this {import('../WarpRuntime.js').default} * @param {string|{ match: string|string[], expose?: string[], redact?: string[] }} nameOrConfig * Observer name or observer configuration * @param {{ match: string|string[], expose?: string[], redact?: string[] }|ObserverOptions} [configOrOptions] * Observer configuration when a name is supplied, otherwise observer options * @param {ObserverOptions} [maybeOptions] - Optional pinned read source - * @returns {Promise<import('../services/Observer.js').default>} A read-only observer + * @returns {Promise<import('./Observer.js').default>} A read-only observer + * @this {QueryController} */ -export async function observer(nameOrConfig, configOrOptions = undefined, maybeOptions = undefined) { +async function observer(nameOrConfig, configOrOptions = undefined, maybeOptions = undefined) { const { name, config, options } = normalizeObserverArgs(nameOrConfig, configOrOptions, maybeOptions); /** Validates that a match value is a non-empty string or non-empty string array. @param {unknown} m - Match value to validate @returns {boolean} True if valid */ const isValidMatch = (m) => typeof m === 'string' || (Array.isArray(m) && m.length > 0 && m.every(/** Checks that an element is a string. @param {unknown} i - Array element @returns {boolean} True if string */ i => typeof i === 'string')); if (!config || !isValidMatch(config.match)) { throw new Error('observer config.match must be a non-empty string or non-empty array of strings'); } - const snapshot = await resolveObserverSnapshot(this, options); + const snapshot = await resolveObserverSnapshot(this._host, options); return new Observer({ name, config, - graph: this, + graph: this._host, snapshot, source: cloneObserverSource(options?.source) || { kind: 'live' }, }); @@ -562,14 +568,14 @@ export async function observer(nameOrConfig, configOrOptions = undefined, maybeO /** * Computes the directed MDL translation cost from observer A to observer B. 
* - * @this {import('../WarpRuntime.js').default} * @param {{ match: string|string[], expose?: string[], redact?: string[] }} configA - Observer configuration for A * @param {{ match: string|string[], expose?: string[], redact?: string[] }} configB - Observer configuration for B * @returns {Promise<{cost: number, breakdown: {nodeLoss: number, edgeLoss: number, propLoss: number}}>} + * @this {QueryController} */ -export async function translationCost(configA, configB) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function translationCost(configA, configB) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); return computeTranslationCost(configA, configB, s); } @@ -615,7 +621,7 @@ function visibleEdgeRegister(register, birthEvent) { /** * Looks up the current node attachment registers directly from materialized state. * - * @param {import('../services/JoinReducer.js').WarpStateV5} state + * @param {import('./WarpStateV5.js').default} state * @param {string} nodeId * @returns {{ contentRegister: { eventId: import('../utils/EventId.js').EventId|null, value: string }, mimeRegister: { eventId: import('../utils/EventId.js').EventId|null, value: unknown }|null, sizeRegister: { eventId: import('../utils/EventId.js').EventId|null, value: unknown }|null }|null} */ @@ -637,7 +643,7 @@ function getNodeContentRegisters(state, nodeId) { /** * Looks up the current edge attachment registers directly from materialized state. * - * @param {import('../services/JoinReducer.js').WarpStateV5} state + * @param {import('./WarpStateV5.js').default} state * @param {string} from * @param {string} to * @param {string} label @@ -706,14 +712,14 @@ function extractContentMeta(contentRegister, mimeRegister, sizeRegister) { /** * Gets the content blob OID for a node, or null if none is attached. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to check * @returns {Promise<string|null>} Hex blob OID or null * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getContentOid(nodeId) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getContentOid(nodeId) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getNodeContentRegisters(s, nodeId); return registers?.contentRegister.value ?? null; } @@ -721,14 +727,14 @@ export async function getContentOid(nodeId) { /** * Gets structured content metadata for a node attachment, or null if none is attached. * - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to check * @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} Content metadata or null * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getContentMeta(nodeId) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getContentMeta(nodeId) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getNodeContentRegisters(s, nodeId); return registers ? extractContentMeta(registers.contentRegister, registers.mimeRegister, registers.sizeRegister) : null; } @@ -741,41 +747,41 @@ export async function getContentMeta(nodeId) { /** * Gets content bytes for a node attachment, or null if none is attached. * * Returns the raw bytes from `readBlob()`. Consumers wanting text * should decode the result with `new TextDecoder().decode(buf)`. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to get content for * @returns {Promise<Uint8Array|null>} Content bytes or null * @throws {import('../errors/PersistenceError.js').default} If the referenced * blob OID is not in the object store (code: `E_MISSING_OBJECT`), such as * after repository corruption, aggressive GC, or a partial clone missing the * blob object. + * @this {QueryController} */ -export async function getContent(nodeId) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getContent(nodeId) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getNodeContentRegisters(s, nodeId); if (!registers) { return null; } const { value: oid } = registers.contentRegister; - if (this._blobStorage) { - return await this._blobStorage.retrieve(oid); + if (this._host._blobStorage) { + return await this._host._blobStorage.retrieve(oid); } - return await this._persistence.readBlob(oid); + return await this._host._persistence.readBlob(oid); } /** * Gets the content blob OID for an edge, or null if none is attached. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} from - Source node ID * @param {string} to - Target node ID * @param {string} label - Edge label * @returns {Promise<string|null>} Hex blob OID or null * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getEdgeContentOid(from, to, label) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdgeContentOid(from, to, label) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getEdgeContentRegisters(s, from, to, label); return registers?.contentRegister.value ?? null; } @@ -783,16 +789,16 @@ export async function getEdgeContentOid(from, to, label) { /** * Gets structured content metadata for an edge attachment, or null if none is attached. * - * @this {import('../WarpRuntime.js').default} * @param {string} from - Source node ID * @param {string} to - Target node ID * @param {string} label - Edge label * @returns {Promise<{ oid: string, mime: string|null, size: number|null }|null>} Content metadata or null * @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`) + * @this {QueryController} */ -export async function getEdgeContentMeta(from, to, label) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdgeContentMeta(from, to, label) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getEdgeContentRegisters(s, from, to, label); return registers ? 
extractContentMeta(registers.contentRegister, registers.mimeRegister, registers.sizeRegister) @@ -805,7 +811,6 @@ export async function getEdgeContentMeta(from, to, label) { * Returns the raw bytes from `readBlob()`. Consumers wanting text * should decode the result with `new TextDecoder().decode(buf)`. * - * @this {import('../WarpRuntime.js').default} * @param {string} from - Source node ID * @param {string} to - Target node ID * @param {string} label - Edge label @@ -814,19 +819,20 @@ export async function getEdgeContentMeta(from, to, label) { * blob OID is not in the object store (code: `E_MISSING_OBJECT`), such as * after repository corruption, aggressive GC, or a partial clone missing the * blob object. + * @this {QueryController} */ -export async function getEdgeContent(from, to, label) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdgeContent(from, to, label) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getEdgeContentRegisters(s, from, to, label); if (!registers) { return null; } const { value: oid } = registers.contentRegister; - if (this._blobStorage) { - return await this._blobStorage.retrieve(oid); + if (this._host._blobStorage) { + return await this._host._blobStorage.retrieve(oid); } - return await this._persistence.readBlob(oid); + return await this._host._persistence.readBlob(oid); } /** @@ -835,23 +841,23 @@ export async function getEdgeContent(from, to, label) { * Returns an async iterable of Uint8Array chunks for incremental * consumption. Use `getContent()` when you want the full buffer. 
* - * @this {import('../WarpRuntime.js').default} * @param {string} nodeId - The node ID to get content for * @returns {Promise<AsyncIterable<Uint8Array>|null>} Async iterable of content chunks, or null + * @this {QueryController} */ -export async function getContentStream(nodeId) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getContentStream(nodeId) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getNodeContentRegisters(s, nodeId); if (!registers) { return null; } const { value: oid } = registers.contentRegister; - if (this._blobStorage && typeof this._blobStorage.retrieveStream === 'function') { - return this._blobStorage.retrieveStream(oid); + if (this._host._blobStorage && typeof this._host._blobStorage.retrieveStream === 'function') { + return this._host._blobStorage.retrieveStream(oid); } // Fallback: wrap buffered read as single-chunk async iterable - const buf = await this._persistence.readBlob(oid); + const buf = await this._host._persistence.readBlob(oid); return singleChunkAsyncIterable(buf); } @@ -861,24 +867,24 @@ export async function getContentStream(nodeId) { /** * Gets streaming content for an edge attachment. * * Returns an async iterable of Uint8Array chunks for incremental * consumption. Use `getEdgeContent()` when you want the full buffer.
* - * @this {import('../WarpRuntime.js').default} * @param {string} from - Source node ID * @param {string} to - Target node ID * @param {string} label - Edge label * @returns {Promise<AsyncIterable<Uint8Array>|null>} Async iterable of content chunks, or null + * @this {QueryController} */ -export async function getEdgeContentStream(from, to, label) { - await this._ensureFreshState(); - const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState); +async function getEdgeContentStream(from, to, label) { + await this._host._ensureFreshState(); + const s = /** @type {import('./WarpStateV5.js').default} */ (this._host._cachedState); const registers = getEdgeContentRegisters(s, from, to, label); if (!registers) { return null; } const { value: oid } = registers.contentRegister; - if (this._blobStorage && typeof this._blobStorage.retrieveStream === 'function') { - return this._blobStorage.retrieveStream(oid); + if (this._host._blobStorage && typeof this._host._blobStorage.retrieveStream === 'function') { + return this._host._blobStorage.retrieveStream(oid); } - const buf = await this._persistence.readBlob(oid); + const buf = await this._host._persistence.readBlob(oid); return singleChunkAsyncIterable(buf); } @@ -904,3 +910,55 @@ function singleChunkAsyncIterable(buf) { }, }; } + +// ── Controller class ────────────────────────────────────────────────────────── + +/** + * QueryController — read-only query surface for materialized graph state. + * + * Each public method delegates to the module-level function above, + * bound with `this` as the controller (which carries `_host`). + */ +export default class QueryController { + /** @type {QueryHost} */ + _host; + + /** + * Creates a QueryController bound to a WarpRuntime host. + * @param {QueryHost} host + */ + constructor(host) { + this._host = host; + } +} + +// Wire all query functions as methods on the controller prototype. +// The functions use this._host._xxx, so they work when this = controller.
+const queryFunctions = /** @type {const} */ ([ + 'hasNode', 'getNodeProps', 'getEdgeProps', 'neighbors', + 'getStateSnapshot', 'getNodes', 'getEdges', 'getPropertyCount', + 'query', 'worldline', 'observer', 'translationCost', + 'getContentOid', 'getContentMeta', 'getContent', + 'getEdgeContentOid', 'getEdgeContentMeta', 'getEdgeContent', + 'getContentStream', 'getEdgeContentStream', +]); + +/** @type {Record<string, Function>} */ +const fnMap = { + hasNode, getNodeProps, getEdgeProps, neighbors, + getStateSnapshot, getNodes, getEdges, getPropertyCount, + query, worldline, observer, translationCost, + getContentOid, getContentMeta, getContent, + getEdgeContentOid, getEdgeContentMeta, getEdgeContent, + getContentStream, getEdgeContentStream, +}; + +for (const name of queryFunctions) { + const fn = fnMap[name]; + Object.defineProperty(QueryController.prototype, name, { + value: fn, + writable: true, + configurable: true, + enumerable: false, + }); +} diff --git a/src/domain/services/StrandController.js b/src/domain/services/StrandController.js new file mode 100644 index 00000000..f0e5880b --- /dev/null +++ b/src/domain/services/StrandController.js @@ -0,0 +1,182 @@ +/** + * StrandController — encapsulates strand and conflict analysis operations. + * + * Extracted from strand.methods.js and conflict.methods.js. WarpRuntime + * delegates directly to this controller via defineProperty loops. + * + * @module domain/services/StrandController + */ + +import StrandService from './StrandService.js'; +import ConflictAnalyzerService from './ConflictAnalyzerService.js'; + +/** + * The host interface that StrandController depends on. + * + * StrandService and ConflictAnalyzerService both accept `{ graph }` where + * graph is the full WarpRuntime instance. This typedef documents that + * coupling explicitly.
+ * + * @typedef {import('../WarpRuntime.js').default} StrandHost + */ + +export default class StrandController { + /** @type {StrandHost} */ + _host; + + /** @type {StrandService} */ + _strandService; + + /** + * Creates a StrandController bound to a WarpRuntime host. + * @param {StrandHost} host - The WarpRuntime instance + */ + constructor(host) { + this._host = host; + this._strandService = new StrandService({ graph: host }); + } + + // ── Strand lifecycle ──────────────────────────────────────────────────── + + /** + * Creates a new strand with the given options. + * @param {import('./StrandService.js').StrandCreateOptions} [options] + * @returns {Promise} + */ + async createStrand(options) { + return await this._strandService.create(options); + } + + /** + * Braids a strand, merging its overlay back into the base graph. + * @param {string} strandId + * @param {import('./StrandService.js').StrandBraidOptions} [options] + * @returns {Promise} + */ + async braidStrand(strandId, options) { + return await this._strandService.braid(strandId, options); + } + + /** + * Retrieves the descriptor for a strand by its identifier. + * @param {string} strandId + * @returns {Promise} + */ + async getStrand(strandId) { + return await this._strandService.get(strandId); + } + + /** + * Lists all strand descriptors in the current graph. + * @returns {Promise} + */ + async listStrands() { + return await this._strandService.list(); + } + + /** + * Drops (deletes) a strand, removing its refs and overlay data. + * @param {string} strandId + * @returns {Promise} + */ + async dropStrand(strandId) { + return await this._strandService.drop(strandId); + } + + // ── Strand materialization & queries ───────────────────────────────────── + + /** + * Materializes the graph state scoped to a single strand. 
+ * @param {string} strandId + * @param {{ receipts?: boolean, ceiling?: number|null }} [options] + * @returns {Promise} + */ + async materializeStrand(strandId, options) { + return await this._strandService.materialize(strandId, options); + } + + /** + * Retrieves all patch entries belonging to a strand. + * @param {string} strandId + * @param {{ ceiling?: number|null }} [options] + * @returns {Promise>} + */ + async getStrandPatches(strandId, options) { + return await this._strandService.getPatchEntries(strandId, options); + } + + /** + * Returns the patch SHAs that touched a given entity within a strand. + * @param {string} strandId + * @param {string} entityId + * @param {{ ceiling?: number|null }} [options] + * @returns {Promise<string[]>} + */ + async patchesForStrand(strandId, entityId, options) { + return await this._strandService.patchesFor(strandId, entityId, options); + } + + // ── Strand patching ───────────────────────────────────────────────────── + + /** + * Creates a PatchBuilderV2 scoped to a strand for manual patch construction. + * @param {string} strandId + * @returns {Promise<import('./PatchBuilderV2.js').PatchBuilderV2>} + */ + async createStrandPatch(strandId) { + return await this._strandService.createPatchBuilder(strandId); + } + + /** + * Applies a patch to a strand using a builder callback and commits it. + * @param {string} strandId + * @param {(p: import('./PatchBuilderV2.js').PatchBuilderV2) => void | Promise<void>} build + * @returns {Promise} + */ + async patchStrand(strandId, build) { + return await this._strandService.patch(strandId, build); + } + + // ── Speculative intents ───────────────────────────────────────────────── + + /** + * Queues a speculative intent on a strand without committing it.
+ * @param {string} strandId + * @param {(p: import('./PatchBuilderV2.js').PatchBuilderV2) => void | Promise<void>} build + * @returns {Promise<{ intentId: string, enqueuedAt: string, patch: import('../types/WarpTypesV2.js').PatchV2, reads: string[], writes: string[], contentBlobOids: string[] }>} + */ + async queueStrandIntent(strandId, build) { + return await this._strandService.queueIntent(strandId, build); + } + + /** + * Lists all pending intents queued on a strand. + * @param {string} strandId + * @returns {Promise>} + */ + async listStrandIntents(strandId) { + return await this._strandService.listIntents(strandId); + } + + /** + * Advances a strand by one tick, draining queued intents with conflict detection. + * @param {string} strandId + * @returns {Promise<{ tickId: string, strandId: string, tickIndex: number, createdAt: string, drainedIntentCount: number, admittedIntentIds: string[], rejected: Array<{ intentId: string, reason: string, conflictsWith: string[], reads: string[], writes: string[] }>, baseOverlayHeadPatchSha: string|null, overlayHeadPatchSha: string|null, overlayPatchShas: string[] }>} + */ + async tickStrand(strandId) { + return await this._strandService.tick(strandId); + } + + // ── Conflict analysis ─────────────────────────────────────────────────── + + /** + * Analyzes read-only conflict provenance over either the current frontier + * or an explicit strand, with an optional Lamport ceiling.
+ * @param {import('./ConflictAnalyzerService.js').ConflictAnalyzeOptions} [options] + * @returns {Promise} + */ + async analyzeConflicts(options) { + const analyzer = new ConflictAnalyzerService({ graph: this._host }); + return await analyzer.analyze(options); + } +} diff --git a/src/domain/services/StrandService.js b/src/domain/services/StrandService.js index 4e57177b..670761dd 100644 --- a/src/domain/services/StrandService.js +++ b/src/domain/services/StrandService.js @@ -27,8 +27,9 @@ import { createImmutableValue, createImmutableWarpStateV5 } from './ImmutableSna import { ProvenanceIndex } from './ProvenanceIndex.js'; import { encodePatchMessage } from './WarpMessageCodec.js'; -/** @typedef {import('../WarpRuntime.js').default} WarpRuntime */ -/** @typedef {import('../types/WarpTypesV2.js').PatchV2} PatchV2 */ + +/** @import { default as WarpRuntime } from '../WarpRuntime.js' */ +/** @import { PatchV2 } from '../types/WarpTypesV2.js' */ /** * @typedef {{ * strandId: string, @@ -1050,7 +1051,7 @@ export default class StrandService { * * @param {string} strandId * @param {{ receipts?: boolean, ceiling?: number|null }} [options] - * @returns {Promise} + * @returns {Promise} */ async materialize(strandId, options = {}) { const detached = await openDetachedReadGraph(this._graph); diff --git a/src/domain/services/StreamingBitmapIndexBuilder.js b/src/domain/services/StreamingBitmapIndexBuilder.js index 35c4e4fa..95f8371c 100644 --- a/src/domain/services/StreamingBitmapIndexBuilder.js +++ b/src/domain/services/StreamingBitmapIndexBuilder.js @@ -331,7 +331,7 @@ export default class StreamingBitmapIndexBuilder { const type = key.substring(0, 3); const sha = key.substring(4); const prefix = sha.substring(0, 2); - const bucket = type === 'fwd' ? bitmapShards['fwd'] : bitmapShards['rev']; + const bucket = type === 'fwd' ? 
bitmapShards.fwd : bitmapShards.rev; if (bucket[prefix] === undefined) { bucket[prefix] = {}; diff --git a/src/domain/services/SubscriptionController.js b/src/domain/services/SubscriptionController.js new file mode 100644 index 00000000..0ebcfeda --- /dev/null +++ b/src/domain/services/SubscriptionController.js @@ -0,0 +1,247 @@ +/** + * SubscriptionController — graph change subscription and watch logic. + * + * Extracted from subscribe.methods.js. Manages subscriber registration, + * glob-filtered watches with optional polling, and deferred replay. + * + * @module domain/services/SubscriptionController + */ + +import { diffStates, isEmptyDiff } from './StateDiff.js'; +import { matchGlob } from '../utils/matchGlob.js'; + +/** @import { WarpStateV5 } from './JoinReducer.js' */ +/** @import { StateDiffResult, EdgeChange, PropSet, PropRemoved } from './StateDiff.js' */ + +/** + * @typedef {Object} Subscriber + * @property {(diff: StateDiffResult) => void} onChange + * @property {((error: unknown) => void)|undefined} [onError] + * @property {boolean} pendingReplay + */ + +/** + * The host interface that SubscriptionController depends on. + * + * @typedef {Object} SubscriptionHost + * @property {WarpStateV5|null} _cachedState + * @property {Array<{onChange: Function, onError?: Function, pendingReplay?: boolean}>} _subscribers + * @property {() => Promise} hasFrontierChanged + * @property {(options?: Record) => Promise} materialize + */ + +export default class SubscriptionController { + /** @type {SubscriptionHost} */ + _host; + + /** + * Creates a SubscriptionController bound to a WarpRuntime host. + * @param {SubscriptionHost} host + */ + constructor(host) { + this._host = host; + } + + /** + * Subscribes to graph changes. + * + * The `onChange` handler is called after each `materialize()` that results in + * state changes. The handler receives a diff object describing what changed. 
+ * + * When `replay: true` is set and `_cachedState` is available, immediately + * fires `onChange` with a diff from empty state to current state. If + * `_cachedState` is null, replay is deferred until the first materialize. + * + * @param {{ onChange: (diff: StateDiffResult) => void, onError?: (error: unknown) => void, replay?: boolean }} options + * @returns {{ unsubscribe: () => void }} + */ + subscribe({ onChange, onError, replay = false }) { + if (typeof onChange !== 'function') { + throw new Error('onChange must be a function'); + } + + const host = this._host; + const subscriber = { + onChange, + ...(onError !== undefined ? { onError } : {}), + pendingReplay: replay && !host._cachedState, + }; + host._subscribers.push(subscriber); + + // Immediate replay if requested and cached state is available + if (replay && host._cachedState) { + const diff = diffStates(null, host._cachedState); + if (!isEmptyDiff(diff)) { + try { + onChange(diff); + } catch (err) { + if (onError) { + try { + onError(/** @type {Error} */ (err)); + } catch { + // onError itself threw — swallow to prevent cascade + } + } + } + } + } + + return { + /** Removes this subscriber from the notification list. */ + unsubscribe: () => { + const index = host._subscribers.indexOf(subscriber); + if (index !== -1) { + host._subscribers.splice(index, 1); + } + }, + }; + } + + /** + * Watches for graph changes matching a pattern. + * + * Like `subscribe()`, but only fires for changes where node IDs match the + * provided glob pattern. When `poll` is set, periodically checks + * `hasFrontierChanged()` and auto-materializes if changed. + * + * @param {string|string[]} pattern + * @param {{ onChange: (diff: StateDiffResult) => void, onError?: (error: unknown) => void, poll?: number }} options + * @returns {{ unsubscribe: () => void }} + */ + watch(pattern, { onChange, onError, poll }) { + /** Checks whether a pattern is a non-empty string or array of strings. 
@param {string|string[]} p @returns {boolean} */ + const isValidPattern = (p) => typeof p === 'string' || (Array.isArray(p) && p.length > 0 && p.every(i => typeof i === 'string')); + if (!isValidPattern(pattern)) { + throw new Error('pattern must be a non-empty string or non-empty array of strings'); + } + if (typeof onChange !== 'function') { + throw new Error('onChange must be a function'); + } + if (poll !== undefined) { + if (typeof poll !== 'number' || !Number.isFinite(poll) || poll < 1000) { + throw new Error('poll must be a finite number >= 1000'); + } + } + + /** Tests whether a node ID matches the subscription pattern. @param {string} nodeId @returns {boolean} */ + const matchesPattern = (nodeId) => matchGlob(pattern, nodeId); + + /** + * Filtered onChange that only passes matching changes. + * @param {StateDiffResult} diff + */ + const filteredOnChange = (diff) => { + const filteredDiff = { + nodes: { + added: diff.nodes.added.filter(matchesPattern), + removed: diff.nodes.removed.filter(matchesPattern), + }, + edges: { + added: diff.edges.added.filter((/** @type {EdgeChange} */ e) => matchesPattern(e.from) || matchesPattern(e.to)), + removed: diff.edges.removed.filter((/** @type {EdgeChange} */ e) => matchesPattern(e.from) || matchesPattern(e.to)), + }, + props: { + set: diff.props.set.filter((/** @type {PropSet} */ p) => matchesPattern(p.nodeId)), + removed: diff.props.removed.filter((/** @type {PropRemoved} */ p) => matchesPattern(p.nodeId)), + }, + }; + + const hasChanges = + filteredDiff.nodes.added.length > 0 || + filteredDiff.nodes.removed.length > 0 || + filteredDiff.edges.added.length > 0 || + filteredDiff.edges.removed.length > 0 || + filteredDiff.props.set.length > 0 || + filteredDiff.props.removed.length > 0; + + if (hasChanges) { + onChange(filteredDiff); + } + }; + + // Reuse own subscription infrastructure + const subscription = this.subscribe({ + onChange: filteredOnChange, + ...(onError !== undefined ? 
{ onError } : {}), + }); + + const host = this._host; + + // Polling: periodically check frontier and auto-materialize if changed + /** @type {ReturnType<typeof setInterval>|null} */ + let pollIntervalId = null; + let pollInFlight = false; + if (poll !== undefined) { + pollIntervalId = setInterval(() => { + if (pollInFlight) { + return; + } + pollInFlight = true; + host.hasFrontierChanged() + .then(async (changed) => { + if (changed) { + await host.materialize(); + } + }) + .catch((err) => { + if (onError) { + try { + onError(err); + } catch { + // onError itself threw — swallow to prevent cascade + } + } + }) + .finally(() => { + pollInFlight = false; + }); + }, poll); + } + + return { + /** Stops polling and removes the filtered subscriber. */ + unsubscribe: () => { + if (pollIntervalId !== null) { + clearInterval(pollIntervalId); + pollIntervalId = null; + } + subscription.unsubscribe(); + }, + }; + } + + /** + * Notifies all subscribers of state changes. + * Handles deferred replay for subscribers added with `replay: true` before + * cached state was available.
+ * + * @param {StateDiffResult} diff + * @param {WarpStateV5} currentState + */ + _notifySubscribers(diff, currentState) { + for (const subscriber of /** @type {Subscriber[]} */ ([...this._host._subscribers])) { + try { + if (subscriber.pendingReplay) { + subscriber.pendingReplay = false; + const replayDiff = diffStates(null, currentState); + if (!isEmptyDiff(replayDiff)) { + subscriber.onChange(replayDiff); + } + } else { + if (isEmptyDiff(diff)) { + continue; + } + subscriber.onChange(diff); + } + } catch (err) { + if (typeof subscriber.onError === 'function') { + try { + subscriber.onError(err); + } catch { + // onError itself threw — swallow to prevent cascade + } + } + } + } + } +} diff --git a/src/domain/services/SyncController.js b/src/domain/services/SyncController.js index c8d3ce3a..e9768f55 100644 --- a/src/domain/services/SyncController.js +++ b/src/domain/services/SyncController.js @@ -38,7 +38,7 @@ import SyncTrustGate from './SyncTrustGate.js'; * in unit tests. * * @typedef {Object} SyncHost - * @property {import('../services/JoinReducer.js').WarpStateV5|null} _cachedState + * @property {import('./JoinReducer.js').WarpStateV5|null} _cachedState * @property {Map|null} _lastFrontier * @property {boolean} _stateDirty * @property {number} _patchesSinceGC @@ -52,7 +52,7 @@ import SyncTrustGate from './SyncTrustGate.js'; * @property {number} _patchesSinceCheckpoint * @property {(op: string, t0: number, opts?: {metrics?: string, error?: Error}) => void} _logTiming * @property {(options?: Record) => Promise} materialize - * @property {(state: import('../services/JoinReducer.js').WarpStateV5) => Promise} _setMaterializedState + * @property {(state: import('./JoinReducer.js').WarpStateV5) => Promise} _setMaterializedState * @property {() => Promise} discoverWriters * @property {((trust: { mode?: 'off'|'log-only'|'enforce', pin?: string|null }|undefined|null) => SyncTrustGate|null)} [_createSyncTrustGate] */ diff --git a/src/domain/services/SyncProtocol.js 
b/src/domain/services/SyncProtocol.js index 5f80519a..cb73d8e7 100644 --- a/src/domain/services/SyncProtocol.js +++ b/src/domain/services/SyncProtocol.js @@ -41,6 +41,7 @@ import nullLogger from '../utils/nullLogger.js'; import { decodePatchMessage, assertOpsCompatible, SCHEMA_V3 } from './WarpMessageCodec.js'; import { join, cloneStateV5, isKnownRawOp } from './JoinReducer.js'; import SchemaUnsupportedError from '../errors/SchemaUnsupportedError.js'; +import SyncError from '../errors/SyncError.js'; import EncryptionError from '../errors/EncryptionError.js'; import PersistenceError from '../errors/PersistenceError.js'; import { cloneFrontier, updateFrontier } from './Frontier.js'; @@ -233,11 +234,10 @@ export async function loadPatchRange(persistence, _graphName, writerId, fromSha, // If fromSha was specified but we didn't reach it, we have divergence if (fromSha !== null && fromSha !== undefined && fromSha.length > 0 && cur === null) { - const err = /** @type {Error & { code: string }} */ (new Error( - `Divergence detected: ${toSha} does not descend from ${fromSha} for writer ${writerId}` - )); - err.code = 'E_SYNC_DIVERGENCE'; - throw err; + throw new SyncError( + `Divergence detected: ${toSha} does not descend from ${fromSha} for writer ${writerId}`, + { code: 'E_SYNC_DIVERGENCE', context: { writerId, fromSha, toSha } }, + ); } return patches; diff --git a/src/domain/services/VisibleStateComparisonV5.js b/src/domain/services/VisibleStateComparisonV5.js index 197db966..f3975b88 100644 --- a/src/domain/services/VisibleStateComparisonV5.js +++ b/src/domain/services/VisibleStateComparisonV5.js @@ -1,13 +1,13 @@ import { canonicalStringify } from '../utils/canonicalStringify.js'; import { createStateReaderV5 } from './StateReaderV5.js'; + +/** @import { VisibleNodeViewV5, VisibleStateComparisonV5, VisibleStateNeighborV5, VisibleStateReaderV5 } from '../../../index.js' */ export const VISIBLE_STATE_COMPARISON_VERSION = 'visible-state-compare/v1'; /** - * @typedef 
{import('../../../index.js').VisibleStateReaderV5} VisibleStateReaderV5 - * @typedef {import('../../../index.js').VisibleNodeViewV5} VisibleNodeViewV5 - * @typedef {import('../../../index.js').VisibleStateNeighborV5} VisibleStateNeighborV5 - * @typedef {import('../../../index.js').VisibleStateComparisonV5} VisibleStateComparisonV5 + + * @typedef {import('./JoinReducer.js').WarpStateV5} WarpStateV5 */ diff --git a/src/domain/services/VisibleStateScopeV1.js b/src/domain/services/VisibleStateScopeV1.js index cb2b5e41..d5f51166 100644 --- a/src/domain/services/VisibleStateScopeV1.js +++ b/src/domain/services/VisibleStateScopeV1.js @@ -1,6 +1,7 @@ import QueryError from '../errors/QueryError.js'; import { createORSet, orsetContains } from '../crdt/ORSet.js'; import { vvClone } from '../crdt/VersionVector.js'; +import WarpStateV5 from './WarpStateV5.js'; import { normalizeRawOp } from './OpNormalizer.js'; import { decodeEdgeKey, @@ -18,7 +19,6 @@ import { * @typedef {{ * nodeIdPrefixes?: VisibleStateScopePrefixFilterV1 * }} VisibleStateScopeV1 - * @typedef {import('./JoinReducer.js').WarpStateV5} WarpStateV5 */ /** @@ -369,13 +369,13 @@ export function scopeMaterializedStateV5(state, scope) { state.edgeAlive.tombstones, ); - return { + return new WarpStateV5({ nodeAlive: scopedNodeAlive, edgeAlive: scopedEdgeAlive, prop: collectScopedProps(state, scopedNodeIds, scopedEdgeKeys), observedFrontier: vvClone(state.observedFrontier), edgeBirthEvent: collectScopedEdgeBirthEvents(state, scopedEdgeKeys), - }; + }); } /** diff --git a/src/domain/services/VisibleStateTransferPlannerV5.js b/src/domain/services/VisibleStateTransferPlannerV5.js index eff5f21a..1d1d540e 100644 --- a/src/domain/services/VisibleStateTransferPlannerV5.js +++ b/src/domain/services/VisibleStateTransferPlannerV5.js @@ -5,14 +5,10 @@ import { } from './KeyCodec.js'; import { canonicalStringify } from '../utils/canonicalStringify.js'; + +/** @import { ContentMeta, VisibleStateReaderV5, 
VisibleStateTransferOperationV1, VisibleStateTransferPlanSummaryV1 } from '../../../index.js' */ export const VISIBLE_STATE_TRANSFER_PLAN_VERSION = 'visible-state-transfer-plan/v1'; -/** - * @typedef {import('../../../index.js').VisibleStateReaderV5} VisibleStateReaderV5 - * @typedef {import('../../../index.js').ContentMeta} ContentMeta - * @typedef {import('../../../index.js').VisibleStateTransferOperationV1} VisibleStateTransferOperationV1 - * @typedef {import('../../../index.js').VisibleStateTransferPlanSummaryV1} VisibleStateTransferPlanSummaryV1 - */ const ATTACHMENT_PROPERTY_KEYS = new Set([ CONTENT_PROPERTY_KEY, diff --git a/src/domain/services/WarpStateV5.js b/src/domain/services/WarpStateV5.js new file mode 100644 index 00000000..b0b2b5af --- /dev/null +++ b/src/domain/services/WarpStateV5.js @@ -0,0 +1,86 @@ +/** + * WarpStateV5 — the core CRDT materialized state object. + * + * Holds the alive sets (OR-Set for nodes and edges), property registers + * (LWW), the observed version vector frontier, and edge birth events. + * + * @module domain/services/WarpStateV5 + */ + +import { createORSet, orsetClone } from '../crdt/ORSet.js'; +import { createVersionVector, vvClone } from '../crdt/VersionVector.js'; + +/** + * The CRDT materialized state for a WARP graph. + * + * Instances are mutable during reduce (patch application) but should + * be cloned before handing to consumers that expect isolation. + */ +export default class WarpStateV5 { + /** @type {import('../crdt/ORSet.js').ORSet} */ + nodeAlive; + + /** @type {import('../crdt/ORSet.js').ORSet} */ + edgeAlive; + + /** @type {Map>} */ + prop; + + /** @type {import('../crdt/VersionVector.js').VersionVector} */ + observedFrontier; + + /** + * EdgeKey → EventId of most recent EdgeAdd (for clean-slate prop visibility). + * @type {Map} + */ + edgeBirthEvent; + + /** + * Creates a WarpStateV5 from field values. 
+ * + * @param {{ + * nodeAlive: import('../crdt/ORSet.js').ORSet, + * edgeAlive: import('../crdt/ORSet.js').ORSet, + * prop: Map>, + * observedFrontier: import('../crdt/VersionVector.js').VersionVector, + * edgeBirthEvent?: Map + * }} fields + */ + constructor({ nodeAlive, edgeAlive, prop, observedFrontier, edgeBirthEvent }) { + this.nodeAlive = nodeAlive; + this.edgeAlive = edgeAlive; + this.prop = prop; + this.observedFrontier = observedFrontier; + this.edgeBirthEvent = edgeBirthEvent ?? /** @type {Map} */ (new Map()); + } + + /** + * Creates an empty state with fresh OR-Sets and version vector. + * + * @returns {WarpStateV5} + */ + static empty() { + return new WarpStateV5({ + nodeAlive: createORSet(), + edgeAlive: createORSet(), + prop: new Map(), + observedFrontier: createVersionVector(), + edgeBirthEvent: new Map(), + }); + } + + /** + * Creates a deep clone with independent data structures. + * + * @returns {WarpStateV5} + */ + clone() { + return new WarpStateV5({ + nodeAlive: orsetClone(this.nodeAlive), + edgeAlive: orsetClone(this.edgeAlive), + prop: new Map(this.prop), + observedFrontier: vvClone(this.observedFrontier), + edgeBirthEvent: new Map(this.edgeBirthEvent), + }); + } +} diff --git a/src/domain/services/Worldline.js b/src/domain/services/Worldline.js index 3b5762ba..5e7a3933 100644 --- a/src/domain/services/Worldline.js +++ b/src/domain/services/Worldline.js @@ -13,12 +13,13 @@ import LogicalTraversal from './LogicalTraversal.js'; import { toInternalStrandShape } from '../utils/strandPublicShape.js'; import { callInternalRuntimeMethod } from '../utils/callInternalRuntimeMethod.js'; -/** @typedef {import('../WarpRuntime.js').default} WarpRuntime */ -/** @typedef {import('../../../index.js').ObserverConfig} ObserverConfig */ + +/** @import { ObserverConfig, WorldlineOptions, WorldlineSource } from '../../../index.js' */ +/** @import { default as WarpRuntime } from '../WarpRuntime.js' */ /** - * @typedef 
{import('../../../index.js').WorldlineSource} WorldlineSource - * @typedef {import('../../../index.js').WorldlineOptions} WorldlineOptions - * @typedef {import('../services/JoinReducer.js').WarpStateV5 | { state: import('../services/JoinReducer.js').WarpStateV5, receipts: import('../types/TickReceipt.js').TickReceipt[] }} MaterializedSourceResult + + + * @typedef {import('./JoinReducer.js').WarpStateV5 | { state: import('./JoinReducer.js').WarpStateV5, receipts: import('../types/TickReceipt.js').TickReceipt[] }} MaterializedSourceResult * @typedef {{ * _materializeGraph: () => Promise<{ * state: unknown, @@ -141,7 +142,7 @@ function orUndefined(value) { * @param {WarpRuntime} graph * @param {{ kind: 'live', ceiling?: number|null }} source * @param {boolean} collectReceipts - * @returns {Promise} + * @returns {Promise} */ async function materializeLiveSource(graph, source, collectReceipts) { if (collectReceipts) { @@ -161,7 +162,7 @@ async function materializeLiveSource(graph, source, collectReceipts) { * @param {WarpRuntime} graph * @param {{ kind: 'coordinate', frontier: Map|Record, ceiling?: number|null }} source * @param {boolean} collectReceipts - * @returns {Promise} + * @returns {Promise} */ async function materializeCoordinateSource(graph, source, collectReceipts) { if (collectReceipts) { @@ -214,7 +215,7 @@ async function materializeStrandSource(graph, source, collectReceipts) { * @param {WarpRuntime} graph * @param {WorldlineSource} source * @param {boolean} collectReceipts - * @returns {Promise} + * @returns {Promise} */ async function materializeSource(graph, source, collectReceipts) { if (source.kind === 'live') { @@ -288,7 +289,7 @@ export default class Worldline { * Materializes the pinned worldline source into a detached snapshot. 
* * @param {{ receipts?: false } | { receipts: true }} [options] - * @returns {Promise} + * @returns {Promise} */ async materialize(options = undefined) { const detached = await openDetachedReadGraph(this._graph); diff --git a/src/domain/trust/TrustStateBuilder.js b/src/domain/trust/TrustStateBuilder.js index 8db56180..511a43fa 100644 --- a/src/domain/trust/TrustStateBuilder.js +++ b/src/domain/trust/TrustStateBuilder.js @@ -26,14 +26,30 @@ import { TrustRecordSchema } from './schemas.js'; */ /** - * @typedef {Object} TrustState - * @property {Map} activeKeys - keyId → key info - * @property {Map} revokedKeys - * @property {Map} writerBindings - "writerId\0keyId" → binding - * @property {Map} revokedBindings - * @property {Array<{recordId: string, error: string}>} errors - * @property {number} recordsProcessed - Total number of records fed to the builder + * TrustState — frozen aggregate of all trust chain records. */ +export class TrustState { + /** @type {Map} */ activeKeys; + /** @type {Map} */ revokedKeys; + /** @type {Map} */ writerBindings; + /** @type {Map} */ revokedBindings; + /** @type {Array<{recordId: string, error: string}>} */ errors; + /** @type {number} */ recordsProcessed; + + /** + * Creates a frozen TrustState. 
+ * @param {{ activeKeys: Map, revokedKeys: Map, writerBindings: Map, revokedBindings: Map, errors: Array<{recordId: string, error: string}>, recordsProcessed: number }} fields + */ + constructor(fields) { + this.activeKeys = fields.activeKeys; + this.revokedKeys = fields.revokedKeys; + this.writerBindings = fields.writerBindings; + this.revokedBindings = fields.revokedBindings; + this.errors = fields.errors; + this.recordsProcessed = fields.recordsProcessed; + Object.freeze(this); + } +} /** * @typedef {Object} TrustBuildOptions @@ -89,7 +105,7 @@ export function buildState(records, options = {}) { processRecord(rec, ctx); } - return Object.freeze({ + return new TrustState({ activeKeys: ctx.activeKeys, revokedKeys: ctx.revokedKeys, writerBindings: ctx.writerBindings, diff --git a/src/domain/trust/reasonCodes.js b/src/domain/trust/reasonCodes.js index 9a5daa15..df8c4336 100644 --- a/src/domain/trust/reasonCodes.js +++ b/src/domain/trust/reasonCodes.js @@ -56,23 +56,23 @@ export const TRUST_REASON_CODES = Object.freeze({ /** @type {ReadonlySet} */ export const POSITIVE_CODES = Object.freeze(new Set([ - TRUST_REASON_CODES['WRITER_BOUND_TO_ACTIVE_KEY'], + TRUST_REASON_CODES.WRITER_BOUND_TO_ACTIVE_KEY, ])); /** @type {ReadonlySet} */ export const NEGATIVE_CODES = Object.freeze(new Set([ - TRUST_REASON_CODES['WRITER_HAS_NO_ACTIVE_BINDING'], - TRUST_REASON_CODES['WRITER_BOUND_KEY_REVOKED'], - TRUST_REASON_CODES['BINDING_REVOKED'], - TRUST_REASON_CODES['KEY_UNKNOWN'], + TRUST_REASON_CODES.WRITER_HAS_NO_ACTIVE_BINDING, + TRUST_REASON_CODES.WRITER_BOUND_KEY_REVOKED, + TRUST_REASON_CODES.BINDING_REVOKED, + TRUST_REASON_CODES.KEY_UNKNOWN, ])); /** @type {ReadonlySet} */ export const SYSTEM_CODES = Object.freeze(new Set([ - TRUST_REASON_CODES['TRUST_REF_MISSING'], - TRUST_REASON_CODES['TRUST_PIN_INVALID'], - TRUST_REASON_CODES['TRUST_RECORD_SCHEMA_INVALID'], - TRUST_REASON_CODES['TRUST_SIGNATURE_INVALID'], - TRUST_REASON_CODES['TRUST_RECORD_CHAIN_INVALID'], - 
TRUST_REASON_CODES['TRUST_POLICY_INVALID'], + TRUST_REASON_CODES.TRUST_REF_MISSING, + TRUST_REASON_CODES.TRUST_PIN_INVALID, + TRUST_REASON_CODES.TRUST_RECORD_SCHEMA_INVALID, + TRUST_REASON_CODES.TRUST_SIGNATURE_INVALID, + TRUST_REASON_CODES.TRUST_RECORD_CHAIN_INVALID, + TRUST_REASON_CODES.TRUST_POLICY_INVALID, ])); diff --git a/src/domain/types/DeliveryObservation.js b/src/domain/types/DeliveryObservation.js index 6d950174..a69865a8 100644 --- a/src/domain/types/DeliveryObservation.js +++ b/src/domain/types/DeliveryObservation.js @@ -23,14 +23,49 @@ const modeSet = new Set(DELIVERY_MODES); */ /** - * @typedef {Object} DeliveryObservation - * @property {string} emissionId - Links to the EffectEmission - * @property {string} sinkId - Which sink/adapter handled it - * @property {'delivered' | 'suppressed' | 'failed' | 'skipped'} outcome - * @property {string} [reason] - Why (e.g., "replay mode") - * @property {number} timestamp - Wall-clock milliseconds - * @property {Readonly} lens - Execution context at delivery time + * DeliveryObservation — trace record of how a sink handled an emitted effect. */ +export class DeliveryObservation { + /** @type {string} Links to the EffectEmission */ + emissionId; + + /** @type {string} Which sink/adapter handled it */ + sinkId; + + /** @type {'delivered' | 'suppressed' | 'failed' | 'skipped'} */ + outcome; + + /** @type {string | undefined} Why (e.g., "replay mode"). Omitted (not null) when absent. */ + reason; + + /** @type {number} Wall-clock milliseconds */ + timestamp; + + /** @type {Readonly} Execution context at delivery time */ + lens; + + /** + * Creates an immutable DeliveryObservation. 
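The DeliveryObservation field comment promises `reason` is "Omitted (not null) when absent." One caveat with the class version: a declared class field like `reason;` creates the property (as `undefined`) on every instance, so a `'reason' in obs` check reports true even when the constructor skips the assignment — JSON serialization still drops `undefined` values, but true key omission needs the plain-object shape. A sketch of the convention with an illustrative factory, not the real API:

```javascript
// Sketch of the "omitted, not null" convention for optional fields:
// assign `reason` only when provided, so an absent reason means the
// key is not present at all. (Illustrative factory, not the real API.)
function makeObservation({ emissionId, reason }) {
  const obs = { emissionId };
  if (reason !== undefined) {
    obs.reason = reason;
  }
  return Object.freeze(obs);
}

const withReason = makeObservation({ emissionId: 'e1', reason: 'replay mode' });
const withoutReason = makeObservation({ emissionId: 'e2' });
console.log('reason' in withReason);        // true
console.log('reason' in withoutReason);     // false
console.log(JSON.stringify(withoutReason)); // {"emissionId":"e2"}
```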
+ * @param {{ emissionId: string, sinkId: string, outcome: string, reason?: string, timestamp: number, lens: { mode: string, suppressExternal: boolean } }} fields + */ + constructor({ emissionId, sinkId, outcome, reason, timestamp, lens }) { + requireNonEmptyString(emissionId, 'emissionId'); + requireNonEmptyString(sinkId, 'sinkId'); + validateOutcome(outcome); + validateTimestamp(timestamp); + validateLens(lens); + + this.emissionId = emissionId; + this.sinkId = sinkId; + this.outcome = /** @type {'delivered' | 'suppressed' | 'failed' | 'skipped'} */ (outcome); + this.timestamp = timestamp; + this.lens = freezeLens(lens); + if (reason !== undefined) { + this.reason = reason; + } + Object.freeze(this); + } +} // ============================================================================ // Validation @@ -124,34 +159,11 @@ function freezeLens(lens) { * }} params * @returns {Readonly} */ -export function createDeliveryObservation({ - emissionId, - sinkId, - outcome, - reason, - timestamp, - lens, -}) { - requireNonEmptyString(emissionId, 'emissionId'); - requireNonEmptyString(sinkId, 'sinkId'); - validateOutcome(outcome); - validateTimestamp(timestamp); - validateLens(lens); - - /** @type {{ emissionId: string, sinkId: string, outcome: 'delivered' | 'suppressed' | 'failed' | 'skipped', timestamp: number, lens: Readonly, reason?: string }} */ - const obs = { - emissionId, - sinkId, - outcome: /** @type {'delivered' | 'suppressed' | 'failed' | 'skipped'} */ (outcome), - timestamp, - lens: freezeLens(lens), - }; - - if (reason !== undefined) { - obs.reason = reason; - } - - return Object.freeze(obs); +export function createDeliveryObservation({ emissionId, sinkId, outcome, reason, timestamp, lens }) { + return new DeliveryObservation({ + emissionId, sinkId, outcome, timestamp, lens, + ...(reason !== undefined ? 
{ reason } : {}), + }); } // ============================================================================ diff --git a/src/domain/types/EffectEmission.js b/src/domain/types/EffectEmission.js index e14d8077..f2c97741 100644 --- a/src/domain/types/EffectEmission.js +++ b/src/domain/types/EffectEmission.js @@ -22,20 +22,56 @@ export { DELIVERY_MODES, DELIVERY_OUTCOMES }; // ============================================================================ /** - * @typedef {Object} EffectCoordinate - * @property {Record | null} frontier - Writer tip SHAs at emission time - * @property {number | null} ceiling - Lamport ceiling (if capped) + * Causal coordinate at emission time. */ +export class EffectCoordinate { + /** @type {Record | null} Writer tip SHAs at emission time */ + frontier; + + /** @type {number | null} Lamport ceiling (if capped) */ + ceiling; + + /** + * Creates an immutable EffectCoordinate. + * @param {{ frontier: Record | null, ceiling: number | null }} fields + */ + constructor({ frontier, ceiling }) { + this.frontier = frontier ? Object.freeze({ ...frontier }) : null; + this.ceiling = ceiling ?? null; + Object.freeze(this); + } +} /** - * @typedef {Object} EffectEmission - * @property {string} id - Unique emission ID - * @property {string} kind - Effect kind (generic string, app chooses meaning) - * @property {unknown} payload - Opaque effect payload - * @property {number} timestamp - Wall-clock milliseconds - * @property {string | null} writer - Writer ID (null if not writer-scoped) - * @property {Readonly} coordinate - Causal position + * EffectEmission — host-domain trace object for an outbound effect candidate. */ +export class EffectEmission { + /** @type {string} */ id; + /** @type {string} */ kind; + /** @type {unknown} */ payload; + /** @type {number} */ timestamp; + /** @type {string | null} */ writer; + /** @type {Readonly} */ coordinate; + + /** + * Creates an immutable EffectEmission. 
+ * @param {{ id: string, kind: string, payload: unknown, timestamp: number, writer: string | null, coordinate: { frontier: Record | null, ceiling: number | null } }} fields + */ + constructor({ id, kind, payload, timestamp, writer, coordinate }) { + requireNonEmptyString(id, 'id'); + requireNonEmptyString(kind, 'kind'); + validateTimestamp(timestamp); + validateCoordinate(coordinate); + + this.id = id; + this.kind = kind; + this.payload = payload; + this.timestamp = timestamp; + this.writer = writer ?? null; + this.coordinate = new EffectCoordinate(coordinate); + Object.freeze(this); + } +} // ============================================================================ // Validation @@ -96,26 +132,7 @@ function validateCoordinate(value) { * @returns {Readonly} */ export function createEffectEmission({ id, kind, payload, timestamp, writer, coordinate }) { - requireNonEmptyString(id, 'id'); - requireNonEmptyString(kind, 'kind'); - validateTimestamp(timestamp); - validateCoordinate(coordinate); - - const frozenCoordinate = Object.freeze({ - frontier: coordinate.frontier - ? Object.freeze({ ...coordinate.frontier }) - : null, - ceiling: coordinate.ceiling ?? null, - }); - - return Object.freeze({ - id, - kind, - payload, - timestamp, - writer: writer ?? 
null, - coordinate: frozenCoordinate, - }); + return new EffectEmission({ id, kind, payload, timestamp, writer, coordinate }); } // ============================================================================ diff --git a/src/domain/types/PatchDiff.js b/src/domain/types/PatchDiff.js index e3f14521..015415fc 100644 --- a/src/domain/types/PatchDiff.js +++ b/src/domain/types/PatchDiff.js @@ -24,27 +24,57 @@ */ /** - * @typedef {Object} PatchDiff - * @property {string[]} nodesAdded - Nodes that transitioned not-alive → alive - * @property {string[]} nodesRemoved - Nodes that transitioned alive → not-alive - * @property {EdgeDiffEntry[]} edgesAdded - Edges that transitioned not-alive → alive - * @property {EdgeDiffEntry[]} edgesRemoved - Edges that transitioned alive → not-alive - * @property {PropDiffEntry[]} propsChanged - Properties whose LWW winner actually changed + * PatchDiff — captures alive-ness transitions during patch application. */ +export class PatchDiff { + /** @type {EdgeDiffEntry[]} Edges that transitioned not-alive → alive */ + edgesAdded; + + /** @type {EdgeDiffEntry[]} Edges that transitioned alive → not-alive */ + edgesRemoved; + + /** @type {string[]} Nodes that transitioned not-alive → alive */ + nodesAdded; + + /** @type {string[]} Nodes that transitioned alive → not-alive */ + nodesRemoved; + + /** @type {PropDiffEntry[]} Properties whose LWW winner actually changed */ + propsChanged; + + /** + * Creates a PatchDiff from field values. + * @param {{ nodesAdded: string[], nodesRemoved: string[], edgesAdded: EdgeDiffEntry[], edgesRemoved: EdgeDiffEntry[], propsChanged: PropDiffEntry[] }} fields + */ + constructor({ nodesAdded, nodesRemoved, edgesAdded, edgesRemoved, propsChanged }) { + this.nodesAdded = nodesAdded; + this.nodesRemoved = nodesRemoved; + this.edgesAdded = edgesAdded; + this.edgesRemoved = edgesRemoved; + this.propsChanged = propsChanged; + } + + /** + * Creates an empty PatchDiff. 
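The PatchDiff merge path concatenates and deduplicates entries. A toy sketch of the deduplication idea using a Set over node IDs (names are illustrative; only node lists are shown, and the real helpers also handle edge and prop entries):

```javascript
// Toy sketch of diff merging with Set-based deduplication, in the
// spirit of the PatchDiff merge helpers (illustrative, nodes only).
function mergeNodeLists(a, b) {
  // Set preserves first-insertion order, so shared IDs keep their
  // position from the left-hand diff.
  return [...new Set([...a, ...b])];
}

const merged = mergeNodeLists(['n1', 'n2'], ['n2', 'n3']);
console.log(merged); // [ 'n1', 'n2', 'n3' ]
```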
+ * @returns {PatchDiff} + */ + static empty() { + return new PatchDiff({ + nodesAdded: [], + nodesRemoved: [], + edgesAdded: [], + edgesRemoved: [], + propsChanged: [], + }); + } +} /** * Creates an empty PatchDiff. - * * @returns {PatchDiff} */ export function createEmptyDiff() { - return { - nodesAdded: [], - nodesRemoved: [], - edgesAdded: [], - edgesRemoved: [], - propsChanged: [], - }; + return PatchDiff.empty(); } /** @@ -110,5 +140,5 @@ export function mergeDiffs(a, b) { const propsChanged = deduplicateProps(a.propsChanged.concat(b.propsChanged)); - return { nodesAdded, nodesRemoved, edgesAdded, edgesRemoved, propsChanged }; + return new PatchDiff({ nodesAdded, nodesRemoved, edgesAdded, edgesRemoved, propsChanged }); } diff --git a/src/domain/types/TickReceipt.js b/src/domain/types/TickReceipt.js index ae1f5fc3..8143a982 100644 --- a/src/domain/types/TickReceipt.js +++ b/src/domain/types/TickReceipt.js @@ -165,32 +165,48 @@ function validateOpResult(value, i) { */ /** - * @typedef {Object} TickReceipt - * @property {string} patchSha - SHA of the patch commit - * @property {string} writer - Writer ID that produced the patch - * @property {number} lamport - Lamport timestamp of the patch - * @property {ReadonlyArray>} ops - Per-operation outcomes (frozen) + * TickReceipt — immutable record of per-operation outcomes from a single patch. */ +export class TickReceipt { + /** @type {number} Lamport timestamp of the patch */ + lamport; + + /** @type {ReadonlyArray>} Per-operation outcomes (frozen) */ + ops; + + /** @type {string} SHA of the patch commit */ + patchSha; + + /** @type {string} Writer ID that produced the patch */ + writer; + + /** + * Creates an immutable TickReceipt. 
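TickReceipt's constructor calls `freezeOps(ops)`, which is referenced but not shown in this hunk. A plausible sketch — an assumption, not the actual implementation — freezes each op outcome and then the array itself, so neither level can be mutated after construction:

```javascript
// Plausible sketch of freezeOps (an assumption — the real function is
// not shown in this hunk): freeze each element, then the array.
function freezeOps(ops) {
  for (const op of ops) {
    Object.freeze(op);
  }
  return Object.freeze(ops);
}

const ops = freezeOps([{ status: 'applied' }, { status: 'skipped' }]);
console.log(Object.isFrozen(ops));    // true
console.log(Object.isFrozen(ops[0])); // true
```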
+ * @param {{ patchSha: string, writer: string, lamport: number, ops: OpOutcome[] }} fields + * @throws {Error} If any field is invalid + */ + constructor({ patchSha, writer, lamport, ops }) { + assertNonEmptyString(patchSha, 'patchSha'); + assertNonEmptyString(writer, 'writer'); + assertNonNegativeInt(lamport); + assertOpsArray(ops); + + this.lamport = lamport; + this.ops = freezeOps(ops); + this.patchSha = patchSha; + this.writer = writer; + Object.freeze(this); + } +} /** * Creates an immutable TickReceipt. * * @param {{ patchSha: string, writer: string, lamport: number, ops: OpOutcome[] }} params - * @returns {Readonly} Frozen tick receipt - * @throws {Error} If any parameter is invalid + * @returns {TickReceipt} */ export function createTickReceipt({ patchSha, writer, lamport, ops }) { - assertNonEmptyString(patchSha, 'patchSha'); - assertNonEmptyString(writer, 'writer'); - assertNonNegativeInt(lamport); - assertOpsArray(ops); - - return Object.freeze({ - patchSha, - writer, - lamport, - ops: freezeOps(ops), - }); + return new TickReceipt({ patchSha, writer, lamport, ops }); } /** diff --git a/src/domain/types/WarpErrors.js b/src/domain/types/WarpErrors.js index 58edeaf1..c71c5d8d 100644 --- a/src/domain/types/WarpErrors.js +++ b/src/domain/types/WarpErrors.js @@ -26,6 +26,7 @@ export function hasErrorCode(err) { typeof err === 'object' && err !== null && 'code' in err && + // eslint-disable-next-line @typescript-eslint/dot-notation -- Record requires bracket access (TS4111) typeof (/** @type {Record} */ (err))['code'] === 'string' ); } @@ -40,6 +41,7 @@ export function hasMessage(err) { typeof err === 'object' && err !== null && 'message' in err && + // eslint-disable-next-line @typescript-eslint/dot-notation -- Record requires bracket access (TS4111) typeof (/** @type {Record} */ (err))['message'] === 'string' ); } diff --git a/src/domain/types/WarpPersistence.js b/src/domain/types/WarpPersistence.js index fbcab24d..b1bc5a40 100644 --- 
a/src/domain/types/WarpPersistence.js +++ b/src/domain/types/WarpPersistence.js @@ -23,7 +23,7 @@ /** * Ref-only persistence — ref reads, writes, CAS, listing. - * @typedef {import('../../ports/RefPort.js').default} RefPersistence + */ /** diff --git a/src/domain/utils/EventId.js b/src/domain/utils/EventId.js index c99aaa16..883cdb5b 100644 --- a/src/domain/utils/EventId.js +++ b/src/domain/utils/EventId.js @@ -1,48 +1,62 @@ -/** - * EventId for total ordering of operations (WARP spec Section 7). - * - * @typedef {Object} EventId - * @property {number} lamport - Monotonic counter (positive integer) - * @property {string} writerId - Writer identifier (non-empty string) - * @property {string} patchSha - Patch commit SHA (hex OID, 4-64 chars) - * @property {number} opIndex - Operation index within patch (non-negative integer) - */ - // Regex for validating hex OID (4-64 hex characters) const HEX_OID_REGEX = /^[0-9a-f]{4,64}$/; /** - * Creates a validated EventId. - * - * @param {number} lamport - Must be positive integer (> 0) - * @param {string} writerId - Must be non-empty string - * @param {string} patchSha - Must be valid hex OID (4-64 chars) - * @param {number} opIndex - Must be non-negative integer (>= 0) - * @returns {EventId} - * @throws {Error} If validation fails + * EventId — total ordering identity for CRDT operations (WARP spec Section 7). 
*/ -export function createEventId(lamport, writerId, patchSha, opIndex) { - // Validate lamport is positive integer - if (!Number.isInteger(lamport) || lamport <= 0) { - throw new Error('lamport must be a positive integer'); - } +export class EventId { + /** @type {number} Monotonic counter (positive integer) */ + lamport; - // Validate writerId is non-empty string - if (typeof writerId !== 'string' || writerId.length === 0) { - throw new Error('writerId must be a non-empty string'); - } + /** @type {string} Writer identifier (non-empty string) */ + writerId; - // Validate patchSha is hex string 4-64 chars - if (typeof patchSha !== 'string' || !HEX_OID_REGEX.test(patchSha)) { - throw new Error('patchSha must be a hex string of 4-64 characters'); - } + /** @type {string} Patch commit SHA (hex OID, 4-64 chars) */ + patchSha; + + /** @type {number} Operation index within patch (non-negative integer) */ + opIndex; + + /** + * Creates a validated EventId. + * + * @param {number} lamport - Must be positive integer (> 0) + * @param {string} writerId - Must be non-empty string + * @param {string} patchSha - Must be valid hex OID (4-64 chars) + * @param {number} opIndex - Must be non-negative integer (>= 0) + */ + constructor(lamport, writerId, patchSha, opIndex) { + if (!Number.isInteger(lamport) || lamport <= 0) { + throw new Error('lamport must be a positive integer'); + } + if (typeof writerId !== 'string' || writerId.length === 0) { + throw new Error('writerId must be a non-empty string'); + } + if (typeof patchSha !== 'string' || !HEX_OID_REGEX.test(patchSha)) { + throw new Error('patchSha must be a hex string of 4-64 characters'); + } + if (!Number.isInteger(opIndex) || opIndex < 0) { + throw new Error('opIndex must be a non-negative integer'); + } - // Validate opIndex is non-negative integer - if (!Number.isInteger(opIndex) || opIndex < 0) { - throw new Error('opIndex must be a non-negative integer'); + this.lamport = lamport; + this.writerId = writerId; + 
this.patchSha = patchSha; + this.opIndex = opIndex; } +} - return { lamport, writerId, patchSha, opIndex }; +/** + * Creates a validated EventId. + * + * @param {number} lamport + * @param {string} writerId + * @param {string} patchSha + * @param {number} opIndex + * @returns {EventId} + */ +export function createEventId(lamport, writerId, patchSha, opIndex) { + return new EventId(lamport, writerId, patchSha, opIndex); } /** diff --git a/src/domain/utils/defaultCodec.js b/src/domain/utils/defaultCodec.js index 0be10a4a..ad1d64cc 100644 --- a/src/domain/utils/defaultCodec.js +++ b/src/domain/utils/defaultCodec.js @@ -55,15 +55,26 @@ function sortMapKeys(map) { return sorted; } +/** @type {ReadonlyArray} */ +const CBOR_NATIVE = [Uint8Array, Date, RegExp, Set]; + +/** + * Returns true if the value is a built-in type with its own CBOR encoding. + * @param {object} value + * @returns {boolean} + */ +function isCborNative(value) { + return CBOR_NATIVE.some((T) => value instanceof T); +} + /** - * Sorts keys of a plain object and recursively sorts nested values. + * Sorts keys of any object and recursively sorts nested values. + * Skips built-in types that have their own CBOR representation. * @param {Record} obj * @returns {Record} */ function sortObjectKeys(obj) { - if (obj.constructor !== Object && obj.constructor !== undefined) { - return obj; - } + if (isCborNative(obj)) { return obj; } /** @type {Record} */ const sorted = {}; for (const key of Object.keys(obj).sort()) { diff --git a/src/domain/utils/defaultCrypto.js b/src/domain/utils/defaultCrypto.js index d2991a6d..399c723d 100644 --- a/src/domain/utils/defaultCrypto.js +++ b/src/domain/utils/defaultCrypto.js @@ -1,3 +1,5 @@ +/** @import { Hash, Hmac } from 'node:crypto' */ + /** * Default crypto implementation for domain services. 
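EventId exists to give operations a total order, but this diff does not show the comparator. The conventional tiebreak chain for Lamport-stamped events is lamport, then writerId, then patchSha, then opIndex — the following is a sketch of that usual rule, and the WARP spec (Section 7) remains the authoritative ordering:

```javascript
// Sketch of a total-order comparator over EventId-shaped values using
// the conventional Lamport tiebreak chain (an assumption — check the
// WARP spec, Section 7, for the authoritative rule).
function compareEventIds(a, b) {
  if (a.lamport !== b.lamport) return a.lamport - b.lamport;
  if (a.writerId !== b.writerId) return a.writerId < b.writerId ? -1 : 1;
  if (a.patchSha !== b.patchSha) return a.patchSha < b.patchSha ? -1 : 1;
  return a.opIndex - b.opIndex;
}

const e1 = { lamport: 1, writerId: 'w1', patchSha: 'aaaa', opIndex: 0 };
const e2 = { lamport: 1, writerId: 'w1', patchSha: 'aaaa', opIndex: 1 };
const e3 = { lamport: 2, writerId: 'w0', patchSha: 'bbbb', opIndex: 0 };
const sorted = [e3, e2, e1].sort(compareEventIds);
console.log(sorted.map((e) => e.lamport)); // [ 1, 1, 2 ]
```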
* @@ -13,10 +15,6 @@ * @module domain/utils/defaultCrypto */ -/** - * @typedef {import('node:crypto').Hash} Hash - * @typedef {import('node:crypto').Hmac} Hmac - */ /** @type {((algorithm: string) => Hash)|null} */ let _createHash = null; diff --git a/src/domain/warp/conflict.methods.js b/src/domain/warp/conflict.methods.js deleted file mode 100644 index 483eba82..00000000 --- a/src/domain/warp/conflict.methods.js +++ /dev/null @@ -1,23 +0,0 @@ -/** - * Conflict analysis methods for WarpRuntime. - * - * @module domain/warp/conflict.methods - */ - -import ConflictAnalyzerService from '../services/ConflictAnalyzerService.js'; - -/** - * Analyze read-only conflict provenance over either the current frontier or - * an explicit strand, with an optional Lamport ceiling. - * - * This method performs zero durable writes. It does not materialize or mutate - * cached graph state, checkpoints, or persistent caches. - * - * @this {import('../WarpRuntime.js').default} - * @param {import('../services/ConflictAnalyzerService.js').ConflictAnalyzeOptions} [options] - * @returns {Promise} - */ -export async function analyzeConflicts(options) { - const analyzer = new ConflictAnalyzerService({ graph: this }); - return await analyzer.analyze(options); -} diff --git a/src/domain/warp/fork.methods.js b/src/domain/warp/fork.methods.js deleted file mode 100644 index ee89ded5..00000000 --- a/src/domain/warp/fork.methods.js +++ /dev/null @@ -1,320 +0,0 @@ -/** - * Fork and wormhole methods for WarpRuntime, plus backfill-rejection helpers. - * - * Every function uses `this` bound to a WarpRuntime instance at runtime - * via wireWarpMethods(). 
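The defaultCodec change replaces a fragile `constructor !== Object` check with an instance-of allowlist. The overall shape of the deterministic key sorting it supports can be sketched as follows — simplified and illustrative, since the real `sortObjectKeys` handles Maps separately via `sortMapKeys`:

```javascript
// Simplified sketch of deterministic key sorting with an instance-of
// allowlist, mirroring the isCborNative change (illustrative — the
// real sortObjectKeys handles Maps separately via sortMapKeys).
const CBOR_NATIVE = [Uint8Array, Date, RegExp, Set];

function sortKeys(value) {
  if (value === null || typeof value !== 'object') return value;
  // Built-ins with their own CBOR encoding pass through untouched.
  if (CBOR_NATIVE.some((T) => value instanceof T)) return value;
  const sorted = {};
  for (const key of Object.keys(value).sort()) {
    sorted[key] = sortKeys(value[key]);
  }
  return sorted;
}

const canonical = sortKeys({ b: 1, a: { d: new Date(0), c: 3 } });
console.log(Object.keys(canonical));        // [ 'a', 'b' ]
console.log(canonical.a.d instanceof Date); // true
```

The allowlist matters because values like `new Date()` have `constructor !== Object` but still need to survive intact rather than being flattened into plain objects.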
- * - * @module domain/warp/fork.methods - */ - -import { ForkError, DEFAULT_ADJACENCY_CACHE_SIZE } from './_internal.js'; -import { validateGraphName, validateWriterId, buildWriterRef, buildWritersPrefix } from '../utils/RefLayout.js'; -import { generateWriterId } from '../utils/WriterId.js'; -import { createWormhole as createWormholeImpl } from '../services/WormholeService.js'; - -// ============================================================================ -// Fork API -// ============================================================================ - -/** - * Creates a fork of this graph at a specific point in a writer's history. - * - * A fork creates a new WarpRuntime instance that shares history up to the - * specified patch SHA. Due to Git's content-addressed storage, the shared - * history is automatically deduplicated. The fork gets a new writer ID and - * operates independently from the original graph. - * - * **Key Properties:** - * - Fork materializes the same state as the original at the fork point - * - Writes to the fork don't appear in the original - * - Writes to the original after fork don't appear in the fork - * - History up to the fork point is shared (content-addressed dedup) - * - * @this {import('../WarpRuntime.js').default} - * @param {{ from: string, at: string, forkName?: string, forkWriterId?: string }} options - Fork configuration - * @returns {Promise} A new WarpRuntime instance for the fork - * @throws {ForkError} If `from` writer does not exist (code: `E_FORK_WRITER_NOT_FOUND`) - * @throws {ForkError} If `at` SHA does not exist (code: `E_FORK_PATCH_NOT_FOUND`) - * @throws {ForkError} If `at` SHA is not in the writer's chain (code: `E_FORK_PATCH_NOT_IN_CHAIN`) - * @throws {ForkError} If fork graph name is invalid (code: `E_FORK_NAME_INVALID`) - * @throws {ForkError} If a graph with the fork name already has refs (code: `E_FORK_ALREADY_EXISTS`) - * @throws {ForkError} If required parameters are missing or invalid (code: 
`E_FORK_INVALID_ARGS`) - * @throws {ForkError} If forkWriterId is invalid (code: `E_FORK_WRITER_ID_INVALID`) - */ -export async function fork({ from, at, forkName, forkWriterId }) { - const t0 = this._clock.now(); - - try { - // Validate required parameters - if (!from || typeof from !== 'string') { - throw new ForkError("Required parameter 'from' is missing or not a string", { - code: 'E_FORK_INVALID_ARGS', - context: { from }, - }); - } - - if (!at || typeof at !== 'string') { - throw new ForkError("Required parameter 'at' is missing or not a string", { - code: 'E_FORK_INVALID_ARGS', - context: { at }, - }); - } - - // 1. Validate that the `from` writer exists - const writers = await this.discoverWriters(); - if (!writers.includes(from)) { - throw new ForkError(`Writer '${from}' does not exist in graph '${this._graphName}'`, { - code: 'E_FORK_WRITER_NOT_FOUND', - context: { writerId: from, graphName: this._graphName, existingWriters: writers }, - }); - } - - // 2. Validate that `at` SHA exists in the repository - const nodeExists = await this._persistence.nodeExists(at); - if (!nodeExists) { - throw new ForkError(`Patch SHA '${at}' does not exist`, { - code: 'E_FORK_PATCH_NOT_FOUND', - context: { patchSha: at, writerId: from }, - }); - } - - // 3. Validate that `at` SHA is in the writer's chain - const writerRef = buildWriterRef(this._graphName, from); - const tipSha = await this._persistence.readRef(writerRef); - - if (tipSha === null || tipSha === undefined || tipSha === '') { - throw new ForkError(`Writer '${from}' has no commits`, { - code: 'E_FORK_WRITER_NOT_FOUND', - context: { writerId: from }, - }); - } - - // Walk the chain to verify `at` is reachable from the tip - const isInChain = await this._isAncestor(at, tipSha); - if (!isInChain) { - throw new ForkError(`Patch SHA '${at}' is not in writer '${from}' chain`, { - code: 'E_FORK_PATCH_NOT_IN_CHAIN', - context: { patchSha: at, writerId: from, tipSha }, - }); - } - - // 4. 
Generate or validate fork name (add random suffix to prevent collisions) - const resolvedForkName = - forkName ?? `${this._graphName}-fork-${Math.random().toString(36).slice(2, 10).padEnd(8, '0')}`; - try { - validateGraphName(resolvedForkName); - } catch (err) { - throw new ForkError(`Invalid fork name: ${/** @type {Error} */ (err).message}`, { - code: 'E_FORK_NAME_INVALID', - context: { forkName: resolvedForkName, originalError: /** @type {Error} */ (err).message }, - }); - } - - // 5. Check that the fork graph doesn't already exist (has any refs) - const forkWritersPrefix = buildWritersPrefix(resolvedForkName); - const existingForkRefs = await this._persistence.listRefs(forkWritersPrefix); - if (existingForkRefs.length > 0) { - throw new ForkError(`Graph '${resolvedForkName}' already exists`, { - code: 'E_FORK_ALREADY_EXISTS', - context: { forkName: resolvedForkName, existingRefs: existingForkRefs }, - }); - } - - // 6. Generate or validate fork writer ID - const resolvedForkWriterId = (forkWriterId !== undefined && forkWriterId !== null && forkWriterId !== '') ? forkWriterId : generateWriterId(); - try { - validateWriterId(resolvedForkWriterId); - } catch (err) { - throw new ForkError(`Invalid fork writer ID: ${/** @type {Error} */ (err).message}`, { - code: 'E_FORK_WRITER_ID_INVALID', - context: { forkWriterId: resolvedForkWriterId, originalError: /** @type {Error} */ (err).message }, - }); - } - - // 7. Create the fork's writer ref pointing to the `at` commit - const forkWriterRef = buildWriterRef(resolvedForkName, resolvedForkWriterId); - await this._persistence.updateRef(forkWriterRef, at); - - // 8. 
Open and return a new WarpRuntime instance for the fork - // Dynamic import to avoid circular dependency (WarpRuntime -> fork.methods -> WarpRuntime) - const { default: WarpRuntime } = await import('../WarpRuntime.js'); - - const forkGraph = await WarpRuntime.open({ - persistence: this._persistence, - graphName: resolvedForkName, - writerId: resolvedForkWriterId, - gcPolicy: this._gcPolicy, - adjacencyCacheSize: this._adjacencyCache?.maxSize ?? DEFAULT_ADJACENCY_CACHE_SIZE, - ...(this._checkpointPolicy ? { checkpointPolicy: this._checkpointPolicy } : {}), - autoMaterialize: this._autoMaterialize, - onDeleteWithData: this._onDeleteWithData, - ...(this._logger ? { logger: this._logger } : {}), - clock: this._clock, - crypto: this._crypto, - codec: this._codec, - }); - - this._logTiming('fork', t0, { - metrics: `from=${from} at=${at.slice(0, 7)} name=${resolvedForkName}`, - }); - - return forkGraph; - } catch (err) { - this._logTiming('fork', t0, { error: /** @type {Error} */ (err) }); - throw err; - } -} - -// ============================================================================ -// Wormhole API (HOLOGRAM) -// ============================================================================ - -/** - * Creates a wormhole compressing a range of patches. - * - * A wormhole is a compressed representation of a contiguous range of patches - * from a single writer. It preserves provenance by storing the original - * patches as a ProvenancePayload that can be replayed during materialization. - * - * **Key Properties:** - * - **Provenance Preservation**: The wormhole contains the full sub-payload, - * allowing exact replay of the compressed segment. - * - **Monoid Composition**: Two consecutive wormholes can be composed by - * concatenating their sub-payloads (use `WormholeService.composeWormholes`). - * - **Materialization Equivalence**: A wormhole + remaining patches produces - * the same state as materializing all patches. 
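The monoid-composition property described above can be sketched with plain arrays standing in for ProvenancePayload — the real helper is `WormholeService.composeWormholes`, and the adjacency check here is a simplification (real consecutive ranges meet at parent/child commits, not at an equal SHA):

```javascript
// Hedged sketch of wormhole composition: two consecutive single-writer
// wormholes compose by concatenating their sub-payloads. Plain arrays
// stand in for ProvenancePayload; the adjacency check is simplified.
function composeWormholes(w1, w2) {
  if (w1.writerId !== w2.writerId || w1.toSha !== w2.fromSha) {
    throw new Error('wormholes must be consecutive and single-writer');
  }
  return {
    fromSha: w1.fromSha,
    toSha: w2.toSha,
    writerId: w1.writerId,
    payload: [...w1.payload, ...w2.payload],
    patchCount: w1.patchCount + w2.patchCount,
  };
}

const a = { fromSha: 's1', toSha: 's2', writerId: 'w', payload: ['p1', 'p2'], patchCount: 2 };
const b = { fromSha: 's2', toSha: 's3', writerId: 'w', payload: ['p3'], patchCount: 1 };
const composed = composeWormholes(a, b);
console.log(composed.fromSha, composed.toSha, composed.patchCount); // s1 s3 3
```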
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} fromSha - SHA of the first (oldest) patch commit in the range - * @param {string} toSha - SHA of the last (newest) patch commit in the range - * @returns {Promise<{fromSha: string, toSha: string, writerId: string, payload: import('../services/ProvenancePayload.js').default, patchCount: number}>} The created wormhole edge - * @throws {import('../errors/WormholeError.js').default} If fromSha or toSha doesn't exist (E_WORMHOLE_SHA_NOT_FOUND) - * @throws {import('../errors/WormholeError.js').default} If fromSha is not an ancestor of toSha (E_WORMHOLE_INVALID_RANGE) - * @throws {import('../errors/WormholeError.js').default} If commits span multiple writers (E_WORMHOLE_MULTI_WRITER) - * @throws {import('../errors/WormholeError.js').default} If a commit is not a patch commit (E_WORMHOLE_NOT_PATCH) - */ -export async function createWormhole(fromSha, toSha) { - const t0 = this._clock.now(); - - try { - const wormhole = await createWormholeImpl({ - persistence: this._persistence, - graphName: this._graphName, - fromSha, - toSha, - codec: this._codec, - }); - - this._logTiming('createWormhole', t0, { - metrics: `${wormhole.patchCount} patches from=${fromSha.slice(0, 7)} to=${toSha.slice(0, 7)}`, - }); - - return wormhole; - } catch (err) { - this._logTiming('createWormhole', t0, { error: /** @type {Error} */ (err) }); - throw err; - } -} - -// ============================================================================ -// Backfill Rejection and Divergence Detection -// ============================================================================ - -/** - * Checks if ancestorSha is an ancestor of descendantSha. - * Walks the commit graph (linear per-writer chain assumption). 
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} ancestorSha - The potential ancestor commit SHA - * @param {string} descendantSha - The potential descendant commit SHA - * @returns {Promise} True if ancestorSha is an ancestor of descendantSha - * @private - */ -export async function _isAncestor(ancestorSha, descendantSha) { - if (!ancestorSha || !descendantSha) { - return false; - } - if (ancestorSha === descendantSha) { - return true; - } - - /** @type {string | null} */ - let cur = descendantSha; - const MAX_WALK = 100_000; - let steps = 0; - while (cur !== null) { - if (++steps > MAX_WALK) { - throw new Error(`_isAncestor: exceeded ${MAX_WALK} steps — possible cycle`); - } - const nodeInfo = await this._persistence.getNodeInfo(cur); - const parent = nodeInfo.parents?.[0] ?? null; - if (parent === ancestorSha) { - return true; - } - cur = parent; - } - return false; -} - -/** - * Determines relationship between incoming patch and checkpoint head. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} ckHead - The checkpoint head SHA for this writer - * @param {string} incomingSha - The incoming patch commit SHA - * @returns {Promise<'same' | 'ahead' | 'behind' | 'diverged'>} The relationship - * @private - */ -export async function _relationToCheckpointHead(ckHead, incomingSha) { - if (incomingSha === ckHead) { - return 'same'; - } - if (await this._isAncestor(ckHead, incomingSha)) { - return 'ahead'; - } - if (await this._isAncestor(incomingSha, ckHead)) { - return 'behind'; - } - return 'diverged'; -} - -/** - * Validates an incoming patch against checkpoint frontier. - * Uses graph reachability, NOT lamport timestamps. 
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} writerId - The writer ID for this patch - * @param {string} incomingSha - The incoming patch commit SHA - * @param {{state: import('../services/JoinReducer.js').WarpStateV5, frontier: Map, stateHash: string, schema: number}} checkpoint - The checkpoint to validate against - * @returns {Promise} - * @throws {Error} If patch is behind/same as checkpoint frontier (backfill rejected) - * @throws {Error} If patch does not extend checkpoint head (writer fork detected) - * @private - */ -export async function _validatePatchAgainstCheckpoint(writerId, incomingSha, checkpoint) { - if (checkpoint === null || checkpoint === undefined || (checkpoint.schema !== 2 && checkpoint.schema !== 3)) { - return; - } - - const ckHead = checkpoint.frontier?.get(writerId); - if (ckHead === undefined || ckHead === null || ckHead === '') { - return; // Checkpoint didn't include this writer - } - - const relation = await this._relationToCheckpointHead(ckHead, incomingSha); - - if (relation === 'same' || relation === 'behind') { - throw new Error( - `Backfill rejected for writer ${writerId}: ` + - `incoming patch is ${relation} checkpoint frontier` - ); - } - - if (relation === 'diverged') { - throw new Error( - `Writer fork detected for ${writerId}: ` + - `incoming patch does not extend checkpoint head` - ); - } - // relation === 'ahead' => OK -} diff --git a/src/domain/warp/materializeAdvanced.methods.js b/src/domain/warp/materializeAdvanced.methods.js index 3a853945..2c136c11 100644 --- a/src/domain/warp/materializeAdvanced.methods.js +++ b/src/domain/warp/materializeAdvanced.methods.js @@ -23,8 +23,7 @@ import BitmapNeighborProvider from '../services/BitmapNeighborProvider.js'; import { QueryError } from './_internal.js'; /** @typedef {import('../types/WarpPersistence.js').CorePersistence} CorePersistence */ -/** @typedef {import('../services/JoinReducer.js').WarpStateV5} WarpStateV5 */ -/** @typedef 
{import('../types/TickReceipt.js').TickReceipt} TickReceipt */ + /** * @typedef {{ outgoing: Map<string, Set<string>>, incoming: Map<string, Set<string>> }} AdjacencyMap @@ -33,6 +32,9 @@ import { QueryError } from './_internal.js'; import { buildWriterRef } from '../utils/RefLayout.js'; + +/** @import { WarpStateV5 } from '../services/JoinReducer.js' */ +/** @import { TickReceipt } from '../types/TickReceipt.js' */ /** * Creates a shallow-frozen public view of materialized state. * diff --git a/src/domain/warp/provenance.methods.js b/src/domain/warp/provenance.methods.js deleted file mode 100644 index 928b1c7e..00000000 --- a/src/domain/warp/provenance.methods.js +++ /dev/null @@ -1,286 +0,0 @@ -/** - * Provenance methods for WarpRuntime — patch lookups, slice materialization, - * backward causal cone computation, and causal sorting. - * - * Every function uses `this` bound to a WarpRuntime instance at runtime - * via wireWarpMethods(). - * - * @module domain/warp/provenance.methods - */ - -import { QueryError } from './_internal.js'; -import { createEmptyStateV5, reduceV5 } from '../services/JoinReducer.js'; -import { ProvenancePayload } from '../services/ProvenancePayload.js'; -import { decodePatchMessage, detectMessageKind } from '../services/WarpMessageCodec.js'; - -/** @typedef {import('../types/WarpTypesV2.js').PatchV2} PatchV2 */ - -/** - * Returns all patch SHAs that affected a given node or edge. - * - * "Affected" means the patch either read from or wrote to the entity - * (based on the patch's I/O declarations from HG/IO/1). - * - * If `autoMaterialize` is enabled, this will automatically materialize - * the state if dirty. Otherwise, call `materialize()` first.
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} entityId - The node ID or edge key to query - * @returns {Promise<string[]>} Array of patch SHAs that affected the entity, sorted alphabetically - * @throws {QueryError} If no cached state exists and autoMaterialize is off (code: `E_NO_STATE`) - */ -export async function patchesFor(entityId) { - await this._ensureFreshState(); - - if (this._provenanceDegraded) { - throw new QueryError('Provenance unavailable for cached seek. Re-seek with --no-persistent-cache or call materialize({ ceiling }) directly.', { - code: 'E_PROVENANCE_DEGRADED', - }); - } - - if (!this._provenanceIndex) { - throw new QueryError('No provenance index. Call materialize() first.', { - code: 'E_NO_STATE', - }); - } - return this._provenanceIndex.patchesFor(entityId); -} - -/** - * Materializes only the backward causal cone for a specific node. - * - * This implements the slicing theorem from Paper III (Computational Holography): - * Given a target node v, compute its backward causal cone D(v) - the set of - * all patches that contributed to v's current state - and replay only those. - * - * The algorithm: - * 1. Start with patches that directly wrote to the target node - * 2. For each patch, find entities it read from - * 3. Recursively gather all dependencies - * 4. Topologically sort by Lamport timestamp (causal order) - * 5. Replay the sorted patches against empty state - * - * **Requires a cached state.** Call materialize() first to build the provenance index.
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} nodeId - The target node ID to materialize the cone for - * @param {{receipts?: boolean}} [options] - Optional configuration - * @returns {Promise<{state: import('../services/JoinReducer.js').WarpStateV5, patchCount: number, receipts?: import('../types/TickReceipt.js').TickReceipt[]}>} - * Returns the sliced state with the patch count (for comparison with full materialization) - * @throws {QueryError} If no provenance index exists (code: `E_NO_STATE`) - * @throws {Error} If patch loading fails - */ -export async function materializeSlice(nodeId, options) { - const t0 = this._clock.now(); - const collectReceipts = options?.receipts === true; - - try { - // Ensure fresh state before accessing provenance index - await this._ensureFreshState(); - - if (this._provenanceDegraded) { - throw new QueryError('Provenance unavailable for cached seek. Re-seek with --no-persistent-cache or call materialize({ ceiling }) directly.', { - code: 'E_PROVENANCE_DEGRADED', - }); - } - - if (!this._provenanceIndex) { - throw new QueryError('No provenance index. Call materialize() first.', { - code: 'E_NO_STATE', - }); - } - - // 1. Compute backward causal cone using BFS over the provenance index - // Returns Map with patches already loaded (avoids double I/O) - const conePatchMap = await this._computeBackwardCone(nodeId); - - // 2. If no patches in cone, return empty state - if (conePatchMap.size === 0) { - const emptyState = createEmptyStateV5(); - this._logTiming('materializeSlice', t0, { metrics: '0 patches (empty cone)' }); - return { - state: emptyState, - patchCount: 0, - ...(collectReceipts ? { receipts: [] } : {}), - }; - } - - // 3. Convert cached patches to entry format (patches already loaded by _computeBackwardCone) - const patchEntries = []; - for (const [sha, patch] of conePatchMap) { - patchEntries.push({ patch, sha }); - } - - // 4. 
Topologically sort by causal order (Lamport timestamp, then writer, then SHA) - const sortedPatches = this._sortPatchesCausally(patchEntries); - - // 5. Replay: use reduceV5 directly when collecting receipts, otherwise use ProvenancePayload - this._logTiming('materializeSlice', t0, { metrics: `${sortedPatches.length} patches` }); - - if (collectReceipts) { - const result = /** @type {{state: import('../services/JoinReducer.js').WarpStateV5, receipts: import('../types/TickReceipt.js').TickReceipt[]}} */ (reduceV5(sortedPatches, undefined, { receipts: true })); - return { - state: result.state, - patchCount: sortedPatches.length, - receipts: result.receipts, - }; - } - - const payload = new ProvenancePayload(sortedPatches); - return { - state: payload.replay(), - patchCount: sortedPatches.length, - }; - } catch (err) { - this._logTiming('materializeSlice', t0, { error: /** @type {Error} */ (err) }); - throw err; - } -} - -/** - * Computes the backward causal cone for a node. - * - * Uses BFS over the provenance index: - * 1. Find all patches that wrote to the target node - * 2. For each patch, find entities it read from - * 3. Find all patches that wrote to those entities - * 4. Repeat until no new patches are found - * - * Returns a Map of SHA -> patch to avoid double-loading (the cone - * computation needs to read patches for their read-dependencies, - * so we cache them for later replay). - * - * @this {import('../WarpRuntime.js').default} - * @param {string} nodeId - The target node ID - * @returns {Promise<Map<string, PatchV2>>} Map of patch SHA to loaded patch object - */ -export async function _computeBackwardCone(nodeId) { - if (!this._provenanceIndex) { - throw new QueryError('No provenance index.
Call materialize() first.', { - code: 'E_NO_STATE', - }); - } - const cone = new Map(); // sha -> patch (cache loaded patches) - const visited = new Set(); // Visited entities - const queue = [nodeId]; // BFS queue of entities to process - let qi = 0; - - while (qi < queue.length) { - const entityId = /** @type {string} */ (queue[qi++]); - - if (visited.has(entityId)) { - continue; - } - visited.add(entityId); - - // Get all patches that affected this entity - const patchShas = /** @type {import('../services/ProvenanceIndex.js').ProvenanceIndex} */ (this._provenanceIndex).patchesFor(entityId); - - for (const sha of patchShas) { - if (cone.has(sha)) { - continue; - } - - // Load the patch and cache it - const patch = await this._loadPatchBySha(sha); - cone.set(sha, patch); - - // Add read dependencies to the queue - const patchReads = /** @type {{reads?: string[]}} */ (patch).reads; - if (patchReads) { - for (const readEntity of patchReads) { - if (!visited.has(readEntity)) { - queue.push(readEntity); - } - } - } - } - } - - return cone; -} - -/** - * Loads a single patch by its SHA. - * - * Thin wrapper around the internal `_loadPatchBySha` helper. Exposed for - * CLI/debug tooling (e.g. seek tick receipts) that needs to inspect patch - * operations without re-materializing intermediate states. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} sha - The patch commit SHA - * @returns {Promise<PatchV2>} The decoded patch object - * @throws {Error} If the commit is not a patch or loading fails - */ -export async function loadPatchBySha(sha) { - return await this._loadPatchBySha(sha); -} - -/** - * Loads a single patch by its SHA.
- * - * @this {import('./_internal.js').WarpGraphWithMixins} - * @param {string} sha - The patch commit SHA - * @returns {Promise<PatchV2>} The decoded patch object - * @throws {Error} If the commit is not a patch or loading fails - */ -export async function _loadPatchBySha(sha) { - const nodeInfo = await this._persistence.getNodeInfo(sha); - const kind = detectMessageKind(nodeInfo.message); - - if (kind !== 'patch') { - throw new Error(`Commit ${sha} is not a patch`); - } - - const patchMeta = decodePatchMessage(nodeInfo.message); - const patchBuffer = await this._readPatchBlob(patchMeta); - return /** @type {import('../types/WarpTypesV2.js').PatchV2} */ (this._codec.decode(patchBuffer)); -} - -/** - * Loads multiple patches by their SHAs. - * - * @this {import('../WarpRuntime.js').default} - * @param {string[]} shas - Array of patch commit SHAs - * @returns {Promise<Array<{patch: PatchV2, sha: string}>>} Array of patch entries - * @throws {Error} If any SHA is not a patch or loading fails - */ -export async function _loadPatchesBySha(shas) { - const entries = []; - - for (const sha of shas) { - const patch = await this._loadPatchBySha(sha); - entries.push({ patch, sha }); - } - - return entries; -} - -/** - * Sorts patches in causal order for deterministic replay. - * - * Sort order: Lamport timestamp (ascending), then writer ID, then SHA. - * This ensures deterministic ordering regardless of discovery order.
- * - * @this {import('../WarpRuntime.js').default} - * @param {Array<{patch: PatchV2, sha: string}>} patches - Unsorted patch entries - * @returns {Array<{patch: PatchV2, sha: string}>} Sorted patch entries - */ -export function _sortPatchesCausally(patches) { - return [...patches].sort((a, b) => { - // Primary: Lamport timestamp (ascending - earlier patches first) - const lamportDiff = (a.patch.lamport || 0) - (b.patch.lamport || 0); - if (lamportDiff !== 0) { - return lamportDiff; - } - - // Secondary: Writer ID (lexicographic) - const writerCmp = (a.patch.writer || '').localeCompare(b.patch.writer || ''); - if (writerCmp !== 0) { - return writerCmp; - } - - // Tertiary: SHA (lexicographic) for total ordering - return a.sha.localeCompare(b.sha); - }); -} diff --git a/src/domain/warp/strand.methods.js b/src/domain/warp/strand.methods.js deleted file mode 100644 index bd72803e..00000000 --- a/src/domain/warp/strand.methods.js +++ /dev/null @@ -1,200 +0,0 @@ -/** - * Strand methods for WarpRuntime. - * - * @module domain/warp/strand.methods - */ - -import StrandService from '../services/StrandService.js'; - -/** - * Creates a new strand with the given options. - * - * @this {import('../WarpRuntime.js').default} - * @param {import('../services/StrandService.js').StrandCreateOptions} [options] - * @returns {Promise} - */ -export async function createStrand(options) { - const service = new StrandService({ graph: this }); - return await service.create(options); -} - -/** - * Braids a strand, merging its overlay back into the base graph. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {import('../services/StrandService.js').StrandBraidOptions} [options] - * @returns {Promise} - */ -export async function braidStrand(strandId, options) { - const service = new StrandService({ graph: this }); - return await service.braid(strandId, options); -} - -/** - * Retrieves the descriptor for a strand by its identifier. 
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @returns {Promise} - */ -export async function getStrand(strandId) { - const service = new StrandService({ graph: this }); - return await service.get(strandId); -} - -/** - * Lists all strand descriptors in the current graph. - * - * @this {import('../WarpRuntime.js').default} - * @returns {Promise} - */ -export async function listStrands() { - const service = new StrandService({ graph: this }); - return await service.list(); -} - -/** - * Drops (deletes) a strand, removing its refs and overlay data. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @returns {Promise} - */ -export async function dropStrand(strandId) { - const service = new StrandService({ graph: this }); - return await service.drop(strandId); -} - -/** - * Materializes the graph state scoped to a single strand. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {{ receipts?: boolean, ceiling?: number|null }} [options] - * @returns {Promise} - */ -export async function materializeStrand(strandId, options) { - const service = new StrandService({ graph: this }); - return await service.materialize(strandId, options); -} - -/** - * Retrieves all patch entries belonging to a strand. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {{ ceiling?: number|null }} [options] - * @returns {Promise<Array<{patch: import('../types/WarpTypesV2.js').PatchV2, sha: string}>>} - */ -export async function getStrandPatches(strandId, options) { - const service = new StrandService({ graph: this }); - return await service.getPatchEntries(strandId, options); -} - -/** - * Returns the patch SHAs that touched a given entity within a strand.
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {string} entityId - * @param {{ ceiling?: number|null }} [options] - * @returns {Promise<string[]>} - */ -export async function patchesForStrand(strandId, entityId, options) { - const service = new StrandService({ graph: this }); - return await service.patchesFor(strandId, entityId, options); -} - -/** - * Creates a PatchBuilderV2 scoped to a strand for manual patch construction. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @returns {Promise<import('../services/PatchBuilderV2.js').PatchBuilderV2>} - */ -export async function createStrandPatch(strandId) { - const service = new StrandService({ graph: this }); - return await service.createPatchBuilder(strandId); -} - -/** - * Applies a patch to a strand using a builder callback and commits it. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {(p: import('../services/PatchBuilderV2.js').PatchBuilderV2) => void | Promise<void>} build - * @returns {Promise} - */ -export async function patchStrand(strandId, build) { - const service = new StrandService({ graph: this }); - return await service.patch(strandId, build); -} - -/** - * Queues a speculative intent on a strand without committing it. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @param {(p: import('../services/PatchBuilderV2.js').PatchBuilderV2) => void | Promise<void>} build - * @returns {Promise<{ - * intentId: string, - * enqueuedAt: string, - * patch: import('../types/WarpTypesV2.js').PatchV2, - * reads: string[], - * writes: string[], - * contentBlobOids: string[] - * }>} - */ -export async function queueStrandIntent(strandId, build) { - const service = new StrandService({ graph: this }); - return await service.queueIntent(strandId, build); -} - -/** - * Lists all pending intents queued on a strand.
- * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @returns {Promise<Array<{ intentId: string, enqueuedAt: string, patch: import('../types/WarpTypesV2.js').PatchV2, reads: string[], writes: string[], contentBlobOids: string[] }>>} - */ -export async function listStrandIntents(strandId) { - const service = new StrandService({ graph: this }); - return await service.listIntents(strandId); -} - -/** - * Advances a strand by one tick, draining queued intents with conflict detection. - * - * @this {import('../WarpRuntime.js').default} - * @param {string} strandId - * @returns {Promise<{ - * tickId: string, - * strandId: string, - * tickIndex: number, - * createdAt: string, - * drainedIntentCount: number, - * admittedIntentIds: string[], - * rejected: Array<{ - * intentId: string, - * reason: string, - * conflictsWith: string[], - * reads: string[], - * writes: string[] - * }>, - * baseOverlayHeadPatchSha: string|null, - * overlayHeadPatchSha: string|null, - * overlayPatchShas: string[] - * }>} - */ -export async function tickStrand(strandId) { - const service = new StrandService({ graph: this }); - return await service.tick(strandId); -} diff --git a/src/domain/warp/subscribe.methods.js b/src/domain/warp/subscribe.methods.js deleted file mode 100644 index 0ae64195..00000000 --- a/src/domain/warp/subscribe.methods.js +++ /dev/null @@ -1,284 +0,0 @@ -/** - * @module domain/warp/subscribe.methods - * - * Extracted subscribe, watch, and _notifySubscribers methods from WarpRuntime. - * Each function is bound to a WarpRuntime instance at runtime via `this`. - */ - -import { diffStates, isEmptyDiff } from '../services/StateDiff.js'; -import { matchGlob } from '../utils/matchGlob.js'; - -/** - * Subscribes to graph changes. - * - * The `onChange` handler is called after each `materialize()` that results in - * state changes. The handler receives a diff object describing what changed. - * - * When `replay: true` is set and `_cachedState` is available, immediately - * fires `onChange` with a diff from empty state to current state.
If - * `_cachedState` is null, replay is deferred until the first materialize. - * - * Errors thrown by handlers are caught and forwarded to `onError` if provided. - * One handler's error does not prevent other handlers from being called. - * - * @public - * @since 13.0.0 (stable) - * @stability stable - * @this {import('../WarpRuntime.js').default} - * @param {{ onChange: (diff: import('../services/StateDiff.js').StateDiffResult) => void, onError?: (error: unknown) => void, replay?: boolean }} options - Subscription options - * @returns {{unsubscribe: () => void}} Subscription handle - * @throws {Error} If onChange is not a function - * - * @example - * const { unsubscribe } = graph.subscribe({ - * onChange: (diff) => { - * console.log('Nodes added:', diff.nodes.added); - * console.log('Nodes removed:', diff.nodes.removed); - * }, - * onError: (err) => console.error('Handler error:', err), - * }); - * - * // Later, to stop receiving updates: - * unsubscribe(); - * - * @example - * // With replay: get initial state immediately - * await graph.materialize(); - * graph.subscribe({ - * onChange: (diff) => console.log('Initial or changed:', diff), - * replay: true, // Immediately fires with current state as additions - * }); - */ -export function subscribe({ onChange, onError, replay = false }) { - if (typeof onChange !== 'function') { - throw new Error('onChange must be a function'); - } - - const subscriber = { - onChange, - ...(onError !== undefined ? 
{ onError } : {}), - pendingReplay: replay && !this._cachedState, - }; - this._subscribers.push(subscriber); - - // Immediate replay if requested and cached state is available - if (replay && this._cachedState) { - const diff = diffStates(null, this._cachedState); - if (!isEmptyDiff(diff)) { - try { - onChange(diff); - } catch (err) { - if (onError) { - try { - onError(/** @type {Error} */ (err)); - } catch { - // onError itself threw — swallow to prevent cascade - } - } - } - } - } - - return { - /** Removes this subscriber from the notification list. */ - unsubscribe: () => { - const index = this._subscribers.indexOf(subscriber); - if (index !== -1) { - this._subscribers.splice(index, 1); - } - }, - }; -} - -/** - * Watches for graph changes matching a pattern. - * - * Like `subscribe()`, but only fires for changes where node IDs match the - * provided glob pattern. Uses the same pattern syntax as `query().match()`. - * - * - Nodes: filters `added` and `removed` to matching IDs - * - Edges: filters to edges where `from` or `to` matches the pattern - * - Props: filters to properties where `nodeId` matches the pattern - * - * If all changes are filtered out, the handler is not called. - * - * When `poll` is set, periodically checks `hasFrontierChanged()` and auto-materializes - * if the frontier has changed (e.g., remote writes detected). The poll interval must - * be at least 1000ms. 
- * - * @public - * @since 13.0.0 (stable) - * @stability stable - * @this {import('../WarpRuntime.js').default} - * @param {string|string[]} pattern - Glob pattern(s) (e.g., 'user:*', 'order:123', '*') - * @param {{ onChange: (diff: import('../services/StateDiff.js').StateDiffResult) => void, onError?: (error: unknown) => void, poll?: number }} options - Watch options - * @returns {{unsubscribe: () => void}} Subscription handle - * @throws {Error} If pattern is not a string or array of strings - * @throws {Error} If onChange is not a function - * @throws {Error} If poll is provided but less than 1000 - * - * @example - * const { unsubscribe } = graph.watch('user:*', { - * onChange: (diff) => { - * // Only user node changes arrive here - * console.log('User nodes added:', diff.nodes.added); - * }, - * }); - * - * @example - * // With polling: checks every 5s for remote changes - * const { unsubscribe } = graph.watch('user:*', { - * onChange: (diff) => console.log('User changed:', diff), - * poll: 5000, - * }); - * - * // Later, to stop receiving updates: - * unsubscribe(); - */ -export function watch(pattern, { onChange, onError, poll }) { - /** Checks whether a pattern is a non-empty string or array of strings. @param {string|string[]} p @returns {boolean} */ - const isValidPattern = (p) => typeof p === 'string' || (Array.isArray(p) && p.length > 0 && p.every(i => typeof i === 'string')); - if (!isValidPattern(pattern)) { - throw new Error('pattern must be a non-empty string or non-empty array of strings'); - } - if (typeof onChange !== 'function') { - throw new Error('onChange must be a function'); - } - if (poll !== undefined) { - if (typeof poll !== 'number' || !Number.isFinite(poll) || poll < 1000) { - throw new Error('poll must be a finite number >= 1000'); - } - } - - // Pattern matching logic - /** Tests whether a node ID matches the subscription pattern. 
@param {string} nodeId @returns {boolean} */ - const matchesPattern = (nodeId) => matchGlob(pattern, nodeId); - - /** - * Filtered onChange that only passes matching changes. - * @param {import('../services/StateDiff.js').StateDiffResult} diff - */ - const filteredOnChange = (diff) => { - const filteredDiff = { - nodes: { - added: diff.nodes.added.filter(matchesPattern), - removed: diff.nodes.removed.filter(matchesPattern), - }, - edges: { - added: diff.edges.added.filter((/** @type {import('../services/StateDiff.js').EdgeChange} */ e) => matchesPattern(e.from) || matchesPattern(e.to)), - removed: diff.edges.removed.filter((/** @type {import('../services/StateDiff.js').EdgeChange} */ e) => matchesPattern(e.from) || matchesPattern(e.to)), - }, - props: { - set: diff.props.set.filter((/** @type {import('../services/StateDiff.js').PropSet} */ p) => matchesPattern(p.nodeId)), - removed: diff.props.removed.filter((/** @type {import('../services/StateDiff.js').PropRemoved} */ p) => matchesPattern(p.nodeId)), - }, - }; - - // Only call handler if there are matching changes - const hasChanges = - filteredDiff.nodes.added.length > 0 || - filteredDiff.nodes.removed.length > 0 || - filteredDiff.edges.added.length > 0 || - filteredDiff.edges.removed.length > 0 || - filteredDiff.props.set.length > 0 || - filteredDiff.props.removed.length > 0; - - if (hasChanges) { - onChange(filteredDiff); - } - }; - - // Reuse subscription infrastructure - const subscription = this.subscribe({ - onChange: filteredOnChange, - ...(onError !== undefined ? 
{ onError } : {}), - }); - - // Polling: periodically check frontier and auto-materialize if changed - /** @type {ReturnType<typeof setInterval>|null} */ - let pollIntervalId = null; - let pollInFlight = false; - if (poll !== undefined) { - pollIntervalId = setInterval(() => { - if (pollInFlight) { - return; - } - pollInFlight = true; - this.hasFrontierChanged() - .then(async (changed) => { - if (changed) { - await this.materialize(); - } - }) - .catch((err) => { - if (onError) { - try { - onError(err); - } catch { - // onError itself threw — swallow to prevent cascade - } - } - }) - .finally(() => { - pollInFlight = false; - }); - }, poll); - } - - return { - /** Stops polling and removes the filtered subscriber. */ - unsubscribe: () => { - if (pollIntervalId !== null) { - clearInterval(pollIntervalId); - pollIntervalId = null; - } - subscription.unsubscribe(); - }, - }; -} - -/** - * @typedef {Object} Subscriber - * @property {(diff: import('../services/StateDiff.js').StateDiffResult) => void} onChange - * @property {((error: unknown) => void)|undefined} [onError] - * @property {boolean} pendingReplay - */ - -/** - * Notifies all subscribers of state changes. - * Handles deferred replay for subscribers added with `replay: true` before - * cached state was available.
- * - * @this {import('../WarpRuntime.js').default} - * @param {import('../services/StateDiff.js').StateDiffResult} diff - * @param {import('../services/JoinReducer.js').WarpStateV5} currentState - The current state for deferred replay - * @private - */ -export function _notifySubscribers(diff, currentState) { - for (const subscriber of /** @type {Subscriber[]} */ ([...this._subscribers])) { - try { - // Handle deferred replay: on first notification, send full state diff instead - if (subscriber.pendingReplay) { - subscriber.pendingReplay = false; - const replayDiff = diffStates(null, currentState); - if (!isEmptyDiff(replayDiff)) { - subscriber.onChange(replayDiff); - } - } else { - // Skip non-replay subscribers when diff is empty - if (isEmptyDiff(diff)) { - continue; - } - subscriber.onChange(diff); - } - } catch (err) { - if (typeof subscriber.onError === 'function') { - try { - subscriber.onError(err); - } catch { - // onError itself threw — swallow to prevent cascade - } - } - } - } -} diff --git a/src/infrastructure/codecs/CborCodec.js b/src/infrastructure/codecs/CborCodec.js index 14b2fa9c..47f5629c 100644 --- a/src/infrastructure/codecs/CborCodec.js +++ b/src/infrastructure/codecs/CborCodec.js @@ -75,15 +75,30 @@ const encoder = new Encoder({ mapsAsObjects: true, }); +/** @type {ReadonlyArray<Function>} */ +const CBOR_NATIVE_TYPES = [Uint8Array, Date, RegExp, Set, Map]; + +/** + * Returns true if the value is a built-in type with its own CBOR encoding. + * @param {object} value + * @returns {boolean} + * @private + */ +function isCborNative(value) { + return CBOR_NATIVE_TYPES.some((T) => value instanceof T); +} + /** - * Checks if a value is a plain object (constructed via Object or Object.create(null)). + * Checks if a value should have its keys sorted for canonical CBOR. + * Returns true for plain objects AND domain class instances. + * Returns false for built-in types with their own CBOR representation.
* * @param {unknown} value - The value to check - * @returns {boolean} True if value is a plain object + * @returns {boolean} True if value's keys should be sorted * @private */ function isPlainObject(value) { - return typeof value === 'object' && value !== null && (value.constructor === Object || value.constructor === undefined); + return typeof value === 'object' && value !== null && !isCborNative(/** @type {object} */ (value)); } /** diff --git a/test/unit/domain/services/VisibleStateScopeV1.test.js b/test/unit/domain/services/VisibleStateScopeV1.test.js index f843b5a0..2adb159f 100644 --- a/test/unit/domain/services/VisibleStateScopeV1.test.js +++ b/test/unit/domain/services/VisibleStateScopeV1.test.js @@ -15,6 +15,7 @@ import { normalizeVisibleStateScopeV1, scopeMaterializedStateV5, } from '../../../../src/domain/services/VisibleStateScopeV1.js'; +import WarpStateV5 from '../../../../src/domain/services/WarpStateV5.js'; function buildScopedFixtureState() { const nodeAlive = createORSet(); @@ -31,7 +32,7 @@ function buildScopedFixtureState() { [encodeEdgePropKey('task:1', 'comparison-artifact:cmp-1', 'governs', 'via'), lwwSet(createEventId(3, 'alice', 'abc1236', 0), 'control-plane')], ]); - return { + return new WarpStateV5({ nodeAlive, edgeAlive, prop, @@ -39,7 +40,7 @@ function buildScopedFixtureState() { edgeBirthEvent: new Map([ [edgeKey, createEventId(3, 'alice', 'abc1236', 0)], ]), - }; + }); } describe('VisibleStateScopeV1', () => { diff --git a/test/unit/scripts/public-api-advanced-guide-shape.test.js b/test/unit/scripts/public-api-advanced-guide-shape.test.js index 710a2b34..fc4ad4f5 100644 --- a/test/unit/scripts/public-api-advanced-guide-shape.test.js +++ b/test/unit/scripts/public-api-advanced-guide-shape.test.js @@ -31,7 +31,7 @@ describe('Advanced Guide engine-room shape', () => { expect(advancedGuide).toContain("factKind: 'coordinate-transfer-plan'"); expect(advancedGuide).toContain('[API Reference](API_REFERENCE.md)'); 
expect(advancedGuide).toContain('[Architecture](ARCHITECTURE.md)'); - expect(advancedGuide).toContain('OG-013'); - expect(advancedGuide).toContain('OG-014'); + expect(advancedGuide).toContain('Out-of-core materialization'); + expect(advancedGuide).toContain('Streaming graph traversal'); }); }); diff --git a/test/unit/scripts/release-policy-shape.test.js b/test/unit/scripts/release-policy-shape.test.js index 6b2291e9..4caed9b8 100644 --- a/test/unit/scripts/release-policy-shape.test.js +++ b/test/unit/scripts/release-policy-shape.test.js @@ -3,7 +3,7 @@ import { fileURLToPath } from 'node:url'; import { describe, expect, it } from 'vitest'; const releaseDoc = readFileSync( - fileURLToPath(new URL('../../../docs/release.md', import.meta.url)), + fileURLToPath(new URL('../../../docs/method/release.md', import.meta.url)), 'utf8', ); const preflight = readFileSync(