ID is a protocol and reference repository for portable human-AI interaction context.
Today it also functions as the repo-local hook layer for SET-compatible orchestration flows.
Main idea:
- any AI should quickly understand how to work with a specific person;
- context has depth levels: short, extended, full;
- using context implies responsibility to keep it updated;
- orchestration should be able to call explicit hook boundaries instead of relying on hidden chat state.
What this gives in practice:
- faster onboarding for a new tool or agent;
- less prompt boilerplate repeated by hand;
- explicit privacy, freshness, and loss boundaries;
- measurable with-vs-without-ID comparisons instead of ideology-only claims.
Current repo status:
- versioned protocol surface under `spec/`
- policy-aware portable artifacts (`interop`, `compact`, `mcp`)
- benchmark and public proof layer
- structured observed-behavior evidence
- lightweight onboarding bootstrap flow
- installable lightweight CLI surface (`idctl`)
ID now has a real execution role inside the wider ABVX toolchain:
- `SET` can orchestrate repo-local `ID` hooks in compatible repositories
- current CI-safe hooks are `pre_task` and `weekly_review`
- `ID` stays the context/protocol layer, while `SET` stays the orchestration layer
- `agentsgen` remains the repo-docs/runtime companion, not a replacement for portable human context
Practical consequence:
- if a repo is `ID`-compatible, `SET` can validate or refresh human-context boundaries before and after the main repo workflow
- if a repo only needs repo-scoped agent docs, `agentsgen` is enough on its own
Why this beats ad-hoc prompts or chat memory in some workflows:
- system prompts are fragile and usually copied by hand across tools;
- chat-native memory is siloed inside one product and hard to audit;
- project instructions help per repo, but not across roles like writing, research, or multi-tool orchestration;
- `ID` makes preferences, constraints, freshness, privacy, and portability explicit and versioned.
Short explainer:
docs/WHY_ID.md
Golden workflow examples:
docs/EXAMPLES.md
Proof page:
docs/PROOF.md
Release/install path:
- `docs/RELEASES.md`: tagged GitHub release flow with `sdist`/`wheel` artifacts
- PyPI publish workflow prepared for trusted publishing once package naming is finalized
Use this if you want the smallest practical entrypoint.
You get:
- a starter profile
- a handshake
- a privacy policy starter
- a compact portable artifact
Start here:
docs/LITE.md
Use this if you want to move context safely between tools or people.
You get:
- validated interop/compact/MCP artifacts
- explicit privacy policy
- documented loss boundaries
Start here:
docs/SHARE.md
Use this if you want proof that ID actually helps.
You get:
- benchmark runs
- with-vs-without-ID comparisons
- public metrics
- proof summaries with caveats
Start here:
docs/BENCH.md
Input:
- `profiles/<owner>/profile.core.md`
- `profiles/<owner>/profile.extended.md`
- repo context for the actual task
Flow:
- run pre-task hook or hand the agent the core profile + handshake.
- agent summarizes understanding, constraints, and uncertainty.
- agent executes coding work under explicit style and safety rules.
- after the session, changelog and profile updates are recorded if needed.
Output:
- faster alignment on review style, verbosity, safety, and tooling assumptions
- fewer corrective turns than repeating the same guidance manually in each repo
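The pre-task handshake step in the flow above can be sketched as a small helper. This is a minimal illustration, not part of the ID protocol: the function name and the handshake fields are assumptions, and a real flow would read `profiles/<owner>/profile.core.md` from disk.

```python
# Sketch of a pre-task handshake builder. The helper name and the
# handshake fields are illustrative assumptions, not part of the ID spec.

def build_handshake(core_profile_md: str) -> dict:
    """Summarize a core profile into a handshake the agent can confirm."""
    # Collect markdown section headings as a cheap "what I read" summary.
    sections = [
        line.lstrip("#").strip()
        for line in core_profile_md.splitlines()
        if line.startswith("#")
    ]
    return {
        "sections_seen": sections,
        "constraints_acknowledged": "constraints" in (s.lower() for s in sections),
        "open_questions": [],  # agent fills in uncertainty before executing
    }

profile = """# Style
Prefer concise diffs.
# Constraints
Never force-push shared branches.
"""
handshake = build_handshake(profile)
print(handshake["sections_seen"])  # ['Style', 'Constraints']
```

The point is only that the agent states what it read and what it is unsure about before doing work, so misalignment surfaces at the boundary instead of mid-task.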
Input:
- core profile for tone, critique style, and hard constraints
- extended profile for taste, recurrent misalignments, and known-good phrasing
- draft text under review
Flow:
- editor model reads the profile-backed handshake.
- critique is generated in the preferred format and tone.
- mismatch notes are captured if the editor over-corrects voice or pacing.
Output:
- more stable editorial voice across sessions and tools
- lower risk of generic “AI rewrite” drift
Input:
- core profile with communication preferences and risk constraints
- extended profile with domain heuristics and decision rules
- source material for the market question
Flow:
- analyst model uses the profile to choose structure, brevity, and evidence style.
- benchmarkable outputs can be compared across tools on the same task set.
- results are scored for style fit, constraint adherence, and usefulness.
Output:
- comparable outputs across models instead of one-off subjective impressions
- easier onboarding for a new model without rebuilding context from scratch
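The scoring step above can be sketched as a simple rubric aggregator. The criteria names and weights here are assumptions for illustration only; the actual rubric is defined in `docs/BENCHMARK.md`.

```python
# Illustrative scorer for with-vs-without-ID comparisons. The criteria
# and weights are assumptions; the real rubric lives in docs/BENCHMARK.md.

def score_run(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion ratings, each in [0.0, 1.0]."""
    total = sum(weights.values())
    return sum(ratings[k] * w for k, w in weights.items()) / total

weights = {"style_fit": 0.4, "constraint_adherence": 0.4, "usefulness": 0.2}
with_id = score_run(
    {"style_fit": 0.9, "constraint_adherence": 0.95, "usefulness": 0.8}, weights
)
without_id = score_run(
    {"style_fit": 0.5, "constraint_adherence": 0.6, "usefulness": 0.8}, weights
)
print(round(with_id - without_id, 3))  # positive delta favors the ID-backed run
```

Reducing each run to one weighted number is what makes outputs comparable across tools on the same task set instead of one-off impressions.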
Input:
- markdown source profile
- generated `profiles/<owner>/interop.v1.json`
- redaction policy when sharing externally
Flow:
- export markdown source into interop artifact.
- validate the artifact against schema and repo rules.
- hand the redacted or full package to another tool, wrapper, or automation path.
Output:
- portable context with explicit loss boundaries
- less dependence on one chat product's internal memory model
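The export step above can be sketched as follows. The field names are illustrative assumptions, not the actual interop v1 schema; the real contract is documented in `docs/INTEROP_V1.md` and checked by `scripts/validate_interop_v1.py`.

```python
import json

# Illustrative interop-style export with explicit loss boundaries.
# Field names are assumptions for the sketch, not the real interop.v1 schema.

def export_interop(profile_sections: dict[str, str], redact: set[str]) -> str:
    artifact = {
        "version": "v1",
        # Only non-redacted sections travel with the artifact.
        "sections": {k: v for k, v in profile_sections.items() if k not in redact},
        # Make the loss explicit: the recipient knows what was withheld.
        "loss_boundaries": sorted(redact),
    }
    return json.dumps(artifact, indent=2)

sections = {"style": "concise", "health": "private detail"}
out = json.loads(export_interop(sections, redact={"health"}))
print(out["loss_boundaries"])  # ['health']
```

Recording what was removed, not just removing it, is what makes the loss boundary auditable on the receiving side.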
.
├── benchmarks/
│ ├── tasks/
│ └── runs/
├── data/
│ ├── raw/
│ ├── normalized/
│ └── processed/
├── docs/
│ ├── PROTOCOL.md
│ ├── OPERATIONS.md
│ ├── INGEST_SOURCES.md
│ ├── PRIVACY.md
│ ├── VALIDATION.md
│ ├── INTEGRATIONS.md
│ ├── BENCHMARK.md
│ ├── INTEROP_V1.md
│ ├── HARDENING.md
│ └── ROADMAP.md
├── integrations/
│ ├── agentsmd/
│ ├── lab/
│ └── set/
├── lab/
│ └── experiments/
├── profiles/
├── schemas/
├── scripts/
├── templates/
└── README.md
- Phase 0 (bootstrap): done
- Phase 1 (ingest + extractor MVP): done
- Phase 2 (privacy/redaction): done
- Phase 3 (validation automation): done
- Phase 4 (integrations): done
- Phase 5 (benchmark + interop): done
- Phase 6 (hardening): done
- Phase 7 (expansion): done
Today this repository functions as:
- a protocol/spec reference
- a validated tooling reference
- a benchmark/evidence reference
- a lightweight onboarding entrypoint
- an installable lightweight CLI surface
It is no longer only an internal profile format or documentation experiment.
- repo: markoblogo/lab.abvx
- landing: lab.abvx.xyz
- current relationship:
  - `lab.abvx` is the broader experiment/catalog surface
  - `ID` sits in that ecosystem as a protocol/reference implementation for portable human-AI context
  - `lab.abvx.xyz` should be treated as an adjacent discovery or catalog surface, not the canonical protocol source of truth
- repo: markoblogo/AGENTS.md_generator
- landing: agentsmd.abvx.xyz
- current relationship:
  - `AGENTS.md Generator` is companion tooling for generating and maintaining agent-facing repo instructions
  - `ID` is the person/tool interaction protocol layer
  - they complement each other: `ID` defines portable human-AI context; `AGENTS.md Generator` helps produce repo-scoped agent guidance
  - `agentsmd.abvx.xyz` is the landing/product surface for that adjacent toolchain, not the `ID` protocol home
- repo: markoblogo/SET
- current relationship:
  - `SET` is the orchestration/execution layer
  - `ID` is now a real repo-local hook and protocol companion inside that orchestration path
  - practical boundary: `ID` answers "what context should follow the human across tools?"; `SET` answers "how should repo workflows execute those tools and hooks?"
If you need:
- protocol and portable context: start with `ID`
- repo-scoped agent instruction generation: use `AGENTS.md Generator`
- orchestration and workflow execution: use `SET`
- protocol/spec surface: `spec/`
- versioning and conformance: `docs/VERSIONING.md`, `spec/CONFORMANCE.md`
- releases and install: `docs/RELEASES.md`
- broader experiment catalog / ecosystem discovery: use `lab.abvx`
- orchestration/execution workflows: use `SET`
- current protocol docs live under `docs/`
- versioned standard surface lives under `spec/`
- current version index: `spec/v0.2/README.md`
- conformance model: `spec/CONFORMANCE.md`
- change history: `spec/CHANGELOG.md`
- proposal process: `spec/RFC/README.md`
- versioning semantics: `docs/VERSIONING.md`
- compatibility matrix: `docs/COMPATIBILITY.md`
- observed behavior notes: `docs/OBSERVED_BEHAVIOR.md`
- observed behavior evidence: `evidence/observed-behavior/*.json`
- evidence maintenance policy: `docs/EVIDENCE_POLICY.md`
- compact target mapping: `docs/CONTEXT_JSON_MAPPING.md`
Recommended onboarding path:
- bootstrap a starter set with `python3 scripts/bootstrap_owner.py --owner-id <owner-id>` or `make bootstrap-owner OWNER=<owner-id>`
- start with `templates/profile.minimal.md`
- then promote stable guidance into `profiles/<owner>/profile.core.md`
- add `profile.extended.md` only after repeated workflows and misalignments are clear
Guide:
- `docs/MINIMAL_PROFILE.md`
- `docs/QUICKSTART.md`
- put exports into `data/raw/<source>/`
- normalize into `data/normalized/<source>/`
- run: `python3 scripts/extract_profile.py --owner-id <owner-id>`
- policy: `docs/PRIVACY.md`
- machine-readable policy: `docs/PRIVACY_POLICY_V1.md`
- threat model: `docs/THREAT_MODEL.md`
- validate policy: `python3 scripts/validate_privacy_policy.py --owner-id <owner-id>`
- run: `python3 scripts/redact_for_sharing.py`
- review: `data/processed/redaction-report.json`
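A minimal redaction pass might look like the sketch below. The pattern set and the report shape are assumptions for illustration; the real implementation is `scripts/redact_for_sharing.py`, which writes `data/processed/redaction-report.json`.

```python
import re

# Illustrative redaction pass. The single email rule and the report
# fields are assumptions, not the behavior of scripts/redact_for_sharing.py.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> tuple[str, dict]:
    """Replace email addresses and report how many matches were redacted."""
    redacted, count = EMAIL.subn("[REDACTED_EMAIL]", text)
    report = {"rule": "email", "matches": count}
    return redacted, report

clean, report = redact("Contact me at alice@example.com for details.")
print(clean)              # Contact me at [REDACTED_EMAIL] for details.
print(report["matches"])  # 1
```

Emitting a report alongside the redacted text is what makes the pass reviewable before anything is shared.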
- validate: `python3 scripts/validate_profile.py --owner-id <owner-id>`
- publish guard: `python3 scripts/check_publish_guard.py --all-tracked`
- post-session entry: `python3 scripts/session_update.py --owner-id <owner-id> --session-context "..." --sections-used "..." --changes-made "..."`
- pre-task: `scripts/run_integration_hook.sh pre_task --owner-id <owner-id> --target agentsmd`
- post-task: `scripts/run_integration_hook.sh post_task --owner-id <owner-id> --session-context "..." --sections-used "..." --changes-made "..."`
- weekly review: `scripts/run_integration_hook.sh weekly_review --owner-id <owner-id>`
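The hook boundary can be sketched as a small dispatcher. The registry shape and return values below are assumptions for illustration; the real entrypoint is `scripts/run_integration_hook.sh`, and only `pre_task` and `weekly_review` are currently CI-safe.

```python
# Illustrative hook dispatcher mirroring the explicit hook boundary.
# Registry shape and messages are assumptions, not the real
# scripts/run_integration_hook.sh implementation. Only the two
# CI-safe hooks are registered in this sketch.

HOOKS: dict[str, callable] = {}

def hook(name):
    """Decorator registering a function under an explicit hook name."""
    def register(fn):
        HOOKS[name] = fn
        return fn
    return register

@hook("pre_task")
def pre_task(owner_id: str) -> str:
    return f"validated context boundaries for {owner_id}"

@hook("weekly_review")
def weekly_review(owner_id: str) -> str:
    return f"queued weekly review for {owner_id}"

def run_hook(name: str, owner_id: str) -> str:
    # Unknown hooks fail loudly instead of silently doing nothing.
    if name not in HOOKS:
        raise ValueError(f"unknown hook: {name}")
    return HOOKS[name](owner_id)

print(run_hook("pre_task", "owner-1"))  # validated context boundaries for owner-1
```

Named hooks with a closed registry are what let an orchestrator like `SET` call explicit boundaries instead of relying on hidden chat state.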
- benchmark guide: `docs/BENCHMARK.md`
- evaluator protocol: `docs/EVALUATOR_PROTOCOL.md`
- public utility positioning: `docs/WHY_ID.md`
- golden examples: `docs/EXAMPLES.md`
- proof summary: `docs/PROOF.md`
- run aggregation: `python3 scripts/benchmark_report.py --run-id <run-id>`
- trend report across runs: `python3 scripts/benchmark_trend_report.py`
- public metrics report: `python3 scripts/benchmark_public_report.py`
- initialize benchmark run: `python3 scripts/benchmark_init_run.py --run-id <run-id> --tool <tool> --owner-id <owner-id> --profile-version <version>`
- validate benchmark run: `python3 scripts/benchmark_validate_run.py --run-id <run-id>`
- interop v1 guide: `docs/INTEROP_V1.md`
- compatibility guide: `docs/COMPATIBILITY.md`
- compact export contract: `docs/CONTEXT_JSON_MAPPING.md`
- public metrics guide: `docs/MEASUREMENT.md`
- threat model: `docs/THREAT_MODEL.md`
- compact exporter: `python3 scripts/export_context_compact.py --owner-id <owner-id>`
- compact validator: `python3 scripts/validate_context_compact.py --owner-id <owner-id>`
- compact import draft: `python3 scripts/import_context_compact.py --owner-id <owner-id>`
- export interop json: `python3 scripts/export_interop_v1.py --owner-id <owner-id>`
- interop artifact policy: `profiles/<owner>/interop.v1.json` is versioned and must be regenerated after profile changes
- validate interop json: `python3 scripts/validate_interop_v1.py --owner-id <owner-id>`
- MCP import draft: `python3 scripts/import_mcp_resource.py --owner-id <owner-id>`
- shortcut commands: `make validate`, `make bootstrap-owner OWNER=<owner-id>`, `make interop`, `make compact`, `make mcp`, `make privacy-policy`, `make observed-behavior`, `make metrics`, `make trend`
- hardening guide: `docs/HARDENING.md`
- CI workflow: `.github/workflows/ci.yml`
- baseline example: `benchmarks/runs/baseline-2026-03-31-codex/`
