Hi @Oaklight —
ToolRegistry caught my attention immediately — "protocol-agnostic tool management" is almost exactly the phrase I keep using internally when describing what OM World's Tool Registry primitive needs to be. The arXiv paper and the three-package ecosystem (core, server, hub) suggest you've thought carefully about the layering.
I'm drafting a protocol spec for OM World (https://omworld.one, https://github.com/omworldprotocol/om-world) — a decentralized intent economy where AI agents execute human intentions under cryptographic mandate. One of four primitives is a Tool Registry: a cross-runtime, accountable directory where tool capabilities are registered, versioned, discoverable, and staked.
Your work is directly relevant, and I'd value your critique on three design questions:
Q1 — Registration-time vs. pull-on-demand capability descriptors.
ToolRegistry resolves schemas at Python import time (registration-time). For a cross-runtime, cross-cloud protocol registry, should the canonical capability descriptor be stored in the registry at registration, or pulled on-demand from the tool endpoint? Storing is faster for discovery but risks staleness on tool updates. Pulling is always fresh but creates an availability dependency. Is there a third option you've considered?
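To make Q1 concrete, here's roughly the hybrid third option I keep circling: store the descriptor at registration alongside a content hash, serve the stored copy for discovery, and revalidate lazily on a TTL, falling back to the stored copy if the endpoint is unreachable. Everything below is an illustrative sketch, not ToolRegistry's or OM World's actual API:

```python
# Hybrid descriptor cache: stored at registration, lazily revalidated.
# All names here are hypothetical, for discussion only.
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass
class StoredDescriptor:
    tool_id: str
    descriptor: dict          # capability schema captured at registration
    content_hash: str         # hash of the canonical descriptor bytes
    fetched_at: float
    ttl_seconds: float = 3600.0

    def is_stale(self) -> bool:
        return time.time() - self.fetched_at > self.ttl_seconds


def descriptor_hash(descriptor: dict) -> str:
    # Canonical JSON (sorted keys) so semantically equal descriptors hash equally
    canonical = json.dumps(descriptor, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


class HybridRegistry:
    """Serve stored descriptors for discovery; revalidate on TTL expiry."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn            # pulls a live descriptor from the endpoint
        self._store: dict[str, StoredDescriptor] = {}

    def register(self, tool_id: str, descriptor: dict) -> None:
        self._store[tool_id] = StoredDescriptor(
            tool_id=tool_id,
            descriptor=descriptor,
            content_hash=descriptor_hash(descriptor),
            fetched_at=time.time(),
        )

    def lookup(self, tool_id: str) -> dict:
        entry = self._store[tool_id]
        if entry.is_stale():
            try:
                fresh = self._fetch(tool_id)
            except ConnectionError:
                return entry.descriptor        # endpoint down: degrade to stored copy
            if descriptor_hash(fresh) != entry.content_hash:
                self.register(tool_id, fresh)  # descriptor changed upstream
                return fresh
            entry.fetched_at = time.time()     # unchanged: just refresh the TTL
        return entry.descriptor
```

This trades bounded staleness (one TTL window) for availability, which is why I'm curious whether you landed somewhere similar or rejected this shape.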
Q2 — Canonical capability representation across vocabularies.
ToolRegistry bridges OpenAPI, MCP, and LangChain — each with its own capability vocabulary. When the same tool registers under multiple runtimes, do you have a canonical intermediate representation, or does the registry just hold multiple dialect descriptors per tool? For a protocol registry that needs to be runtime-agnostic, which layer should own the normalization?
Q3 — Interface versioning and backward compatibility.
When a tool updates its interface (new required parameter, changed output schema), agents holding mandates that reference that tool_id may break silently. What's the right protocol primitive — semver on tool_id, a new tool_id per breaking change, or something else? How does ToolRegistry currently handle this?
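One candidate answer to Q3 I'd like your read on: semver-qualified tool_ids where a mandate pins a compatible range and the registry refuses to silently serve across a major bump. The `name@major.minor.patch` id format below is an assumption for discussion:

```python
# Semver-qualified tool_ids (hypothetical format: "name@1.2.0").
# A mandate declares a version; the registry only serves same-major upgrades.


def parse_versioned_id(tool_id: str) -> tuple[str, tuple[int, int, int]]:
    name, _, ver = tool_id.partition("@")
    major, minor, patch = (int(p) for p in ver.split("."))
    return name, (major, minor, patch)


def is_compatible(declared: str, available: str) -> bool:
    """Caret-style rule: same tool, same major version, available >= declared."""
    d_name, d_ver = parse_versioned_id(declared)
    a_name, a_ver = parse_versioned_id(available)
    return d_name == a_name and d_ver[0] == a_ver[0] and a_ver >= d_ver
```

Under this rule a breaking change forces a major bump, which makes the break loud at resolution time instead of silent at call time; the cost is that tool authors must classify their own changes honestly, which is where staking might come in.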
Happy to keep the discussion here or move it to our spec issue: omworldprotocol/om-world#6
Not a token launch, not fundraising. Open protocol in Genesis phase, looking for sharp critique from people who've actually shipped in this space.
One Mind, One World.