Releases: awesome-pro/guardloop
v0.4.2 — docs refresh
Documentation-only release — no code changes. The published package is functionally identical to 0.4.1; this release exists to refresh the project description shown on PyPI.
Changed
- Reframed the v0.5 roadmap entry (`README.md`, `docs/roadmap.md`, `docs/project-overview.md`) to lead with its engineering substance — an OpenTelemetry metrics layer (counters/histograms for cost, tokens, tool calls, and verifier attempts), per-attempt `agent_attempt` span nesting, and a one-command Jaeger + Phoenix `docker-compose` stack — rather than the trace screenshots / write-up that fall out of it.
- Added `docs/media/` (with a checklist of wanted assets) and a demo screenshot referenced from the README's "Try the No-Key Demo" section.
Full changelog: https://github.com/awesome-pro/guardloop/blob/main/CHANGELOG.md
v0.4.1 — OpenAI Agents SDK Adapter
OpenAI Agents SDK adapter
`guardloop.adapters.openai_agents.guarded_runner(agent)` returns a GuardLoop-compatible agent callable you pass to `GuardLoop.run(...)`. A `GuardLoopRunHooks` (a subclass of the SDK's `RunHooks`) bound to the `RunContext` runs the pre-flight budget check before each LLM call (`on_llm_start`), records actual usage afterward (`on_llm_end`), and routes tool calls through the per-tool circuit breaker and the tool-call budget (`on_tool_start` / `on_tool_end`) — so cost / token / time caps, breakers, and `llm_call` / `tool_call` OpenTelemetry spans all apply inside a `Runner.run(...)`. The verifier retry loop wraps the whole run, with verifier feedback injected into a copy of the run input (`feedback_to_input` to customise; `output_from_result` to customise how the answer is extracted). `guarded_runner(..., reserved_output_tokens=N)` sets the pre-flight output-token reservation (default 1024), since the SDK's chat models often leave `model_settings.max_tokens` unset.
Because the SDK wraps exceptions raised from its tool lifecycle hooks in `agents.exceptions.UserError`, `guarded_runner` unwraps a `GuardLoopError` from the exception chain before re-raising, so a tripped guard still becomes a clean `RunResult` with the right `terminated_reason`.
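The unwrapping step can be sketched as a walk of Python's exception chain. The classes below are local stand-ins for illustration, not imports from guardloop or the SDK:

```python
# Sketch of recovering a GuardLoopError from an exception chain before
# re-raising. GuardLoopError and UserError are local stand-ins here,
# not imports from guardloop or the OpenAI Agents SDK.

class GuardLoopError(Exception):
    pass

class UserError(Exception):
    pass

def unwrap_guardloop_error(exc):
    """Walk __cause__/__context__ looking for a GuardLoopError."""
    seen = set()
    current = exc
    while current is not None and id(current) not in seen:
        seen.add(id(current))
        if isinstance(current, GuardLoopError):
            return current
        current = current.__cause__ or current.__context__
    return None

# A tripped guard raised inside a tool hook, wrapped by the SDK:
try:
    try:
        raise GuardLoopError("budget exceeded")
    except GuardLoopError as inner:
        raise UserError("hook failed") from inner
except UserError as wrapped:
    found = unwrap_guardloop_error(wrapped)

print(type(found).__name__, found)  # GuardLoopError budget exceeded
```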
Behind the new `openai-agents` optional extra:

```
pip install "guardloop[openai-agents]"
```

```python
from agents import Agent
from guardloop import GuardLoop, BudgetConfig
from guardloop.adapters.openai_agents import guarded_runner

runtime = GuardLoop(budget=BudgetConfig(cost_limit_usd="0.10", token_limit=10_000, tool_call_limit=20))
agent = guarded_runner(Agent(name="researcher", model="gpt-5.2", instructions="..."))
result = await runtime.run(agent, "research agent runtime safety")
print(result.success, result.cost_usd, result.tokens_used, result.output)
```

No-key demo: `uv run python examples/openai_agents_guarded.py`.
Known limitations
- The OpenAI Agents SDK has no `on_tool_error` lifecycle hook and, by default, turns a tool exception into an error string fed back to the model (so `on_tool_end` fires with that string). The adapter therefore records tool attempts and successes but not failures — a flaky SDK-managed tool will not open the breaker on its own. The breaker's blocking behaviour (an already-open breaker rejects the next SDK tool call) does apply; route a tool body through `ctx.call_tool(...)` for full breaker semantics.
- Streaming (`Runner.run_streamed`) is out of scope for this release (usage is still accounted via `on_llm_end`).
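To make the breaker semantics concrete, here is a minimal stand-alone sketch (a toy, not guardloop's implementation): the breaker counts only the failures it observes, and once open it rejects the next call without invoking the tool at all.

```python
# Toy breaker illustrating "an already-open breaker rejects the next call".
# This is a stand-alone illustration, not guardloop's circuit breaker.

class ToyBreaker:
    def __init__(self, failure_threshold: int = 2):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def is_open(self) -> bool:
        return self.failures >= self.failure_threshold

    def call(self, fn, *args):
        if self.is_open:
            # Blocking behaviour: the tool body is never invoked.
            raise RuntimeError("circuit open: call rejected")
        try:
            return fn(*args)
        except Exception:
            self.failures += 1  # counted only when the breaker sees the failure
            raise

def flaky_tool(_):
    raise ValueError("boom")

breaker = ToyBreaker(failure_threshold=2)
for _ in range(2):  # two observed failures open the breaker
    try:
        breaker.call(flaky_tool, None)
    except ValueError:
        pass

rejected = False
try:
    breaker.call(flaky_tool, None)  # rejected before the tool runs
except RuntimeError:
    rejected = True
print(breaker.is_open, rejected)  # True True
```

A failure the breaker never sees (the SDK swallowing a tool exception) never increments the count, which is exactly the limitation described above.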
Other
`guardloop.adapters.openai_agents` exports `guarded_runner` and `GuardLoopRunHooks`. Adapters are intentionally not re-exported from the top-level `guardloop` package, so `import guardloop` stays dependency-light. No breaking changes; the core install (`pip install guardloop`) is unchanged.
Full changelog: https://github.com/awesome-pro/guardloop/blob/main/CHANGELOG.md
v0.4.0 — LangGraph Adapter
[0.4.0] - 2026-05-11
Added
- LangGraph adapter (`guardloop.adapters.langgraph`). `guarded_graph(graph)` returns a GuardLoop-compatible agent callable you pass to `GuardLoop.run(...)`; a `GuardLoopCallbackHandler` (a synchronous LangChain `BaseCallbackHandler`) bound to the `RunContext` runs the pre-flight budget check before each LLM call, records actual usage afterward, and routes tool calls through the per-tool circuit breaker and the tool-call budget — so cost / token / time caps, breakers, and `llm_call` / `tool_call` OpenTelemetry spans all apply inside a LangGraph run. The verifier retry loop wraps the whole graph run, with verifier feedback injected into a copy of the input state (`feedback_to_state` to customise). `guarded_graph(..., reserved_output_tokens=N)` sets the output-token reservation for the pre-flight check (default `1024`), since LangChain chat models often omit `max_tokens`. Behind the new `langgraph` optional extra (`pip install "guardloop[langgraph]"`).
- `guardloop.adapters` subpackage; `guardloop.adapters.langgraph` exports `guarded_graph` and `GuardLoopCallbackHandler`. (Adapters are intentionally not re-exported from the top-level `guardloop` package, so `import guardloop` stays dependency-light.)
- `RunContext.circuit_breakers` — public read-only access to the per-tool circuit breaker registry (used by adapters; also handy for inspecting breaker state).
- No-key demo `examples/langgraph_guarded.py`.
- `.github/workflows/ci.yml` — runs pytest + ruff + pyright on push / pull request across Python 3.11–3.13.
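The pre-flight reservation amounts to checking that the token budget still covers the prompt plus a reserved output allowance. A minimal sketch, with illustrative names rather than guardloop's internal API:

```python
# Minimal sketch of a pre-flight token check with an output reservation.
# Function and parameter names are illustrative, not guardloop's API.

def preflight_allows_call(
    tokens_used: int,
    prompt_tokens: int,
    token_limit: int,
    reserved_output_tokens: int = 1024,  # fallback when max_tokens is unset
) -> bool:
    """True if the call still fits the budget, assuming the model may
    emit up to reserved_output_tokens of output."""
    projected = tokens_used + prompt_tokens + reserved_output_tokens
    return projected <= token_limit

print(preflight_allows_call(tokens_used=8_000, prompt_tokens=500, token_limit=10_000))  # True
print(preflight_allows_call(tokens_used=9_000, prompt_tokens=500, token_limit=10_000))  # False
```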
Changed
- `pyproject.toml`: new `langgraph` optional-dependency extra; `langgraph` / `langchain-core` added to the dev dependency group; `langgraph` keyword.
v0.3.0 — Verifier Retry Loop
[0.3.0] - 2026-05-10
Added
- Verifier retry loop (Pillar 3 / self-healing). After an agent finishes, GuardLoop can run a chain of verifiers against the output; on rejection it appends the verifier's feedback to `RunContext.retry_feedback` and re-invokes the agent, bounded by `VerifierConfig.max_retries`. All attempts share the same budget (cost / tokens / time / tool calls) and the run's single `asyncio.timeout`, so a verifier loop cannot bypass any guardrail.
- New module `guardloop.verifier` with public exports: `Verifier` (callable type alias — sync or async, returning `VerifierResult`, `bool`, or `None`), `VerifierResult`, `VerifierContext`, `VerifierConfig`, and `VerifierChain`.
- Built-in rule-based verifier factories: `non_empty()`, `matches_regex(...)`, `is_json_object(required_keys=...)`.
- `GuardLoop(verifiers=[...], verifier_config=VerifierConfig(...))` constructor parameters and `GuardLoop.add_verifier(fn)`.
- `RunResult` fields: `verification_passed: bool | None`, `verification_attempts: int`, `verification_feedback: list[str]`.
- `RunContext.retry_feedback: list[str]` and `RunContext.attempt: int`.
- New exceptions `VerificationFailed` (`terminated_reason="verification_failed"`, raised only in strict mode) and `VerifierExecutionError` (`terminated_reason="verifier_error"`, raised when a verifier itself throws).
- OpenTelemetry: `verifier_run <name>` child spans, `agent_run` attributes `guardloop.verification.passed` / `guardloop.verification.attempts`, and `guardloop.verification.failed` / `.retrying` / `.exhausted` span events.
- No-key demo `examples/verifier_retry_loop.py`.
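The retry loop can be sketched in a few lines. This toy uses a simplified convention (a verifier returns `None` to accept, or a feedback string to reject) and is an illustration, not guardloop's implementation:

```python
# Toy sketch of the verifier retry loop: run the agent, verify, and on
# rejection feed the verifier's feedback into the next attempt, bounded
# by max_retries. Simplified for illustration; not guardloop's API.

def run_with_verifiers(agent, prompt, verifiers, max_retries=2):
    retry_feedback: list[str] = []
    attempts = 0
    output = None
    for _ in range(max_retries + 1):
        attempts += 1
        output = agent(prompt, retry_feedback)
        feedback = None
        for verify in verifiers:
            feedback = verify(output)
            if feedback is not None:  # rejection with feedback
                break
        if feedback is None:  # every verifier accepted
            return output, True, attempts, retry_feedback
        retry_feedback.append(feedback)  # visible to the next attempt
    return output, False, attempts, retry_feedback

def non_empty(output):
    return None if output.strip() else "output was empty"

def toy_agent(prompt, feedback):
    # Fails on the first attempt, improves once it sees feedback.
    return "a real answer" if feedback else ""

output, passed, attempts, feedback = run_with_verifiers(toy_agent, "q", [non_empty])
print(passed, attempts, feedback)  # True 2 ['output was empty']
```

Because every attempt runs inside the same loop, a real implementation can charge all attempts against one shared budget, which is the guarantee the first bullet above describes.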
Changed
- When verification ultimately fails (retries exhausted), `RunResult.success` is `False` with `terminated_reason="verification_failed"`, but `output` still holds the last attempt's text — consistent with how budget/timeout stops report. Set `VerifierConfig(raise_on_failure=True)` for strict behavior (surfaces a `VerificationFailed` with `output=None` and details in `metadata`).
- `pyproject.toml`: `Changelog` URL now points at this file.
GuardLoop v0.2.0
Highlights
- Renames the project and PyPI distribution to GuardLoop / `guardloop`.
- Keeps `AgentRuntime` and `AgentRuntimeError` as compatibility aliases.
- Publishes importable package `guardloop` with runtime budget caps, circuit breakers, and OpenTelemetry traces.
- Updates PyPI Trusted Publishing configuration for `awesome-pro/guardloop`.
Validation
- `uv run pytest`
- `uv run pytest --cov=guardloop`
- `uv run ruff check .`
- `uv run ruff format --check .`
- `uv run pyright`
- `uv build`
- `uvx twine check dist/guardloop-0.2.0.tar.gz dist/guardloop-0.2.0-py3-none-any.whl`