Reduce your OpenClaw agent costs. Free real-time LLM cost tracking + dashboard. Installs in 60 seconds.
Updated Mar 15, 2026 - TypeScript
Spring Boot starter that enforces LLM cost budgets, fallback decisions, and abuse protection via a single @LLMGuarded annotation
Open FinOps and governance data plane for FOCUS 1.2 cost normalization across cloud, on-prem, GPU, and AI/LLM workloads.
Claude Code plugin to track token usage and costs across sessions
Track OpenClaw LLM calls, show real costs, and cut agent spend with a local dashboard and no data leaving your machine
Token-Light, Code-Intensive (TLCI) — A design philosophy for AI agent automation. Use AI only where it's needed. 80-97% cost reduction.
Quote-as-ceiling billing for AI agent APIs that charge by the step - pre-estimate cost, enforce it as a hard ceiling, absorb overruns. Three CI-enforced invariants, ASC 606 compliant, production-extracted.
LLM cost, latency, and savings dashboard. Next.js + Python + PostgreSQL. Track costs per workflow, measure latency, and optimize AI spend.
Hard budget limits for AI coding agent sessions — per-session spend cap for Claude Code and Codex CLI
DeepClaw observability plugin for OpenClaw — real-time LLM usage, token, cache, reasoning, and cost telemetry. https://deep-claw.com
Monitor LLM token costs in real time with a terminal dashboard offering per-request tracking, budget control, and alerting without external services.
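Most of the trackers and dashboards listed above share the same core computation: multiply each request's token counts by a per-model price and accumulate the total against a budget. A minimal sketch of that pattern, where the model names and per-million-token prices are purely illustrative assumptions, not real pricing:

```python
# Assumed price table: model -> (input $ per 1M tokens, output $ per 1M tokens).
# These numbers are illustrative only; real trackers load current provider pricing.
PRICES_PER_MTOK = {
    "example-small": (0.25, 1.25),
    "example-large": (3.00, 15.00),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of a single request, from its token counts."""
    in_price, out_price = PRICES_PER_MTOK[model]
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000

# Running total across a session, as a per-session spend cap would track it.
requests = [("example-small", 1200, 300), ("example-large", 800, 400)]
session_spend = sum(estimate_cost(m, p, c) for m, p, c in requests)
print(round(session_spend, 6))
```

A hard budget limit (as in the per-session spend-cap tools above) is then just a comparison of this running total against a configured ceiling before each new request is dispatched.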