Understand how your brand shows up in LLM responses.
Updated Mar 25, 2026 - TypeScript
Langfuse MCP server with built-in analytics. 34 tools covering traces, observations, sessions, scores, prompts, and datasets, plus accuracy metrics, failure detection, token percentiles, cost breakdowns, latency analysis, and context breach scanning. Works with Claude Code, Cursor, and Codex.
Python SDK for an AI agent observability, monitoring, and evaluation framework. Features include AI agent, LLM, and tool tracing; debugging of multi-agent systems; self-hosted dashboards; and advanced analytics with timeline and execution-graph views.
Local, single-file HTML dashboard of your Claude Code usage (token totals, MCP activity, web research, and AI-clustered themes), generated by one uv-runnable Python script.
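For context, a "uv-runnable" single-file script typically declares its dependencies inline (PEP 723 script metadata) so `uv run` can execute it directly. A minimal sketch of the idea, with hypothetical function and field names not taken from the repository above:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
# Hypothetical sketch: render a tiny self-contained HTML usage dashboard
# from aggregated token counts. Runnable as a single file via:
#   uv run dashboard.py
import html


def render_dashboard(token_totals: dict) -> str:
    """Return a self-contained HTML page summarizing token usage per model."""
    rows = "\n".join(
        f"<tr><td>{html.escape(model)}</td><td>{count}</td></tr>"
        for model, count in sorted(token_totals.items())
    )
    return (
        "<!doctype html><html><body>"
        "<h1>Claude Code usage</h1>"
        f"<table><tr><th>Model</th><th>Tokens</th></tr>{rows}</table>"
        "</body></html>"
    )


if __name__ == "__main__":
    page = render_dashboard({"claude-sonnet": 120000, "claude-haiku": 45000})
    print(page)
```

The `# /// script` block at the top is what lets `uv` resolve dependencies without a project file; an empty `dependencies` list keeps this example stdlib-only.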
Survey intelligence platform that surveyed 10% of the University of Alabama's permit holders and produced the first data-led parking case the SGA brought to the administration. Built with ETL pipelines, LLM thematic analysis, semantic RAG, and a Next.js dashboard.