fix: enable LLM instrumentation for OpenTelemetry tracing #694
Conversation
- Remove the broken traceloop initialize() call that had tracingEnabled: false, which was preventing all LLM instrumentation from being registered
- Add individual traceloop instrumentation packages (OpenAI, Anthropic, Bedrock, Cohere, VertexAI) directly to the NodeSDK instrumentations array
- Upgrade traceloop packages from 0.21.x to ^0.22.x
- Update _tokens.ts to use official OpenTelemetry GenAI semantic conventions (ATTR_GEN_AI_*) instead of the deprecated traceloop SpanAttributes.LLM_*
- Create an llm-instrumentations.ts module for cleaner instrumentation setup

This fixes the issue where OpenAI and other LLM provider calls were not being traced in the telemetry data.
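In outline, the new llm-instrumentations.ts registers each provider's instrumentation directly. The following is a rough sketch based on the packages and options named in this PR and its review; the exact class names for the Anthropic and VertexAI packages and the precise option handling are assumptions, not a copy of the real module:

import { OpenAIInstrumentation } from '@traceloop/instrumentation-openai';
import { AnthropicInstrumentation } from '@traceloop/instrumentation-anthropic';
import { BedrockInstrumentation } from '@traceloop/instrumentation-bedrock';
import { CohereInstrumentation } from '@traceloop/instrumentation-cohere';
import { VertexAIInstrumentation } from '@traceloop/instrumentation-vertexai';

export function createLLMInstrumentations() {
   // Surface instrumentation failures in logs rather than swallowing them
   const exceptionLogger = (e: Error) => {
      console.warn('[Traceloop] Instrumentation exception:', e.message);
   };
   return [
      // enrichTokens is specific to the OpenAI instrumentation (see review below)
      new OpenAIInstrumentation({ exceptionLogger, enrichTokens: true }),
      new AnthropicInstrumentation({ exceptionLogger }),
      new BedrockInstrumentation({ exceptionLogger }),
      new CohereInstrumentation({ exceptionLogger }),
      new VertexAIInstrumentation({ exceptionLogger }),
   ];
}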
📝 Walkthrough

Upgraded Traceloop dependencies and added instrumentation packages for multiple LLM providers. Updated semantic convention attribute references in token handling. Introduced a centralized LLM instrumentation factory function and integrated it into OpenTelemetry initialization, replacing the previous initialization flow.
🚥 Pre-merge checks: ✅ 1 passed
📦 Canary Packages Published

version: 0.1.24-92420bc

Add to your package.json:

{
"dependencies": {
"@agentuity/cli": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-cli-0.1.24-92420bc.tgz",
"@agentuity/runtime": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-runtime-0.1.24-92420bc.tgz",
"@agentuity/evals": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-evals-0.1.24-92420bc.tgz",
"@agentuity/server": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-server-0.1.24-92420bc.tgz",
"@agentuity/auth": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-auth-0.1.24-92420bc.tgz",
"@agentuity/core": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-core-0.1.24-92420bc.tgz",
"@agentuity/opencode": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-opencode-0.1.24-92420bc.tgz",
"@agentuity/frontend": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-frontend-0.1.24-92420bc.tgz",
"@agentuity/react": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-react-0.1.24-92420bc.tgz",
"@agentuity/workbench": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-workbench-0.1.24-92420bc.tgz",
"@agentuity/schema": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-schema-0.1.24-92420bc.tgz"
}
}

Or install directly:

bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-cli-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-runtime-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-evals-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-server-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-auth-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-core-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-opencode-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-frontend-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-react-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-workbench-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-schema-0.1.24-92420bc.tgz

CLI Executables
Run Canary CLI:

agentuity canary 0.1.24-92420bc [command] [...args]
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@packages/runtime/package.json`:
- Around line 49-55: The package.json contains non-existent `@traceloop` versions
causing install failures; update the dependency versions for the listed packages
to published releases: change "@traceloop/instrumentation-openai" to "0.18.0",
"@traceloop/node-server-sdk" to "0.18.1", "@traceloop/instrumentation-bedrock"
to "0.19.0", "@traceloop/instrumentation-cohere" to "0.16.0", and verify
"@traceloop/ai-semantic-conventions" (use the published 0.18.0 or the correct
0.22.x if available) while keeping the caret policy consistent. After updating
the dependency entries in package.json, run a fresh install (delete the
lockfile and node_modules, then run bun install, since this repo uses Bun) to
regenerate the lockfile and ensure clean resolution.
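Applied to package.json, the suggested pins would look roughly like this (versions taken from the comment above; verify them against the npm registry before committing):

{
   "dependencies": {
      "@traceloop/node-server-sdk": "^0.18.1",
      "@traceloop/instrumentation-openai": "^0.18.0",
      "@traceloop/instrumentation-bedrock": "^0.19.0",
      "@traceloop/instrumentation-cohere": "^0.16.0"
   }
}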
🧹 Nitpick comments (1)
packages/runtime/src/otel/llm-instrumentations.ts (1)
11-14: Consider using a higher log level for instrumentation exceptions. The `exceptionLogger` logs at `debug` level, which may cause exceptions to be missed in production, where debug logs are typically suppressed. Consider using `console.warn` or `console.error` for instrumentation errors.
export function createLLMInstrumentations() {
   const exceptionLogger = (e: Error) => {
-      console.debug('[Traceloop] Instrumentation exception:', e.message);
+      console.warn('[Traceloop] Instrumentation exception:', e.message);
   };
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- packages/runtime/package.json
- packages/runtime/src/_tokens.ts
- packages/runtime/src/otel/llm-instrumentations.ts
- packages/runtime/src/otel/otel.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: Use Prettier formatter with tabs (width 3), single quotes, and semicolons for TypeScript files
Use TypeScript strict mode with ESNext target and bundler moduleResolution
Use `StructuredError` from `@agentuity/core` for error handling
Files:
- packages/runtime/src/otel/llm-instrumentations.ts
- packages/runtime/src/_tokens.ts
- packages/runtime/src/otel/otel.ts
packages/runtime/**/*.{ts,tsx}
📄 CodeRabbit inference engine (packages/runtime/AGENTS.md)
packages/runtime/**/*.{ts,tsx}: Every agent handler receives `AgentContext` with logger, tracer, storage (kv, vector, stream), and auth properties
Use `ctx.logger` instead of `console.log` for observability (see the sketch after the file list below)
Files:
- packages/runtime/src/otel/llm-instrumentations.ts
- packages/runtime/src/_tokens.ts
- packages/runtime/src/otel/otel.ts
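A minimal illustration of the `ctx.logger` guideline; the handler signature and import path here are assumptions for the sketch, not the package's documented API:

import type { AgentContext } from '@agentuity/runtime'; // import path assumed

export async function handler(ctx: AgentContext) {
   // The structured logger flows into the observability pipeline; console.log does not
   ctx.logger.info('agent invoked');
}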
🧠 Learnings (2)
📚 Learning: 2025-12-13T14:15:18.261Z
Learnt from: jhaynie
Repo: agentuity/sdk PR: 168
File: packages/runtime/src/session.ts:536-546
Timestamp: 2025-12-13T14:15:18.261Z
Learning: The agentuity/runtime package is Bun-only; during code reviews, do not replace Bun-native APIs (e.g., Bun.CryptoHasher, Bun.serve, and other Bun namespace APIs) with Node.js alternatives. Review changes with the assumption that runtime runs on Bun, and ensure any edits preserve Bun compatibility and do not introduce Node.js-specific fallbacks. Apply this guidance broadly to files under packages/runtime (e.g., packages/runtime/src/...); if there are conditional environment checks, document why Bun is required and avoid dereferencing Bun-only APIs in non-Bun contexts.
Applied to files:
- packages/runtime/src/otel/llm-instrumentations.ts
- packages/runtime/src/_tokens.ts
- packages/runtime/src/otel/otel.ts
📚 Learning: 2025-12-21T00:31:41.858Z
Learnt from: jhaynie
Repo: agentuity/sdk PR: 274
File: packages/cli/src/cmd/build/vite/server-bundler.ts:12-41
Timestamp: 2025-12-21T00:31:41.858Z
Learning: In Bun runtime, BuildMessage and ResolveMessage are global types and are not exported from the bun module. Do not import { BuildMessage } from 'bun' or similar; these types are available globally and should be used without import. This applies to all TypeScript files that target the Bun runtime within the repository.
Applied to files:
- packages/runtime/src/otel/llm-instrumentations.ts
- packages/runtime/src/_tokens.ts
- packages/runtime/src/otel/otel.ts
🧬 Code graph analysis (1)
packages/runtime/src/otel/otel.ts (1)
packages/runtime/src/otel/llm-instrumentations.ts (1)
`createLLMInstrumentations` (11-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Pack & Upload
- GitHub Check: Build
- GitHub Check: Cloud Deployment Tests
- GitHub Check: Queue SDK Tests
- GitHub Check: Sandbox CLI Tests
- GitHub Check: Package Installation & Usage Test
- GitHub Check: Template Integration Tests
- GitHub Check: Queue CLI Tests
- GitHub Check: Playwright E2E Smoke Test
- GitHub Check: SDK Integration Test Suite
- GitHub Check: Framework Integration Tests (TanStack & Next.js)
🔇 Additional comments (6)
packages/runtime/src/_tokens.ts (2)
5-10: LGTM! The migration to `@opentelemetry/semantic-conventions/incubating` with the `ATTR_GEN_AI_*` constants correctly replaces the deprecated traceloop `SpanAttributes.LLM_*` attributes. This aligns with official OpenTelemetry GenAI semantic conventions.
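For reference, a minimal sketch of what the migrated accumulation logic plausibly looks like, assuming token counts are read off span attributes (the helper name and shape are illustrative, not copied from _tokens.ts):

import {
   ATTR_GEN_AI_USAGE_INPUT_TOKENS,
   ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
} from '@opentelemetry/semantic-conventions/incubating';

// Accumulate per-span token usage into running totals
function accumulateTokens(
   attributes: Record<string, unknown>,
   totals: { input: number; output: number }
) {
   const input = attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS];
   const output = attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];
   if (typeof input === 'number') totals.input += input;
   if (typeof output === 'number') totals.output += output;
}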
83-100: LGTM! The attribute mapping is correct: `ATTR_GEN_AI_USAGE_INPUT_TOKENS` and `ATTR_GEN_AI_USAGE_OUTPUT_TOKENS` correctly replace the prompt/completion token attributes. The token accumulation logic is preserved.

packages/runtime/src/otel/llm-instrumentations.ts (1)
16-35: LGTM! The environment variable parsing is robust, with case-insensitive comparison and a sensible default. The `enrichTokens` option being specific to `OpenAIInstrumentation` aligns with traceloop's API, where this feature is provider-specific.
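The parsing pattern being praised is roughly the following; only the case-insensitive comparison and the default are taken from the comment, and the environment variable name is hypothetical:

// Case-insensitive boolean env flag with a default
function envFlag(name: string, defaultValue: boolean): boolean {
   const raw = process.env[name];
   if (raw === undefined) return defaultValue;
   return raw.toLowerCase() === 'true';
}

const enrichTokens = envFlag('AGENTUITY_OTEL_ENRICH_TOKENS', true); // name assumed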
packages/runtime/src/otel/otel.ts (3)

34-34: LGTM! Clean import of the new factory function for centralized LLM instrumentation management.
300-310: Good fix for the root cause. Creating the LLM instrumentations via the factory function and spreading them into the NodeSDK `instrumentations` array correctly addresses the root cause, where `traceloop.initialize()` with `tracingEnabled: false` prevented LLM instrumentation registration.
314-316: LGTM! Logging after `instrumentationSDK.start()` ensures telemetry is actually configured before logging success.
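Putting the two observations together, the fixed otel.ts flow is approximately the following sketch, with exporters, resources, and other configuration elided:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { createLLMInstrumentations } from './llm-instrumentations';

const instrumentationSDK = new NodeSDK({
   // Spread the factory output so LLM providers are instrumented alongside the rest
   instrumentations: [...createLLMInstrumentations()],
});

instrumentationSDK.start();
// Only report success after start(), when telemetry is actually configured
console.log('[otel] instrumentation started');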
Summary
Fixes the issue where OpenAI and other LLM provider calls were not being traced in telemetry data.
Root Cause
The traceloop `initialize()` call had `tracingEnabled: false`, which was intended to disable only traceloop's internal telemetry. However, in the JavaScript SDK this setting actually prevents `startTracing()` from running, which is where all LLM instrumentations (OpenAI, Anthropic, etc.) are registered.
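For concreteness, the removed call looked roughly like this (reconstructed from the description above, other options elided):

import * as traceloop from '@traceloop/node-server-sdk';

// Intended to silence traceloop's own telemetry only, but in the JS SDK this
// flag also skips startTracing(), so no LLM instrumentations were registered.
traceloop.initialize({
   tracingEnabled: false,
});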
Changes

- Removed the broken initialize() call whose `tracingEnabled: false` setting was preventing all LLM instrumentation from being registered
- Added individual traceloop instrumentation packages directly to the NodeSDK `instrumentations` array
- Upgraded traceloop packages from `0.21.x` to `^0.22.x`
- `_tokens.ts` now uses official OpenTelemetry GenAI semantic conventions (`ATTR_GEN_AI_*`) instead of the deprecated traceloop `SpanAttributes.LLM_*`

Testing
- Running with `AGENTUITY_DEBUG_OTEL_CONSOLE=true` confirms LLM spans are now captured

Summary by CodeRabbit
Chores
- Upgraded Traceloop dependencies and added instrumentation packages for multiple LLM providers

Refactor
- Centralized LLM instrumentation setup into a factory function integrated with OpenTelemetry initialization