Conversation

@jhaynie jhaynie (Member) commented Jan 22, 2026

Summary

Fixes the issue where OpenAI and other LLM provider calls were not being traced in telemetry data.

Root Cause

The traceloop initialize() call had tracingEnabled: false, which was intended to disable only traceloop's internal telemetry. However, in the JavaScript SDK, this setting actually prevents startTracing() from running, which is where all LLM instrumentations (OpenAI, Anthropic, etc.) are registered.
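
For reference, a minimal sketch of the broken setup; the appName and baseUrl values here are illustrative, not copied from the code:

import * as traceloop from '@traceloop/node-server-sdk';

traceloop.initialize({
	appName: 'agentuity-runtime', // illustrative value
	baseUrl: process.env.AGENTUITY_OTEL_URL, // illustrative value
	// Intended only to silence traceloop's own telemetry, but in the JS SDK
	// this skips startTracing() entirely, so the LLM instrumentations
	// (OpenAI, Anthropic, etc.) were never registered:
	tracingEnabled: false,
});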

Changes

  • Remove broken traceloop initialize() call - The tracingEnabled: false setting was preventing all LLM instrumentation from being registered
  • Add individual traceloop instrumentation packages - OpenAI, Anthropic, Bedrock, Cohere, VertexAI are now added directly to the NodeSDK's instrumentations array
  • Upgrade traceloop packages - From 0.21.x to ^0.22.x
  • Update semantic conventions - _tokens.ts now uses official OpenTelemetry GenAI semantic conventions (ATTR_GEN_AI_*) instead of deprecated traceloop SpanAttributes.LLM_*
  • Create llm-instrumentations.ts - New module for cleaner instrumentation setup (see the sketch below)
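
A hedged sketch of the new wiring; the class names follow the @traceloop/instrumentation-* packages added here, and the exceptionLogger and enrichTokens options come from the review notes below, but exact constructor signatures may differ:

import { OpenAIInstrumentation } from '@traceloop/instrumentation-openai';
import { AnthropicInstrumentation } from '@traceloop/instrumentation-anthropic';
import { BedrockInstrumentation } from '@traceloop/instrumentation-bedrock';
import { CohereInstrumentation } from '@traceloop/instrumentation-cohere';
import { VertexAIInstrumentation } from '@traceloop/instrumentation-vertexai';

export function createLLMInstrumentations() {
	// Shared handler so instrumentation failures surface in logs rather
	// than being swallowed.
	const exceptionLogger = (e: Error) => {
		console.warn('[Traceloop] Instrumentation exception:', e.message);
	};
	return [
		// enrichTokens is specific to the OpenAI instrumentation (see review)
		new OpenAIInstrumentation({ exceptionLogger, enrichTokens: true }),
		new AnthropicInstrumentation({ exceptionLogger }),
		new BedrockInstrumentation({ exceptionLogger }),
		new CohereInstrumentation({ exceptionLogger }),
		new VertexAIInstrumentation({ exceptionLogger }),
	];
}

In otel.ts the result is then spread into the NodeSDK instrumentations array:

import { NodeSDK } from '@opentelemetry/sdk-node';

const sdk = new NodeSDK({
	// ...resource, exporters, and other instrumentations elided
	instrumentations: [...createLLMInstrumentations()],
});
sdk.start();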

Testing

  • All 572 runtime tests pass
  • Full SDK build passes
  • Manual testing with AGENTUITY_DEBUG_OTEL_CONSOLE=true confirms LLM spans are now captured
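
For example, a run like the following (the dev command shown is illustrative) prints the captured spans, including their gen_ai.* attributes, to the console:

AGENTUITY_DEBUG_OTEL_CONSOLE=true bun run dev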

Summary by CodeRabbit

  • Chores

    • Updated and expanded instrumentation dependencies to support multiple AI providers including OpenAI, Anthropic, Bedrock, Cohere, and Vertex AI.
    • Enhanced token tracking and enrichment configuration.
    • Upgraded core semantic conventions and server SDK to latest versions.
  • Refactor

    • Reorganized telemetry initialization for improved LLM tracing and observability collection across all supported providers.

coderabbitai bot commented Jan 22, 2026

📝 Walkthrough

Upgraded Traceloop dependencies and added instrumentation packages for multiple LLM providers. Updated semantic convention attribute references in token handling. Introduced centralized LLM instrumentation factory function and integrated it into OpenTelemetry initialization, replacing previous initialization flow.

Changes

  • Dependency Updates (packages/runtime/package.json): Updated @traceloop/ai-semantic-conventions (0.21.0 → ^0.22.5) and @traceloop/node-server-sdk (0.21.1 → ^0.22.6); added instrumentation packages for Anthropic, Bedrock, Cohere, OpenAI, and Vertex AI (all ^0.22.5)
  • Semantic Convention Migration (packages/runtime/src/_tokens.ts): Replaced SpanAttributes imports with ATTR_GEN_AI_* constants; updated all attribute checks and token extraction references to the new naming conventions (e.g., LLM_SYSTEM → ATTR_GEN_AI_SYSTEM, LLM_USAGE_PROMPT_TOKENS → ATTR_GEN_AI_USAGE_INPUT_TOKENS); see the sketch after this list
  • Instrumentation Integration (packages/runtime/src/otel/llm-instrumentations.ts, packages/runtime/src/otel/otel.ts): Created new createLLMInstrumentations() factory function centralizing setup for Anthropic, Bedrock, Cohere, OpenAI, and Vertex AI instrumentations; integrated into the NodeSDK initialization array; removed the previous Traceloop.initialize() call with appName and baseUrl parameters
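
As referenced above, a hedged sketch of the _tokens.ts migration; the accumulator shape and function name are illustrative, while the ATTR_GEN_AI_* constants are the ones this PR adopts:

import {
	ATTR_GEN_AI_SYSTEM,
	ATTR_GEN_AI_USAGE_INPUT_TOKENS,
	ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
} from '@opentelemetry/semantic-conventions/incubating';
import type { ReadableSpan } from '@opentelemetry/sdk-trace-base';

// Before: deprecated traceloop SpanAttributes.LLM_SYSTEM,
// LLM_USAGE_PROMPT_TOKENS, and LLM_USAGE_COMPLETION_TOKENS.
// After: official OpenTelemetry GenAI semantic conventions.
function accumulateTokens(
	span: ReadableSpan,
	totals: { input: number; output: number }
) {
	// Skip spans that are not LLM calls (no gen_ai.system attribute).
	if (span.attributes[ATTR_GEN_AI_SYSTEM] === undefined) return;
	totals.input += Number(span.attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS] ?? 0);
	totals.output += Number(span.attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS] ?? 0);
}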

🚥 Pre-merge checks: ✅ 1 of 1 passed
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)



github-actions bot commented Jan 22, 2026

📦 Canary Packages Published

version: 0.1.24-92420bc

Packages
Package Version URL
@agentuity/cli 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-cli-0.1.24-92420bc.tgz
@agentuity/runtime 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-runtime-0.1.24-92420bc.tgz
@agentuity/evals 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-evals-0.1.24-92420bc.tgz
@agentuity/server 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-server-0.1.24-92420bc.tgz
@agentuity/auth 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-auth-0.1.24-92420bc.tgz
@agentuity/core 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-core-0.1.24-92420bc.tgz
@agentuity/opencode 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-opencode-0.1.24-92420bc.tgz
@agentuity/frontend 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-frontend-0.1.24-92420bc.tgz
@agentuity/react 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-react-0.1.24-92420bc.tgz
@agentuity/workbench 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-workbench-0.1.24-92420bc.tgz
@agentuity/schema 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-schema-0.1.24-92420bc.tgz
Install

Add to your package.json:

{
  "dependencies": {
    "@agentuity/cli": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-cli-0.1.24-92420bc.tgz",
    "@agentuity/runtime": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-runtime-0.1.24-92420bc.tgz",
    "@agentuity/evals": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-evals-0.1.24-92420bc.tgz",
    "@agentuity/server": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-server-0.1.24-92420bc.tgz",
    "@agentuity/auth": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-auth-0.1.24-92420bc.tgz",
    "@agentuity/core": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-core-0.1.24-92420bc.tgz",
    "@agentuity/opencode": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-opencode-0.1.24-92420bc.tgz",
    "@agentuity/frontend": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-frontend-0.1.24-92420bc.tgz",
    "@agentuity/react": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-react-0.1.24-92420bc.tgz",
    "@agentuity/workbench": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-workbench-0.1.24-92420bc.tgz",
    "@agentuity/schema": "https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-schema-0.1.24-92420bc.tgz"
  }
}

Or install directly:

bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-cli-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-runtime-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-evals-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-server-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-auth-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-core-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-opencode-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-frontend-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-react-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-workbench-0.1.24-92420bc.tgz
bun add https://agentuity-sdk-objects.t3.storage.dev/npm/0.1.24-92420bc/agentuity-schema-0.1.24-92420bc.tgz
CLI Executables
Platform Version URL
darwin-x64 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/binary/0.1.24-92420bc/agentuity-darwin-x64.gz
darwin-arm64 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/binary/0.1.24-92420bc/agentuity-darwin-arm64.gz
linux-x64 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/binary/0.1.24-92420bc/agentuity-linux-x64.gz
linux-arm64 0.1.24-92420bc https://agentuity-sdk-objects.t3.storage.dev/binary/0.1.24-92420bc/agentuity-linux-arm64.gz
Run Canary CLI
agentuity canary 0.1.24-92420bc [command] [...args]

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/runtime/package.json`:
- Around line 49-55: The package.json contains non-existent `@traceloop` versions
causing install failures; update the dependency versions for the listed packages
to published releases: change "@traceloop/instrumentation-openai" to "0.18.0",
"@traceloop/node-server-sdk" to "0.18.1", "@traceloop/instrumentation-bedrock"
to "0.19.0", "@traceloop/instrumentation-cohere" to "0.16.0", and verify
"@traceloop/ai-semantic-conventions" (use the published 0.18.0 or the correct
0.22.x if available) while keeping the caret policy consistent; after updating
the dependency entries in package.json run a fresh install (delete
lockfile/node_modules and run npm install or yarn install) to regenerate the
lockfile and ensure resolution.
🧹 Nitpick comments (1)
packages/runtime/src/otel/llm-instrumentations.ts (1)

11-14: Consider using a higher log level for instrumentation exceptions.

The exceptionLogger logs at debug level, which may cause exceptions to be missed in production where debug logs are typically suppressed. Consider using console.warn or console.error for instrumentation errors.

♻️ Suggested change
 export function createLLMInstrumentations() {
 	const exceptionLogger = (e: Error) => {
-		console.debug('[Traceloop] Instrumentation exception:', e.message);
+		console.warn('[Traceloop] Instrumentation exception:', e.message);
 	};
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 73e67d9 and 3dbfc9d.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • packages/runtime/package.json
  • packages/runtime/src/_tokens.ts
  • packages/runtime/src/otel/llm-instrumentations.ts
  • packages/runtime/src/otel/otel.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Use Prettier formatter with tabs (width 3), single quotes, and semicolons for TypeScript files
Use TypeScript strict mode with ESNext target and bundler moduleResolution
Use StructuredError from @agentuity/core for error handling

Files:

  • packages/runtime/src/otel/llm-instrumentations.ts
  • packages/runtime/src/_tokens.ts
  • packages/runtime/src/otel/otel.ts
packages/runtime/**/*.{ts,tsx}

📄 CodeRabbit inference engine (packages/runtime/AGENTS.md)

packages/runtime/**/*.{ts,tsx}: Every agent handler receives AgentContext with logger, tracer, storage (kv, vector, stream), and auth properties
Use ctx.logger instead of console.log for observability

Files:

  • packages/runtime/src/otel/llm-instrumentations.ts
  • packages/runtime/src/_tokens.ts
  • packages/runtime/src/otel/otel.ts
🧠 Learnings (2)
📚 Learning: 2025-12-13T14:15:18.261Z
Learnt from: jhaynie
Repo: agentuity/sdk PR: 168
File: packages/runtime/src/session.ts:536-546
Timestamp: 2025-12-13T14:15:18.261Z
Learning: The agentuity/runtime package is Bun-only; during code reviews, do not replace Bun-native APIs (e.g., Bun.CryptoHasher, Bun.serve, and other Bun namespace APIs) with Node.js alternatives. Review changes with the assumption that runtime runs on Bun, and ensure any edits preserve Bun compatibility and do not introduce Node.js-specific fallbacks. Apply this guidance broadly to files under packages/runtime (e.g., packages/runtime/src/...); if there are conditional environment checks, document why Bun is required and avoid dereferencing Bun-only APIs in non-Bun contexts.

Applied to files:

  • packages/runtime/src/otel/llm-instrumentations.ts
  • packages/runtime/src/_tokens.ts
  • packages/runtime/src/otel/otel.ts
📚 Learning: 2025-12-21T00:31:41.858Z
Learnt from: jhaynie
Repo: agentuity/sdk PR: 274
File: packages/cli/src/cmd/build/vite/server-bundler.ts:12-41
Timestamp: 2025-12-21T00:31:41.858Z
Learning: In Bun runtime, BuildMessage and ResolveMessage are global types and are not exported from the bun module. Do not import { BuildMessage } from 'bun' or similar; these types are available globally and should be used without import. This applies to all TypeScript files that target the Bun runtime within the repository.

Applied to files:

  • packages/runtime/src/otel/llm-instrumentations.ts
  • packages/runtime/src/_tokens.ts
  • packages/runtime/src/otel/otel.ts
🧬 Code graph analysis (1)
packages/runtime/src/otel/otel.ts (1)
packages/runtime/src/otel/llm-instrumentations.ts (1)
  • createLLMInstrumentations (11-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Pack & Upload
  • GitHub Check: Build
  • GitHub Check: Cloud Deployment Tests
  • GitHub Check: Queue SDK Tests
  • GitHub Check: Sandbox CLI Tests
  • GitHub Check: Package Installation & Usage Test
  • GitHub Check: Template Integration Tests
  • GitHub Check: Queue CLI Tests
  • GitHub Check: Playwright E2E Smoke Test
  • GitHub Check: SDK Integration Test Suite
  • GitHub Check: Framework Integration Tests (TanStack & Next.js)
🔇 Additional comments (6)
packages/runtime/src/_tokens.ts (2)

5-10: LGTM!

The migration to @opentelemetry/semantic-conventions/incubating with the ATTR_GEN_AI_* constants correctly replaces the deprecated traceloop SpanAttributes.LLM_* attributes. This aligns with official OpenTelemetry GenAI semantic conventions.


83-100: LGTM!

The attribute mapping is correct—ATTR_GEN_AI_USAGE_INPUT_TOKENS and ATTR_GEN_AI_USAGE_OUTPUT_TOKENS correctly replace the prompt/completion token attributes. The token accumulation logic is preserved.

packages/runtime/src/otel/llm-instrumentations.ts (1)

16-35: LGTM!

The environment variable parsing is robust with case-insensitive comparison and sensible default. The enrichTokens option being specific to OpenAIInstrumentation aligns with traceloop's API where this feature is provider-specific.

packages/runtime/src/otel/otel.ts (3)

34-34: LGTM!

Clean import of the new factory function for centralized LLM instrumentation management.


300-310: Good fix for the root cause.

Creating the LLM instrumentations via the factory function and spreading them into the NodeSDK instrumentations array correctly addresses the root cause where traceloop.initialize() with tracingEnabled: false prevented LLM instrumentation registration.


314-316: LGTM!

Logging after instrumentationSDK.start() ensures telemetry is actually configured before logging success.


@jhaynie jhaynie merged commit 6fe504a into main Jan 22, 2026
23 of 25 checks passed
@jhaynie jhaynie deleted the task/investigate-missing-openai-telemetry branch January 22, 2026 18:10