
fix(llm): add GLM compatibility for reasoning_content field #55

Open

jknightai wants to merge 1 commit into Narcooo:master from jknightai:fix/glm-reasoning-content-compatibility

Conversation

@jknightai

Summary

Add compatibility for Zhipu GLM responses where the actual text may appear in the non-standard `reasoning_content` field while `content` is empty.

Changes

  • packages/core/src/llm/provider.ts
      • Streaming path: accept `delta.reasoning_content` as an output fallback.
      • Sync path: fall back to `message.reasoning_content` when `message.content` is empty.
      • Uses type assertions to keep the TypeScript build passing.
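The fallback described above can be sketched as follows. This is an illustrative simplification, not the actual provider.ts code; the interface and function names here are hypothetical.

```typescript
// Shape of the relevant fields in a GLM chat completion delta/message.
interface GLMMessage {
  content?: string;
  // Non-standard field some Zhipu GLM models populate instead of content.
  reasoning_content?: string;
}

// Streaming path: accept delta.reasoning_content as an output fallback.
function extractDeltaText(delta: GLMMessage): string {
  return delta.content || delta.reasoning_content || "";
}

// Sync path: fall back to reasoning_content only when content is empty,
// so standard OpenAI-style responses are unaffected.
function extractMessageText(message: GLMMessage): string {
  const text = message.content ?? "";
  return text !== "" ? text : message.reasoning_content ?? "";
}
```

In the real code the non-standard field would be reached through a type assertion (e.g. casting the delta to a wider type), since `reasoning_content` is not part of the standard OpenAI response types.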

Why

Some GLM models can return:

  • content: ""
  • reasoning_content: "actual text"

Without this fallback, InkOS may throw `LLM returned empty response`.
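For illustration, a GLM response in this state might look like the following (a hypothetical payload shape; actual GLM responses carry additional fields):

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "",
        "reasoning_content": "actual text"
      }
    }
  ]
}
```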

Validation

  • pnpm -r build passes after changes.
  • Runtime behavior verified with GLM endpoint.

@jknightai
Author

I'd like to add my local verification results: on GLM, I can reproduce cases where `content` is empty while the actual text is in `reasoning_content`, which caused `LLM returned empty response` before this fix. This PR adds fallback handling in both the sync and streaming paths and uses type assertions for the non-standard field; the standard OpenAI-style `content` path is unchanged.

@Narcooo
Owner

Narcooo commented Mar 19, 2026


Thanks for the PR! Reviewed the code: a clean, precise change that covers both the streaming and sync paths without affecting the standard OpenAI/Anthropic flow.

Will merge.
