
Handle Ollama models that return content in thinking field #998

Open

Br1an67 wants to merge 1 commit into datalab-to:master from Br1an67:fix/issue-992-ollama-thinking-field

Conversation


@Br1an67 Br1an67 commented Mar 1, 2026

Summary

Fix Ollama inference failure (Expecting value: line 1 column 1 (char 0)) when models like qwen3-vl return the generated content in the thinking field instead of the response field.

Some Ollama models with thinking mode enabled (e.g., qwen3-vl) place the actual output in the thinking field while leaving response empty. The current code only reads from response, causing json.loads("") to fail.

Closes #992

Changes

  • Added a fallback in OllamaService.__call__() to read from the thinking field when response is empty
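
Below is a minimal sketch of that fallback. The extract_json helper and the response_data structure are illustrative, not the actual OllamaService.__call__() code in marker; the response and thinking field names follow the Ollama reply behaviour described in this PR.

```python
import json

def extract_json(response_data: dict):
    """Parse the model's JSON output from an Ollama reply (illustrative sketch)."""
    text = response_data.get("response", "")
    if not text.strip():
        # Thinking-mode models such as qwen3-vl may leave "response" empty
        # and place the generated text in "thinking" instead.
        text = response_data.get("thinking", "")
    return json.loads(text)
```

With the fallback in place, json.loads no longer receives an empty string when the model routes its output through the thinking field.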

Development

Successfully merging this pull request may close these issues.

Ollama inference failed : Expecting value: line 1 column 1 (char 0)
