
[Feature] Reasoning traces #341

@kittrydge

Description


Add support for displaying reasoning traces (a.k.a. "thinking" output). Many LLM tools provide a way to toggle this.

Use Case

When using reasoning models, it's often helpful to look at the reasoning traces for troubleshooting, and to see whether the LLM is on the right track. If the LLM is confused or going in circles, I can interrupt it and nudge it in the right direction with a new prompt.

Proposed Solution

I imagine toggling the display of reasoning traces with a keyboard shortcut. This would be a good use case for streaming output (which was temporarily disabled in v1.18.0). Without streaming, the trace could perhaps be displayed in chunks, e.g. output once every N lines. Another option could be to write the output to a file.
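The "output once every N lines" idea could be sketched roughly as below. This is only an illustration: `emit` stands in for whatever the UI actually uses to render text, and the function name is made up for this example.

```python
def chunked_display(lines, n=5, emit=print):
    """Buffer incoming output lines and emit them in chunks of n lines.

    A minimal sketch of non-streaming chunked display: nothing is shown
    until n lines have accumulated, then the chunk is flushed at once.
    """
    buffer = []
    for line in lines:
        buffer.append(line)
        if len(buffer) >= n:
            emit("\n".join(buffer))
            buffer.clear()
    if buffer:
        # Flush any remainder when generation ends mid-chunk.
        emit("\n".join(buffer))
```

A larger N trades latency for less screen churn; N=1 degenerates to line-by-line output.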

Alternatives Considered

Perhaps I could obtain the reasoning output directly from llama.cpp? I'm not sure how to do this...
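One possible fallback, independent of any llama.cpp API: some reasoning models (DeepSeek-R1-style, for example) wrap their trace in `<think>...</think>` tags in the raw output, so the trace could be separated client-side. A sketch assuming that tag convention (other models may use different markers):

```python
import re

# Assumes the <think>...</think> convention; not guaranteed for all models.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text):
    """Split raw model output into (reasoning_trace, final_answer)."""
    reasoning = "\n".join(m.strip() for m in THINK_RE.findall(text))
    answer = THINK_RE.sub("", text).strip()
    return reasoning, answer
```

The toggle would then just decide whether the first element of the tuple is rendered.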

Additional Context

  • I have searched existing issues to ensure this is not a duplicate
  • This feature aligns with the project's goals (local-first AI assistance)

Metadata


Labels

enhancement (New feature or request)
