Add OpenAI-compatible (LM Studio) provider support + reasoning handling #6
Conversation
- Add provider to config + CLI (see the config sketch after this list)
- Implement chat/completions path with tool schema conversion
- Update README + config example for LM Studio
- Preserve Anthropic (MiniMax) default
…hinking
- Explain LM Studio setting for reasoning_content
- Note <think>…</think> fallback extraction
- Clarify Anthropic-style interleaved thinking for MiniMax M2
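As referenced in the first item above, a hypothetical sketch of the provider switch. The `provider` field and the `openai-compatible` value come from this PR; the remaining field names and defaults are assumptions for illustration, not the project's actual config schema.

```python
# Hypothetical config sketch; only `provider` / "openai-compatible" are
# confirmed by this PR, everything else is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str = "anthropic"   # Anthropic (MiniMax) default preserved
    model: str = "MiniMax-M2"
    base_url: str = ""            # endpoint of the OpenAI-compatible server
    api_key: str = "lm-studio"    # LM Studio accepts any placeholder key

# LM Studio's local OpenAI-compatible server listens on port 1234 by default.
lmstudio_config = LLMConfig(
    provider="openai-compatible",
    base_url="http://localhost:1234/v1",
)
```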
Thanks for your PR, our engineer will review it soon.
Thanks for the contribution! This is a solid start for LM Studio support. I've identified a few critical issues that need to be addressed before merging.

**Critical Issue 1: Architecture - Missing Provider Abstraction**

The current implementation mixes both providers in a single client class. Suggested refactoring:

- `llm/base.py`: `class BaseLLM(ABC):` (shared interface)
- `llm/anthropic.py`: `class AnthropicLLM(BaseLLM):`
- `llm/openai.py`: `class OpenAILLM(BaseLLM):`
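A minimal sketch of how that split could look. The class names and file layout come from the review above; the method names, signatures, and factory are assumptions for illustration, not the project's actual API.

```python
# Sketch of the suggested provider abstraction; chat()/make_llm() are
# illustrative assumptions, not the project's real interface.
from abc import ABC, abstractmethod

class BaseLLM(ABC):
    """Shared interface every provider implements."""

    @abstractmethod
    def chat(self, messages: list[dict], tools: list[dict] | None = None) -> dict:
        """Send one turn and return a normalized response dict."""

class AnthropicLLM(BaseLLM):
    def chat(self, messages, tools=None):
        # Would call the Anthropic-style messages endpoint (MiniMax default).
        raise NotImplementedError

class OpenAILLM(BaseLLM):
    def chat(self, messages, tools=None):
        # Would call the OpenAI-compatible /chat/completions endpoint (LM Studio).
        raise NotImplementedError

def make_llm(provider: str) -> BaseLLM:
    # LLMClient can stay a thin facade that delegates to the selected provider.
    return {"anthropic": AnthropicLLM, "openai-compatible": OpenAILLM}[provider]()
```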
**Critical Issue 2: Missing Thinking/Reasoning Implementation**

According to the MiniMax documentation, OpenAI-compatible responses include thinking in two ways: a `reasoning_content` field on the message, or inline `<think>…</think>` tags in the content. The current implementation (lines 239-278) does not yet handle either case.

**Important Issue 3: Missing Test Coverage**

The PR lacks tests for the new OpenAI provider functionality. Minimum required tests: tool schema conversion, tool_calls parsing, and reasoning extraction (`reasoning_content` plus the `<think>` fallback), all without network access. A sketch of such tests follows.
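One way the requested no-network unit tests could look. `split_think_tags` is a stand-in helper defined inline so the file runs as-is; the real tests would import the PR's own parsing functions instead.

```python
# Sketch of no-network unit tests for reasoning extraction (pytest style).
# split_think_tags is a hypothetical stand-in for the PR's parser.
import re

def split_think_tags(content: str) -> tuple[str, str]:
    """Return (reasoning, visible_text) from inline <think>…</think> content."""
    match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if not match:
        return "", content
    reasoning = match.group(1).strip()
    visible = (content[: match.start()] + content[match.end():]).strip()
    return reasoning, visible

def test_reasoning_content_field_preferred():
    message = {"content": "Answer.", "reasoning_content": "Chain of thought."}
    # When the server returns reasoning_content, no tag parsing is needed.
    assert message.get("reasoning_content") == "Chain of thought."

def test_think_tag_fallback():
    reasoning, visible = split_think_tags("<think>Plan the call.</think>Answer.")
    assert reasoning == "Plan the call."
    assert visible == "Answer."

def test_no_reasoning_present():
    reasoning, visible = split_think_tags("Just an answer.")
    assert reasoning == "" and visible == "Just an answer."
```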
Let me know if you need any clarification or help with the implementation.
- Extract BaseLLM and provider classes (AnthropicLLM, OpenAILLM)
- LLMClient becomes a thin facade that selects provider
- OpenAI path: parse reasoning_content and `<think>` fallback
- Convert tools to OpenAI schema; parse tool_calls (see the conversion sketch below)
- docs: LM Studio reasoning + interleaved thinking notes
- tests: add OpenAI provider unit tests (no network)
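As referenced in the list above, a hedged sketch of the tool schema conversion. It assumes Anthropic-style tools carry `name`/`description`/`input_schema`, which map onto OpenAI's `{"type": "function", ...}` wrapper; the function and example tool are illustrative, not the PR's actual code.

```python
# Illustrative Anthropic -> OpenAI tool schema conversion.
def to_openai_tool(tool: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Anthropic keeps the JSON Schema in input_schema;
            # OpenAI calls the same structure "parameters".
            "parameters": tool.get("input_schema", {"type": "object", "properties": {}}),
        },
    }

anthropic_tool = {
    "name": "read_file",
    "description": "Read a file from disk",
    "input_schema": {"type": "object", "properties": {"path": {"type": "string"}}},
}
print(to_openai_tool(anthropic_tool))
```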
Hi @JordiPosthumus,

Thank you for taking the time to contribute this PR and for adding LM Studio support! We really appreciate your effort in implementing the OpenAI-compatible provider with reasoning handling.

However, after careful review, we found that this implementation doesn't fully meet our current integration requirements and code standards. We apologize for not being able to merge it at this time.

That said, we do recognize the value of OpenAI protocol support. Our team is planning to implement a native OpenAI-compatible layer in the near future that will align with our architecture and design principles.

We hope you understand, and we truly appreciate your interest in improving Mini-Agent. Feel free to continue engaging with the project, and we look forward to your future contributions!

Best regards,
You are welcome! I love MiniMax M2, and I really, really want a harness where I can use it at full power with proper thinking on LM Studio (or MLX). This model is amazing!
feat(llm): add OpenAI-compatible (LM Studio) support and provider switch
- Add provider to LLMConfig (default: anthropic; also supports openai-compatible)
- Implement OpenAI chat/completions path with tool schema conversion
- Parse tool_calls from OpenAI-style responses
- Improve error reporting for non-2xx responses (both sketched below)
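A sketch of both items under assumed names. It uses `requests`, though the PR may use a different HTTP client, and the normalized call dict is an illustrative shape, not the project's actual one.

```python
# Illustrative sketch: POST to an OpenAI-compatible endpoint, surface
# non-2xx errors with the body included, and parse tool_calls.
import json
import requests

def post_chat(url: str, payload: dict) -> dict:
    resp = requests.post(url, json=payload, timeout=120)
    if not resp.ok:
        # Include status and body so LM Studio errors are actionable.
        raise RuntimeError(f"LLM request failed ({resp.status_code}): {resp.text}")
    return resp.json()

def parse_tool_calls(message: dict) -> list[dict]:
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append({
            "id": call["id"],
            "name": fn["name"],
            # OpenAI serializes tool arguments as a JSON string.
            "arguments": json.loads(fn["arguments"] or "{}"),
        })
    return calls
```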
Interleaved thinking behavior
On the Anthropic (MiniMax) path, assistant thinking blocks are preserved in the message history, enabling true interleaved thinking across steps. On the OpenAI-compatible (LM Studio) path, reasoning arrives either as reasoning_content or inline `<think>…</think>`. Mini Agent now captures both and renders them in the Thinking panel.
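A minimal sketch of that capture path, assuming an OpenAI-style message dict: prefer `reasoning_content` when LM Studio's setting exposes it, otherwise fall back to inline `<think>…</think>` tags. The function name and return shape are illustrative.

```python
# Illustrative reasoning capture: reasoning_content first, <think> fallback.
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def extract_thinking(message: dict) -> tuple[str | None, str]:
    """Return (reasoning, visible_content) from an OpenAI-style message."""
    reasoning = message.get("reasoning_content")
    content = message.get("content") or ""
    if reasoning:
        return reasoning, content
    match = THINK_RE.search(content)
    if match:
        # Strip the tags so only the answer reaches the chat view;
        # the extracted reasoning feeds the Thinking panel.
        cleaned = (content[: match.start()] + content[match.end():]).strip()
        return match.group(1).strip(), cleaned
    return None, content
```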
Test plan