This guide describes how to integrate the MiroThinker model with LobeChat, an open-source, modern LLM UI framework supporting tool usage (function calling).
First, launch the MiroThinker model using vLLM with the OpenAI-compatible API adapter. Ensure you include the tool parser plugin.
```shell
# Configuration
PORT=61002
MODEL_PATH=miromind-ai/MiroThinker-v1.0-30B

# Start vLLM server
vllm serve $MODEL_PATH \
    --served-model-name mirothinker \
    --port $PORT \
    --trust-remote-code \
    --chat-template chat_template.jinja \
    --tool-parser-plugin MirothinkerToolParser.py \
    --tool-call-parser mirothinker \
    --enable-auto-tool-choice
```

You can use either the self-hosted version of LobeChat or the web application.
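Before wiring up LobeChat, it can help to sanity-check the request shape the UI will send. The sketch below builds an OpenAI-compatible `/v1/chat/completions` payload with a tool definition; the `get_weather` tool and the `build_chat_request` helper are illustrative, not part of MiroThinker or vLLM. The commented-out send assumes the server above is running on `localhost:61002`.

```python
import json

def build_chat_request(model: str, user_message: str, tools=None) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    `tools` follows the OpenAI function-calling schema; with
    --enable-auto-tool-choice the model decides when to call them.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    if tools:
        payload["tools"] = tools
        payload["tool_choice"] = "auto"
    return payload

# Hypothetical tool definition, for illustration only
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = build_chat_request("mirothinker",
                             "What's the weather in Paris?",
                             [weather_tool])
print(json.dumps(payload, indent=2))

# To actually send it (requires the vLLM server above to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:61002/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer your-api-key"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Note that `model` must match the `--served-model-name` flag, not the checkpoint path.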
Navigate to Settings -> AI Service Provider to add a custom AI service provider.
Click the + button to add a new provider and configure it as follows:
| Field | Value | Description |
|---|---|---|
| Provider ID | miromind | Or any identifier you prefer. |
| Request Format | OPENAI | |
| API Key | your-api-key | Use any string if auth is disabled. |
| API Proxy Address | http://localhost:61002/v1 | Replace with your actual service address. |
After adding the provider, add the models you deploy to the service provider's model list:

- Add a new model with the ID `mirothinker` (must match `--served-model-name`).
- Crucial: enable the Function Calling capability toggle.
- Click "Check" to verify connectivity.
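You can also verify connectivity by hand: an OpenAI-compatible server lists its served models at `/v1/models`. A minimal sketch, assuming the server above is running; the `served_model_ids` helper and the example response are illustrative:

```python
import json
import urllib.request

def served_model_ids(models_response: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]

# Typical shape of the response; with the flags above, the list should
# contain the --served-model-name value ("mirothinker").
example_response = {
    "object": "list",
    "data": [{"id": "mirothinker", "object": "model"}],
}
print(served_model_ids(example_response))  # ['mirothinker']

# Live check (requires the vLLM server to be running):
# with urllib.request.urlopen("http://localhost:61002/v1/models") as r:
#     print(served_model_ids(json.load(r)))
```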
Once configured, you can use MiroThinker in LobeChat with full tool-calling capabilities.
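When the model decides to call a tool, the assistant message carries a `tool_calls` array instead of plain text content; LobeChat handles this automatically, but if you script against the endpoint yourself you must parse it. A sketch under the standard OpenAI chat-completions schema (the example message and `extract_tool_calls` helper are illustrative):

```python
import json

def extract_tool_calls(message: dict) -> list:
    """Return (function_name, parsed_arguments) pairs from an
    OpenAI-style assistant message, or [] for a plain text reply."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

# Example assistant message as emitted by the tool-call parser
# (shape follows the OpenAI chat-completions schema):
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": "{\"city\": \"Paris\"}"},
    }],
}
print(extract_tool_calls(message))  # [('get_weather', {'city': 'Paris'})]
```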
Requirements:

- vLLM >= 0.11.0