feat: allow manual endpoint type configuration and add qwen3.5-plus support#824

Open
uuuyuqi wants to merge 5 commits into agentscope-ai:main from uuuyuqi:qwen3.5-plus-support

Conversation

@uuuyuqi uuuyuqi commented Feb 25, 2026

AgentScope-Java Version

1.0.10-SNAPSHOT

Description

Allow manual endpoint type configuration and add qwen3.5-plus support.

Closes #817

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has been formatted with mvn spotless:apply
  • All tests are passing (mvn test)
  • Javadoc comments are complete and follow project conventions
  • Related documentation has been updated (e.g. links, examples, etc.)
  • Code is ready for review

…upport

Change-Id: I117c4adee6f8372691f9186e14b7dc26839a0528
Change-Id: I40f87398172d9330db09dccebf1fe8ab0516c51a
@uuuyuqi uuuyuqi requested a review from a team February 25, 2026 08:16
@gemini-code-assist

Summary of Changes

Hello @uuuyuqi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the DashScope integration by providing more control over endpoint selection and adding support for new models. It introduces an EndpointType enum to allow manual configuration of the API endpoint (text or multimodal) and ensures that models like qwen3.5-plus can be correctly routed to the multimodal API when required. The changes also maintain backward compatibility, ensuring existing configurations continue to work as expected.

Highlights

  • Endpoint Type Configuration: Introduced the ability to manually configure the endpoint type for DashScope models, allowing developers to explicitly select the text generation or multimodal API.
  • Qwen3.5-plus Support: Added specific support for the qwen3.5-plus model, ensuring it can be correctly routed to the multimodal API when needed.
  • Backward Compatibility: Maintained backward compatibility by providing a constructor that defaults to automatic API type detection based on the model name.
Changelog
  • DashScopeChatModel.java
    • Added a constructor to allow specifying the endpoint type explicitly.
    • Modified the existing constructor to call the new constructor with a null endpoint type for backward compatibility.
    • Updated the doStream method to use the endpoint type when determining whether to use the multimodal API.
    • Updated the builder class to include an endpointType field and a method to set it.
    • Added documentation to explain how to use the endpointType parameter.
  • DashScopeHttpClient.java
    • Modified the call and stream methods to use the endpoint type when selecting the API endpoint.
    • Added a selectEndpoint method that takes an EndpointType parameter.
    • Added an isMultimodalModel method to check if a model is a multimodal model.
    • Added a requiresMultimodalApi method that takes an EndpointType parameter.
    • Updated the requiresMultimodalApi method to use the isMultimodalModel method.
  • DashScopeRequest.java
    • Added @JsonIgnore annotation to the endpointType field to prevent it from being serialized to JSON.
    • Initialized endpointType to EndpointType.AUTO in the default constructor.
    • Added getter and setter methods for the endpointType property.
    • Added endpointType to the builder class.
  • EndpointType.java
    • Added a new EndpointType enum with values AUTO, TEXT, and MULTIMODAL.
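The routing behavior listed in this changelog can be condensed into a small standalone sketch. The names below (EndpointType, selectEndpoint, isMultimodalModel, and the endpoint paths) mirror the PR's description, but this is illustrative, not the actual AgentScope source:

```java
// Standalone sketch of the endpoint routing described in the changelog above.
// Names mirror the PR's description; this is not the actual AgentScope source,
// and the endpoint paths are assumptions for illustration.
public class EndpointRoutingSketch {

    enum EndpointType { AUTO, TEXT, MULTIMODAL }

    static final String TEXT_ENDPOINT =
            "/api/v1/services/aigc/text-generation/generation";
    static final String MULTIMODAL_ENDPOINT =
            "/api/v1/services/aigc/multimodal-generation/generation";

    // Name-based auto-detection: qvq prefix, -vl pattern, qwen3.5-plus prefix.
    static boolean isMultimodalModel(String modelName) {
        if (modelName == null) {
            return false;
        }
        String lower = modelName.toLowerCase();
        return lower.startsWith("qvq")
                || lower.contains("-vl")
                || lower.startsWith("qwen3.5-plus");
    }

    // An explicit TEXT or MULTIMODAL setting always wins; AUTO (or null)
    // falls back to name-based detection.
    static String selectEndpoint(String modelName, EndpointType type) {
        if (type == EndpointType.MULTIMODAL) {
            return MULTIMODAL_ENDPOINT;
        }
        if (type == EndpointType.TEXT) {
            return TEXT_ENDPOINT;
        }
        return isMultimodalModel(modelName) ? MULTIMODAL_ENDPOINT : TEXT_ENDPOINT;
    }

    public static void main(String[] args) {
        // qwen3.5-plus is auto-detected as multimodal in this sketch.
        System.out.println(selectEndpoint("qwen3.5-plus", EndpointType.AUTO));
        // A plain text model under AUTO routes to the text endpoint.
        System.out.println(selectEndpoint("qwen-max", EndpointType.AUTO));
    }
}
```

The key property is that an explicit endpoint type short-circuits detection entirely, so a model whose name matches no pattern can still be routed to the multimodal API.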


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request successfully introduces the EndpointType enum, allowing for explicit configuration of DashScope API endpoint types. This enhances flexibility by enabling manual override of the automatic model name-based detection, which is particularly useful for models like qwen3.5-plus that might not fit existing auto-detection patterns. The changes are well-integrated across DashScopeRequest, DashScopeChatModel, and DashScopeHttpClient, with clear documentation and appropriate handling of default values. The introduction of the isMultimodalModel helper method also improves code organization and maintainability.

@codecov

codecov bot commented Feb 25, 2026

Codecov Report

❌ Patch coverage is 90.56604% with 5 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
...a/io/agentscope/core/model/DashScopeChatModel.java | 70.00% | 3 Missing ⚠️
.../io/agentscope/core/model/DashScopeHttpClient.java | 92.30% | 0 Missing and 2 partials ⚠️


Change-Id: I2020b04333c2d142142ee6be4b56eeb019494a74
Change-Id: I122cae15d91cc685cccc4b2387eac707f17245bf

Copilot AI left a comment


Pull request overview

This PR introduces manual endpoint type configuration for DashScope models and adds support for the qwen3.5-plus model to resolve multimodal API routing issues reported in #817. Previously, qwen3.5-plus users encountered "url error" when attempting multimodal calls because the model name didn't match the auto-detection patterns (qvq prefix or -vl pattern). The solution adds both explicit endpoint control via a new EndpointType enum and automatic detection for qwen3.5-plus models.

Changes:

  • Added EndpointType enum (AUTO, TEXT, MULTIMODAL) to allow explicit API endpoint selection
  • Extended auto-detection logic to recognize qwen3.5-plus as a multimodal model
  • Updated DashScopeChatModel, DashScopeHttpClient, and DashScopeRequest to support endpoint type configuration while maintaining backward compatibility
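The backward-compatibility point above (the pre-existing constructor delegating with a null endpoint type) can be sketched as follows. The class is a hypothetical stand-in, not the real DashScopeChatModel:

```java
// Hypothetical stand-in for the constructor-overloading pattern described
// above; not the actual DashScopeChatModel source.
public class ChatModelSketch {

    enum EndpointType { AUTO, TEXT, MULTIMODAL }

    private final String modelName;
    private final EndpointType endpointType;

    // Pre-existing constructor: old call sites keep compiling unchanged.
    ChatModelSketch(String modelName) {
        this(modelName, null);
    }

    // New constructor: null is normalized to AUTO so downstream routing
    // code never needs a null check.
    ChatModelSketch(String modelName, EndpointType endpointType) {
        this.modelName = modelName;
        this.endpointType = (endpointType == null) ? EndpointType.AUTO : endpointType;
    }

    String modelName() {
        return modelName;
    }

    EndpointType endpointType() {
        return endpointType;
    }

    public static void main(String[] args) {
        System.out.println(new ChatModelSketch("qwen3.5-plus").endpointType());
        System.out.println(
                new ChatModelSketch("qwen3.5-plus", EndpointType.MULTIMODAL).endpointType());
    }
}
```

Normalizing null to AUTO at the constructor boundary keeps the "explicit type wins, otherwise auto-detect" decision in one place.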

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

File | Description
EndpointType.java | New enum defining three endpoint types with comprehensive documentation
DashScopeHttpClient.java | Updated endpoint selection logic to support explicit type override; added qwen3.5-plus pattern detection
DashScopeChatModel.java | Added endpointType field with builder support and backward-compatible constructor overloading
DashScopeRequest.java | Added JsonIgnore endpointType field with proper default initialization
DashScopeHttpClientTest.java | Comprehensive test coverage for new endpoint type routing and qwen3.5-plus detection
DashScopeChatModelTest.java | Tests for builder and constructor patterns with endpointType parameter

Comment on lines 446 to 455
* <p>Use this when the model name doesn't match the auto-detection patterns but
* you need to use a specific API. For example, qwen3.5-plus is a multimodal-capable
* model but its name doesn't match the auto-detection patterns.
*
* <p>Example:
* <pre>{@code
* DashScopeChatModel model = DashScopeChatModel.builder()
* .apiKey("sk-xxx")
* .modelName("qwen3.5-plus")
* .endpointType(EndpointType.MULTIMODAL) // Force multimodal API for image inputs

Copilot AI Feb 26, 2026


The documentation is inconsistent with the actual implementation. The comment states "qwen3.5-plus is a multimodal-capable model but its name doesn't match the auto-detection patterns", however the code in DashScopeHttpClient.isMultimodalModel() (line 380) explicitly checks for "qwen3.5-plus" with lowerModelName.startsWith("qwen3.5-plus"), which means it DOES match the auto-detection patterns.

This documentation should be corrected to reflect that qwen3.5-plus is already auto-detected as multimodal. The purpose of the endpointType parameter is to allow manual override when needed, not specifically for qwen3.5-plus.

Suggested change
* <p>Use this when the model name doesn't match the auto-detection patterns but
* you need to use a specific API. For example, qwen3.5-plus is a multimodal-capable
* model but its name doesn't match the auto-detection patterns.
*
* <p>Example:
* <pre>{@code
* DashScopeChatModel model = DashScopeChatModel.builder()
* .apiKey("sk-xxx")
* .modelName("qwen3.5-plus")
* .endpointType(EndpointType.MULTIMODAL) // Force multimodal API for image inputs
* <p>Use this when the model name does not match the auto-detection patterns or
* when you need to override the automatically selected endpoint for a specific use
* case. For example, qwen3.5-plus is a multimodal-capable model that is already
* detected as multimodal, but you can still explicitly set the endpoint type when
* you know you will provide image inputs.
*
* <p>Example:
* <pre>{@code
* DashScopeChatModel model = DashScopeChatModel.builder()
* .apiKey("sk-xxx")
* .modelName("qwen3.5-plus")
* .endpointType(EndpointType.MULTIMODAL) // Explicitly use multimodal API for image inputs

Comment on lines 378 to 382
* <p>The model name determines which API is used when apiType is AUTO:
* <ul>
* <li>Vision models (qvq* or *-vl*) → MultiModal API</li>
* <li>Text models → Text Generation API</li>
* </ul>

Copilot AI Feb 26, 2026


The documentation is incomplete and outdated. It lists "Vision models (qvq* or *-vl*) → MultiModal API" but omits the newly added qwen3.5-plus pattern, which is also auto-detected as multimodal. The documentation should be updated to include all three patterns that trigger multimodal API routing: the qvq prefix, the -vl pattern, and the qwen3.5-plus prefix.

* <li>If endpointType is {@link EndpointType#MULTIMODAL} → multimodal API</li>
* <li>If endpointType is {@link EndpointType#AUTO}:
* <ul>
* <li>Models starting with "qvq" → multimodal API</li>

Copilot AI Feb 26, 2026


The documentation in selectEndpoint is incomplete. It lists the AUTO mode routing logic but only mentions "Models starting with 'qvq'" and "Models containing '-vl'" patterns. It should also document the "Models starting with 'qwen3.5-plus'" pattern that was added in this PR to fix the issue with qwen-3.5-plus model routing.

Suggested change
* <li>Models starting with "qvq" → multimodal API</li>
* <li>Models starting with "qvq" → multimodal API</li>
* <li>Models starting with "qwen3.5-plus" → multimodal API</li>

}
String lowerModelName = modelName.toLowerCase();
return lowerModelName.startsWith("qvq")
|| lowerModelName.contains("-vl")
Collaborator


Since the Qwen 3.5 family (e.g., qwen3.5-flash, qwen3.5-35b-a3b) is entirely multimodal, should we consider using prefix-based matching for detection?
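A hypothetical version of what the reviewer is proposing would widen the exact qwen3.5-plus match to a family-wide prefix. This is a sketch of the suggestion, not code from the PR:

```java
// Hypothetical variant of the detection logic: match the whole "qwen3.5"
// prefix rather than only "qwen3.5-plus". Sketch of the reviewer's
// suggestion, not the PR's actual code.
public class PrefixDetectionSketch {

    static boolean isMultimodalModel(String modelName) {
        if (modelName == null) {
            return false;
        }
        String lower = modelName.toLowerCase();
        return lower.startsWith("qvq")
                || lower.contains("-vl")
                || lower.startsWith("qwen3.5"); // covers qwen3.5-plus, qwen3.5-flash, ...
    }

    public static void main(String[] args) {
        System.out.println(isMultimodalModel("qwen3.5-flash")); // true under this variant
        System.out.println(isMultimodalModel("qwen-max"));      // false
    }
}
```

The trade-off is that a broad prefix silently routes any future qwen3.5-* model to the multimodal API, which is only safe if the whole family really is multimodal.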

* Automatically determine endpoint type based on model name.
*
* <p>This is the default behavior. The routing logic:
* <ul>
Collaborator


Routing logic isn't managed in this module. Let's skip the duplicate comments to keep the code clean.

*
* <p>This allows explicit control over which DashScope API endpoint to use:
* <ul>
* <li>{@link EndpointType#AUTO} - Automatic detection based on model name (default)</li>
Collaborator


Use a direct reference for EndpointType to avoid redundant descriptions.

@LearningGp
Collaborator

There are other multimodal models, such as qwen3-asr-flash. Should we consider accounting for them as well?
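One way to account for further multimodal families such as qwen3-asr-flash without growing an if-chain is a maintained prefix list. A hypothetical sketch, not code from the PR:

```java
import java.util.List;

// Hypothetical extensible detection using a prefix list, so new multimodal
// families (e.g. qwen3-asr) can be added in one place. Not the PR's code.
public class PrefixSetSketch {

    private static final List<String> MULTIMODAL_PREFIXES =
            List.of("qvq", "qwen3.5-plus", "qwen3-asr");

    static boolean isMultimodalModel(String modelName) {
        if (modelName == null) {
            return false;
        }
        String lower = modelName.toLowerCase();
        if (lower.contains("-vl")) {
            return true;
        }
        return MULTIMODAL_PREFIXES.stream().anyMatch(lower::startsWith);
    }

    public static void main(String[] args) {
        System.out.println(isMultimodalModel("qwen3-asr-flash")); // true under this sketch
        System.out.println(isMultimodalModel("qwen-plus"));       // false
    }
}
```

A single list keeps each new model family a one-line change and makes the full set of auto-detected patterns visible in one place, which also simplifies keeping the Javadoc in sync.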

Change-Id: I11d18dfb4adc57a38f123a7a966704da9fafa64b


Development

Successfully merging this pull request may close these issues.

[Bug]: Error occurred when using qwen-3.5-plus

3 participants