fix: skip DQ->MatMulNBits fusion when weight/scale initializer is shared #28326

Open

Rishi-Dave wants to merge 1 commit into microsoft:main from Rishi-Dave:rishidave/fix/qdq-matmulnbits-shared-weight

Conversation

@Rishi-Dave (Contributor)

Summary

  • Reject the DQ→MatMulNBits fusion when the weight or scale initializer is shared by multiple consumers (e.g. tied-embedding pattern).
  • Prevents a crash in TransposeDQWeightsForMatMulNBits ("Missing required scale") when loading models like Whisper's decoder_model_merged_uint8.onnx.
  • Adds a regression test covering Int4x2 and UInt4x2 with and without zero-points.

Motivation

Fixes #28306.

ORT 1.25 regressed on quantized Whisper decoder models. PR #27769 broadened the DQ→MatMulNBits fusion, but its CheckOutputEdges guard only checks DQ output edges; it does not catch the case where two DQ nodes share the same weight + scale initializers at their inputs. The first fusion consumes the shared initializer; the second fusion then asserts in qdq_actions.cc:136 because the initializer has been removed from the graph.
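
For reference, the offending pattern looks like this: each DQ has exactly one output edge, so CheckOutputEdges passes for each node individually, while the sharing sits on the initializer inputs:

```
weight (Int4/UInt4 data) ─┬─> DQ_1 ─> MatMul_1
                          └─> DQ_2 ─> MatMul_2
scale ────────────────────┬─> DQ_1
                          └─> DQ_2
```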

Changes

  • onnxruntime/core/optimizer/qdq_transformer/selectors_actions/qdq_selectors.cc: in DQMatMulNodeGroupSelector::Check, after the existing CheckOutputEdges guard, return false if either the weight initializer or the scale initializer has more than one consumer node. This conservatively preserves both DQ nodes as DequantizeLinear + MatMul rather than attempting a fusion whose first half would invalidate the second; a sketch of the guard follows this list.
  • onnxruntime/test/optimizer/qdq_matmulnbits_transformer_test.cc: adds DQMatMulNotConvertedToMatMulNBits_SharedWeight (4 variants: Int4x2/UInt4x2 × with/without zero-points). The test builds two DQ nodes pointing at a shared weight + shared scale initializer, runs the QDQ transformer at TransformerLevel::Level2, and asserts that no MatMulNBits is emitted (MatMul=2, DequantizeLinear=2, MatMulNBits=0). Without the fix, the second fusion crashes before the assertion runs, so this is a real regression guard; the test's shape is sketched after the Test Plan.
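
For concreteness, a minimal sketch of the selector-side guard. The consumer-count condition is taken from the commit message below; the surrounding variable names and the exact context in qdq_selectors.cc are assumptions:

```cpp
// Sketch only: inside DQMatMulNodeGroupSelector::Check, after the existing
// CheckOutputEdges guard has passed. dq_node is the DequantizeLinear node
// feeding the MatMul; input 0 is the quantized weight, input 1 the scale.
const NodeArg* weight_arg = dq_node.InputDefs()[0];
const NodeArg* scale_arg = dq_node.InputDefs()[1];

// If any other node also consumes the weight or scale initializer (e.g. the
// second DQ in a tied-embedding pattern), the first fusion would remove an
// initializer the other consumer still needs, so reject the fusion here.
if (graph.GetConsumerNodes(weight_arg->Name()).size() > 1 ||
    graph.GetConsumerNodes(scale_arg->Name()).size() > 1) {
  return false;
}
```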

Test Plan

  • C++ unit tests: DQMatMulNotConvertedToMatMulNBits_SharedWeight is included in onnxruntime_test_all and will run in CI.
  • Local lint: lintrunner -a runs clean on the diff.
  • Manual: loading onnx-community/whisper-tiny decoder_model_merged_uint8.onnx via InferenceSession no longer asserts in TransposeDQWeightsForMatMulNBits.
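
For reference, the new test's shape, roughly. This is a sketch in the style of the existing ModelTestBuilder-based tests in this file; the helper signatures, shapes, and initializer values are assumptions, and the DQ block attributes and zero-point variants are omitted for brevity:

```cpp
// Two DQ->MatMul paths that reference the SAME weight and scale initializers.
auto build_test_case = [](ModelTestBuilder& builder) {
  NodeArg* input = builder.MakeInput<float>({1, 8}, -1.0f, 1.0f);

  // Shared initializers: both DequantizeLinear nodes consume these.
  NodeArg* weight = builder.MakeInitializer<UInt4x2>({8, 8}, UInt4x2(0, 0), UInt4x2(15, 15));
  NodeArg* scale = builder.MakeInitializer<float>({8}, 0.01f, 0.02f);

  NodeArg* dq1_out = builder.MakeIntermediate();
  NodeArg* dq2_out = builder.MakeIntermediate();
  builder.AddNode("DequantizeLinear", {weight, scale}, {dq1_out});
  builder.AddNode("DequantizeLinear", {weight, scale}, {dq2_out});

  builder.AddNode("MatMul", {input, dq1_out}, {builder.MakeOutput()});
  builder.AddNode("MatMul", {input, dq2_out}, {builder.MakeOutput()});
};

// After the QDQ transformer runs at TransformerLevel::Level2, the fusion must
// have been skipped: both DQ->MatMul pairs survive and no MatMulNBits appears.
auto check_graph = [](InferenceSessionWrapper& session) {
  auto op_to_count = CountOpsInGraph(session.GetGraph());
  EXPECT_EQ(op_to_count["MatMul"], 2);
  EXPECT_EQ(op_to_count["DequantizeLinear"], 2);
  EXPECT_EQ(op_to_count["com.microsoft.MatMulNBits"], 0);
};
```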

Fixes #28306

fix: skip DQ->MatMulNBits fusion when weight/scale initializer is shared

When two DQ nodes reference the same weight or scale initializer (the tied
embedding pattern seen in Whisper's decoder_model_merged_uint8.onnx), the
first DQ->MatMul fusion consumes the shared initializer. The second fusion
then crashes in TransposeDQWeightsForMatMulNBits with "Missing required scale"
because the initializer has already been removed from the graph.

The existing CheckOutputEdges guard does not catch this because each DQ node
has exactly one output edge — the sharing is at the initializer input level,
not the DQ output level.

Fix: in DQMatMulNodeGroupSelector::Check, after the CheckOutputEdges guard,
reject fusion if graph.GetConsumerNodes(weight_arg->Name()).size() > 1 or
graph.GetConsumerNodes(scale_arg->Name()).size() > 1. This preserves both
DQ nodes as-is when their initializers are shared.

Adds regression test DQMatMulNotConvertedToMatMulNBits_SharedWeight covering
Int4x2 and UInt4x2 with and without zero-points.

Fixes microsoft#28306

Copilot AI left a comment

Pull request overview

This PR fixes a regression in the QDQ transformer by preventing DequantizeLinear -> MatMul/Gemm from being fused into MatMulNBits when the DQ’s weight or scale initializer is shared across multiple consumers (e.g., tied embeddings), which previously could lead to a crash during TransposeDQWeightsForMatMulNBits.

Changes:

  • Add a selector-side guard to skip DQ→MatMulNBits fusion when the weight or scale initializer has multiple consumers.
  • Add a regression test that constructs two DQ→MatMul paths sharing the same weight+scale initializers and asserts that no MatMulNBits nodes are produced.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

Changed files:

  • onnxruntime/core/optimizer/qdq_transformer/selectors_actions/qdq_selectors.cc: adds a consumer-count guard in DQMatMulNodeGroupSelector::Check to reject fusion when weight/scale initializers are shared.
  • onnxruntime/test/optimizer/qdq_matmulnbits_transformer_test.cc: adds a regression test ensuring shared weight/scale initializers prevent fusion and avoid the prior crash.


Development

Successfully merging this pull request may close these issues.

Error loading quantized Whisper models with onnxruntime 1.25
