
[OMNIML-3252][ONNX] Add real Q/DQ scales in Autotune #951

Merged

gcunhase merged 43 commits into NVIDIA:main from gcunhase:dev/gcunhasergio/autotune_real_qdq_scales on Mar 11, 2026

Conversation

@gcunhase
Contributor

@gcunhase gcunhase commented Mar 2, 2026

What does this PR do?

Type of change: New feature

Overview: ONNX Autotune (also called Auto Q/DQ) is currently a standalone ModelOpt feature that automatically inserts Q/DQ nodes where relevant, based on information obtained from TensorRT inference. One issue is that the scales in those Q/DQ nodes are random.

This PR does 2 major things:

  1. Integrates Auto Q/DQ into the ONNX quantization workflow; and
  2. Enables calibration data to be used to obtain the correct scales for the Q/DQ nodes.

Usage

```bash
$ python -m modelopt.onnx.quantization --onnx_path=model.onnx --autotune={quick,default,extensive}
```

Please see __main__.py for other args.
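For programmatic use, the same flow should be reachable through the Python API. The sketch below is hedged: the `quantize()` entry point exists in `modelopt.onnx.quantization`, but the exact name and accepted values of its autotune parameter are assumed from this PR's walkthrough rather than confirmed.

```python
# Hedged sketch: invoking quantization with autotune from Python.
# Assumption: quantize() accepts an autotune preset mirroring the CLI
# {quick, default, extensive} choices; the real parameter may differ.
from modelopt.onnx.quantization import quantize

quantize(
    onnx_path="model.onnx",
    autotune="default",  # assumed to mirror the --autotune CLI presets
)
```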

Testing

  1. Added unittest for Q/DQ node placement validation: tests/gpu/onnx/quantization/test_autotune_quantization_integration.py

  2. Verified that accuracy was recovered by integrating MOQ with Autotune. Results with ViT on an RTX 3090 with TRT 10.12.0.36 (`--stronglyTyped`), as per examples/onnx_ptq:

| Model | Top-1 acc | Top-5 acc |
|-------|-----------|-----------|
| FP32 | 85.1% | 97.5% |
| FP16 (FP32 with `--fp16`) | 85.1% | 97.5% |
| Quant (MOQ) | 82.4% | 96.4% |
| Quant (Autotune) | 0.1% | 0.5% |
| Quant (MOQ + Autotune) | 79.6% | 95.0% |

Notice that accuracy was mostly recovered when moving from standalone Autotune to MOQ + Autotune (real Q/DQ scales). The remaining gap between MOQ and MOQ + Autotune is likely due to some sensitive nodes being quantized, such as BiasAdd (see bug 5916898).

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: Yes
  • Did you add or update any necessary documentation?: No (will be done in a different PR)
  • Did you update Changelog?: No

Summary by CodeRabbit

  • New Features

    • Autotuning added to ONNX quantization: CLI flags, presets, per-region tuning, and FP8/INT8 support; accepts in-memory models and optional output dirs; node-filter loading and explicit-flag CLI behavior.
    • Activation-operation accessor exposed and autotune helpers added to the package API.
  • Bug Fixes

    • Safer graph rewiring to avoid corrupting quantized graphs when targets are absent.
  • Tests

    • New integration test and model helper validating autotune quantization consistency.

Additional information

To reproduce accuracy with ViT, call download_example_onnx.py and image_prep.py without --fp16.

If --fp16 is used here, quantizing this model with --autotune results in the following error:

```
[modelopt][onnx] - ERROR - Benchmark failed: Converting dtype('float16') to a ctypes type
```

This is fixed in #978.

@gcunhase gcunhase requested a review from a team as a code owner March 2, 2026 18:15
@gcunhase gcunhase requested a review from ajrasane March 2, 2026 18:15
@coderabbitai
Contributor

coderabbitai Bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds ONNX quantization autotune: CLI presets and flags, autotune workflows and helpers, integration of autotune results into FP8/INT8 quantizers and ORT configuration, defensive graph rewiring, an activation-op helper, and tests plus test-model helpers.

Changes

| Cohort | File(s) | Summary |
|--------|---------|---------|
| Activation Operations Taxonomy | `modelopt/onnx/op_types.py` | Added `get_activation_ops()` returning a set of activation op names. |
| CLI / Entrypoint | `modelopt/onnx/quantization/__main__.py` | Added the `--autotune` CLI flag, presets, `apply_mode_presets()`, and propagation of autotune-related args into the `quantize` invocation. |
| Autotune public API & CLI helpers | `modelopt/onnx/quantization/autotune/__init__.py`, `modelopt/onnx/quantization/autotune/__main__.py` | Exported `MODE_PRESETS`, `StoreWithExplicitFlag`, and `get_node_filter_list`; added `get_node_filter_list()`; renamed the action class to `StoreWithExplicitFlag` and applied it across parser args. |
| Autotune workflows | `modelopt/onnx/quantization/autotune/workflows.py` | `region_pattern_autotuning_workflow()` accepts `onnx.ModelProto` or a path and an optional `output_dir` (creates a temp dir when `None`), adds a `default_dq_dtype` param, and ensures temp-dir cleanup. |
| Autotuner base & insertion-point APIs | `modelopt/onnx/quantization/autotune/autotuner_base.py` | Added imports and the methods `get_resolved_insertion_points()` and `get_ort_quantization_config()`; `export_onnx` delegates insertion-point resolution to these helpers. |
| Quantize integration & autotune helper | `modelopt/onnx/quantization/quantize.py` | Added `_find_nodes_to_quantize_autotune()` and extended the `quantize()` signature to accept and merge autotune parameters/results into the quantization flow. |
| FP8 quantizer | `modelopt/onnx/quantization/fp8.py` | Added an `autotune: bool = False` param; when `autotune` is True, skips several pattern-detection/expansion steps and propagates `op_types_needing_output_quant`. |
| INT8 quantizer | `modelopt/onnx/quantization/int8.py` | Added an `autotune: bool = False` param; gates automatic exclusions and node expansion when `autotune` is True (preserves non-autotune behavior otherwise). |
| ORT utilities | `modelopt/onnx/quantization/ort_utils.py` | Added an `op_types_needing_output_quant` parameter to `configure_ort()` and integrated it into the output-quantization exclusion logic and TRT guided options. |
| Graph utilities | `modelopt/onnx/quantization/graph_utils.py` | Hardened `remove_partial_input_qdq()` to locate target node inputs by DQ output name and skip rewiring when the target or a matching input is missing. |
| Autotune internals & imports | `modelopt/onnx/quantization/autotune/...` | Added imports (e.g., `get_activation_ops`, insertion-point types/helpers) and wiring to compute resolved insertion points and produce ORT configs from autotune results. |
| Tests & test models | `tests/_test_utils/onnx/quantization/autotune/models.py`, `tests/gpu/onnx/quantization/test_autotune_quantization_integration.py` | Added a `_create_simple_resnet18_model()` helper and an integration test, `test_autotune_quantization_integration`, verifying that autotune-derived placements are applied by `quantize`. |
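To make the new activation-op accessor from the table above concrete, here is a minimal sketch. Only the function name and its return type (a set of op names) come from the walkthrough; the specific members checked below, such as "Relu", are assumptions.

```python
# Minimal sketch of get_activation_ops(); the walkthrough states only that it
# returns a set of activation op names, so the membership results below are
# assumed, not guaranteed.
from modelopt.onnx.op_types import get_activation_ops

activation_ops = get_activation_ops()
node_op_types = ["Conv", "Relu", "MatMul", "Sigmoid"]  # example graph op types
needs_output_quant = [op for op in node_op_types if op in activation_ops]
print(needs_output_quant)  # e.g. ["Relu", "Sigmoid"] if those ops are in the set
```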

Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Entry as quantize CLI
    participant Finder as _find_nodes_to_quantize_autotune
    participant Autotune as Autotune Workflow
    participant Quantizer as FP8/INT8 Quantizer
    participant ORT as ORT Configurator
    participant Model as ONNX Model

    CLI->>Entry: invoke quantize(..., autotune=True)
    Entry->>Entry: apply_mode_presets(args)
    Entry->>Finder: _find_nodes_to_quantize_autotune(onnx_model,...)
    Finder->>Autotune: region_pattern_autotuning_workflow(model_or_path, output_dir?)
    Autotune->>Model: analyze regions & patterns
    Autotune-->>Finder: resolved insertion points & configs
    Finder->>Finder: get_resolved_insertion_points(best=True)
    Finder->>Finder: get_ort_quantization_config()
    Finder-->>Entry: nodes_to_quantize, no_quantize_inputs, op_types_needing_output_quant
    Entry->>Quantizer: quantize(nodes_to_quantize,..., autotune=True)
    Quantizer->>ORT: configure_ort(op_types_needing_output_quant)
    ORT-->>Quantizer: configuration applied
    Quantizer->>Model: apply quantization (skip pattern expansion for autotune)
    Quantizer-->>Entry: quantized_model
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 passed

| Check name | Status | Explanation |
|------------|--------|-------------|
| Title check | ✅ Passed | The title clearly identifies the main change: integrating real Q/DQ scales into the Autotune feature for ONNX quantization, which is the primary objective of this comprehensive PR. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 82.35%, which is sufficient; the required threshold is 80.00%. |
| Security Anti-Patterns | ✅ Passed | A comprehensive security review of all modified files in this PR against SECURITY.md guidelines found no security anti-patterns. |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gcunhase gcunhase requested a review from cjluo-nv March 2, 2026 18:16
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/unit/onnx/quantization/autotune/test_region.py (1)

16-21: ⚠️ Potential issue | 🟡 Minor

Remove duplicate license text block.

Lines 16-21 duplicate the license disclaimer already present in lines 10-14. This appears to be a copy-paste error.

🔧 Proposed fix
```diff
 # limitations under the License.
-
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
 """Tests for the Region class in the autotuner."""
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/onnx/quantization/autotune/test_region.py` around lines 16 - 21,
Remove the duplicated license disclaimer block that was accidentally
copy-pasted; locate the repeated Apache license/disclaimer text that appears a
second time and delete the redundant block so only the original license header
remains at the top of the file (ensure the first license header is preserved and
no other content is altered).
modelopt/onnx/quantization/fp8.py (1)

219-232: ⚠️ Potential issue | 🟠 Major

Potential AttributeError if nodes_to_exclude is None.

Same issue as in int8.py: line 232 calls nodes_to_exclude.extend() before validation on line 236. If nodes_to_exclude is passed as None, this will fail.

🐛 Proposed fix
```diff
     enable_gemv_detection_for_trt = kwargs.get("enable_gemv_detection_for_trt", True)
+    nodes_to_exclude = nodes_to_exclude or []
     if enable_gemv_detection_for_trt and not autotune:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/onnx/quantization/fp8.py` around lines 219 - 232, The block that
calls nodes_to_exclude.extend(...) when enable_gemv_detection_for_trt and not
autotune can raise AttributeError if nodes_to_exclude is None; before calling
find_nodes_from_matmul_to_exclude and extending, ensure nodes_to_exclude is
initialized to a list (e.g., if nodes_to_exclude is None assign an empty list)
or guard the extend call by creating a new list and assigning it back to
nodes_to_exclude; update the code around enable_gemv_detection_for_trt /
autotune, the find_nodes_from_matmul_to_exclude call, and the nodes_to_exclude
handling so extend is only called on a list.
modelopt/onnx/quantization/int8.py (1)

161-174: ⚠️ Potential issue | 🟠 Major

Potential AttributeError if nodes_to_exclude is None.

When enable_gemv_detection_for_trt is True and autotune is False, line 174 calls nodes_to_exclude.extend() before nodes_to_exclude is validated/converted by find_nodes_to_exclude() on line 178. If nodes_to_exclude is passed as None, this will raise an AttributeError.

🐛 Proposed fix
```diff
     enable_gemv_detection_for_trt = kwargs.get("enable_gemv_detection_for_trt", True)
+    nodes_to_exclude = nodes_to_exclude or []
     if enable_gemv_detection_for_trt and not autotune:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/onnx/quantization/int8.py` around lines 161 - 174, The code may call
nodes_to_exclude.extend(...) when nodes_to_exclude can be None; ensure
nodes_to_exclude is a list before extending: in the block guarded by
enable_gemv_detection_for_trt and not autotune, either initialize
nodes_to_exclude if None (e.g., nodes_to_exclude = nodes_to_exclude or []) or
call find_nodes_to_exclude() earlier and assign/normalize nodes_to_exclude
before using extend; update the logic around nodes_to_exclude,
find_nodes_from_matmul_to_exclude, and find_nodes_to_exclude to guarantee
nodes_to_exclude is always a list when extending.
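As a standalone demonstration of the guard suggested for both `fp8.py` and `int8.py`, here is a runnable sketch; the function and variable names are illustrative, not the actual quantizer code.

```python
# Illustrative sketch of the None-guard fix: normalize nodes_to_exclude to a
# list before calling .extend(), so a None argument cannot raise AttributeError.
def collect_exclusions(nodes_to_exclude=None, gemv_matches=("MatMul_12",)):
    nodes_to_exclude = nodes_to_exclude or []  # guard before extending
    nodes_to_exclude.extend(gemv_matches)
    return nodes_to_exclude

print(collect_exclusions())            # ['MatMul_12'] instead of AttributeError
print(collect_exclusions(["Conv_0"]))  # ['Conv_0', 'MatMul_12']
```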
🧹 Nitpick comments (3)
modelopt/onnx/quantization/quantize.py (1)

272-274: Filename replacement may fail with edge-case paths.

Using onnx_path.replace(".onnx", ".quant_autotune.onnx") could produce unexpected results if ".onnx" appears elsewhere in the path (e.g., /models/onnx.models/model.onnx).

💡 Safer alternative using path manipulation
+    import os
     # Export model with Q/DQ insertion
-    onnx_path_autotune = onnx_path.replace(".onnx", ".quant_autotune.onnx")
+    base, ext = os.path.splitext(onnx_path)
+    onnx_path_autotune = f"{base}.quant_autotune{ext}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/onnx/quantization/quantize.py` around lines 272 - 274, The filename
construction using onnx_path.replace(".onnx", ".quant_autotune.onnx") can
mis-replace when ".onnx" appears elsewhere in the path; change the logic that
computes onnx_path_autotune to use proper path/suffix manipulation (e.g.,
Path(onnx_path).with_suffix(".quant_autotune.onnx") or equivalent) before
calling autotuner.export_onnx and appending to intermediate_generated_files,
updating references to onnx_path_autotune, onnx_path, and the
autotuner.export_onnx call accordingly.
modelopt/onnx/utils.py (1)

175-191: Potential IndexError if input/output lists contain unexpected elements.

The list comprehension assumes inp.inputs[0] and out.outputs[0] exist when inp.inputs / out.outputs are truthy. While graphsurgeon typically ensures non-empty lists here, adding explicit length checks would make this more robust.

🛡️ Proposed defensive fix
```diff
     return [
         node
         for node in graph.nodes
-        if any(inp.inputs[0].op == "DequantizeLinear" for inp in node.inputs if inp.inputs)
-        or any(out.outputs[0].op == "QuantizeLinear" for out in node.outputs if out.outputs)
+        if any(
+            len(inp.inputs) > 0 and inp.inputs[0].op == "DequantizeLinear"
+            for inp in node.inputs
+            if inp.inputs
+        )
+        or any(
+            len(out.outputs) > 0 and out.outputs[0].op == "QuantizeLinear"
+            for out in node.outputs
+            if out.outputs
+        )
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/onnx/utils.py` around lines 175 - 191, The comprehension in
get_quantized_nodes assumes inp.inputs[0] and out.outputs[0] exist and can raise
IndexError; change the two any() guards to explicitly check length (or
truthiness plus index-safe access) before indexing (e.g., ensure len(inp.inputs)
> 0 and len(out.outputs) > 0) so you only evaluate inp.inputs[0].op ==
"DequantizeLinear" and out.outputs[0].op == "QuantizeLinear" when the lists have
at least one element; update the generator to use these safe conditions around
node.inputs and node.outputs to avoid crashes.
modelopt/onnx/quantization/autotune/workflows.py (1)

202-203: Docstring should document temp directory behavior.

The docstring for output_dir doesn't mention that when None is provided, a temporary directory is automatically created via tempfile.mkdtemp(). This is important for API consumers to understand, especially since temp directories may accumulate if keep_output_dir=True (the default).

📝 Suggested docstring update
```diff
-        output_dir: Directory for output files (state, logs, models). Created if it doesn't exist.
+        output_dir: Directory for output files (state, logs, models). Created if it doesn't exist.
+                   If None, a temporary directory is created via tempfile.mkdtemp().
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/onnx/quantization/autotune/workflows.py` around lines 202 - 203,
Update the docstring for the output_dir parameter in the function/class that
defines output_dir (the docstring in workflows.py around the autotune workflow)
to explicitly state that when output_dir is None a temporary directory is
created via tempfile.mkdtemp(), and note that the temporary directory will be
retained if keep_output_dir=True (the default), so callers may need to remove it
to avoid accumulation; reference the output_dir parameter name and the
keep_output_dir flag in the description.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@modelopt/onnx/quantization/autotune/workflows.py`:
- Around line 386-390: The log message in the cleanup branch is inverted: inside
the if not keep_output_dir block (where shutil.rmtree(output_dir) is called)
update the logger.debug message to tell users to set keep_output_dir=True to
retain the directory; specifically modify the message emitted by logger.debug
near the removal call that references output_dir and keep_output_dir so it
correctly reads that setting keep_output_dir=True will keep the directory.

In `@modelopt/onnx/quantization/quantize.py`:
- Around line 246-253: The function _find_nodes_to_quantize_autotune uses a
mutable default for intermediate_generated_files (list[str] = []); change the
signature to use None as the default (intermediate_generated_files:
Optional[list[str]] = None) and inside the function, if
intermediate_generated_files is None then set intermediate_generated_files = []
so each call gets a fresh list; update any type hints/imports if needed and
ensure all code in _find_nodes_to_quantize_autotune that appends or inspects
intermediate_generated_files works with the new initialization.
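The fix described above is the standard None-default pattern; an abridged, runnable sketch follows (the real function takes more parameters).

```python
# Abridged sketch of the suggested fix: a None default with per-call
# initialization avoids the shared-mutable-default pitfall.
def _find_nodes_to_quantize_autotune(onnx_model, intermediate_generated_files=None):
    if intermediate_generated_files is None:
        intermediate_generated_files = []  # fresh list on every call
    intermediate_generated_files.append("model.quant_autotune.onnx")
    return intermediate_generated_files

print(_find_nodes_to_quantize_autotune("m1"))  # ['model.quant_autotune.onnx']
print(_find_nodes_to_quantize_autotune("m2"))  # same result, no accumulation
```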

In `@setup.py`:
- Line 62: The dependency entry for "cuda-python" in setup.py lacks a version
constraint and the inline comment "For autotune" is misleading; change the
dependency to include a minimum version compatible with your CUDA/driver/ONNX
Runtime stack (e.g., "cuda-python>=13.0") and update the comment to accurately
state its purpose (e.g., "CUDA Python bindings for GPU/driver interactions -
ensure matches CUDA/ONNX Runtime version"). Ensure this follows the same pinning
style as other dependencies like "onnxslim>=0.1.76" and "polygraphy>=0.49.22".
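A hedged sketch of what the suggested `setup.py` entry could look like; the `>=13.0` floor is the reviewer's example value, not a verified compatibility bound, and the surrounding structure is illustrative.

```python
# Illustrative setup.py fragment with the suggested pin; values are examples.
from setuptools import setup

setup(
    name="example-package",  # placeholder, not the real package metadata
    extras_require={
        "onnx": [
            "cuda-python>=13.0",  # CUDA Python bindings used by autotune benchmarking
            "onnxslim>=0.1.76",
            "polygraphy>=0.49.22",
        ],
    },
)
```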


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0f668a3 and 52b9c31eba63d51a6169c4b6718db227ae3ab4a0.

📒 Files selected for processing (11)
  • modelopt/onnx/op_types.py
  • modelopt/onnx/quantization/__main__.py
  • modelopt/onnx/quantization/autotune/workflows.py
  • modelopt/onnx/quantization/fp8.py
  • modelopt/onnx/quantization/graph_utils.py
  • modelopt/onnx/quantization/int8.py
  • modelopt/onnx/quantization/ort_utils.py
  • modelopt/onnx/quantization/quantize.py
  • modelopt/onnx/utils.py
  • setup.py
  • tests/unit/onnx/quantization/autotune/test_region.py



Overall Assessment

This PR is well-structured and achieves its stated goals of:

  1. Integrating Auto Q/DQ into the ONNX quantization workflow
  2. Enabling calibration data to obtain correct scales for Q/DQ nodes

The changes are substantial but well-organized across multiple files. Below are my detailed review comments.

Contributor

Copilot AI left a comment


Pull request overview

Integrates ONNX Auto Q/DQ (TensorRT-driven autotuning) into the existing ONNX quantization workflow so Q/DQ placement can be derived from TensorRT profiling and then calibrated to produce real (non-random) Q/DQ scales.

Changes:

  • Added an --autotune flag (and autotune plumbing) to route INT8/FP8 quantization through the Auto Q/DQ placement workflow.
  • Introduced utilities to detect “quantized nodes” from a Q/DQ-inserted model and used this to drive node selection + ORT configuration tweaks (output quantization for certain producers).
  • Updated autotune workflow API to accept in-memory models and optionally auto-manage its output directory.

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 9 comments.

Show a summary per file
| File | Description |
|------|-------------|
| `tests/unit/onnx/quantization/autotune/test_region.py` | Updates file header metadata. |
| `setup.py` | Adds `cuda-python` to the ONNX optional dependencies to support TensorRT Python autotune benchmarking. |
| `modelopt/onnx/utils.py` | Adds a `get_quantized_nodes()` helper for extracting quantized nodes from a Q/DQ graph. |
| `modelopt/onnx/quantization/quantize.py` | Adds the autotune flag, integrates Auto Q/DQ placement, and feeds results into the INT8/FP8 quantizers. |
| `modelopt/onnx/quantization/ort_utils.py` | Extends ORT configuration to optionally allow output quantization for selected op types. |
| `modelopt/onnx/quantization/int8.py` | Adds autotune plumbing and bypasses some default heuristics when autotune is enabled. |
| `modelopt/onnx/quantization/graph_utils.py` | Fixes partial-input Q/DQ removal to patch the intended consumer branch (shared Q/DQ case). |
| `modelopt/onnx/quantization/fp8.py` | Adds autotune plumbing and bypasses some default heuristics when autotune is enabled. |
| `modelopt/onnx/quantization/autotune/workflows.py` | Allows `ModelProto` input and an optional `output_dir`, and adds optional output-dir cleanup. |
| `modelopt/onnx/quantization/__main__.py` | Adds the CLI flag `--autotune`. |
| `modelopt/onnx/op_types.py` | Adds `get_activation_ops()`, used by the autotune integration logic. |
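The `get_quantized_nodes()` helper listed above can be sketched as follows; whether it takes an onnx-graphsurgeon graph (as the review diff, which iterates over `graph.nodes`, suggests) is an assumption.

```python
# Hedged usage sketch of the new helper; the argument type is assumed from
# the review diff and may differ in the actual API.
import onnx
import onnx_graphsurgeon as gs
from modelopt.onnx.utils import get_quantized_nodes

graph = gs.import_onnx(onnx.load("model.quant.onnx"))
quantized_nodes = get_quantized_nodes(graph)
print([node.name for node in quantized_nodes])
```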



@modelopt-bot modelopt-bot left a comment


Review completed. I've posted several inline comments on specific lines. Overall this is a well-structured PR that successfully integrates Auto Q/DQ into the ONNX quantization workflow. Key highlights include good integration via _find_nodes_to_quantize_autotune, flexible API changes for in-memory models, and an important bug fix for shared Q/DQ pair handling. Please address the inline comments regarding documentation completion, code organization suggestions, and the copyright year consistency. Recommend approving with minor changes.

@gcunhase gcunhase changed the title [OMNIML-3252][ONNX] Add real Q/DQ scales in Autotune Draft: [OMNIML-3252][ONNX] Add real Q/DQ scales in Autotune Mar 2, 2026
@gcunhase gcunhase force-pushed the dev/gcunhasergio/autotune_real_qdq_scales branch from ab4c5a3 to c23208f Compare March 2, 2026 20:40
@codecov

codecov Bot commented Mar 2, 2026

Codecov Report

❌ Patch coverage is 46.66667% with 80 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.11%. Comparing base (fe83270) to head (de2ef53).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Missing lines |
|--------------------------|---------|---------------|
| `modelopt/onnx/quantization/autotune/autotuner_base.py` | 32.72% | 37 ⚠️ |
| `modelopt/onnx/quantization/autotune/utils.py` | 46.42% | 15 ⚠️ |
| `modelopt/onnx/quantization/quantize.py` | 31.25% | 11 ⚠️ |
| `modelopt/onnx/quantization/autotune/workflows.py` | 16.66% | 10 ⚠️ |
| `modelopt/onnx/quantization/graph_utils.py` | 66.66% | 3 ⚠️ |
| `modelopt/onnx/quantization/int8.py` | 84.61% | 2 ⚠️ |
| `modelopt/onnx/op_types.py` | 50.00% | 1 ⚠️ |
| `modelopt/onnx/quantization/autotune/__main__.py` | 50.00% | 1 ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main     #951      +/-   ##
==========================================
- Coverage   70.25%   70.11%   -0.14%     
==========================================
  Files         220      221       +1     
  Lines       25368    25459      +91     
==========================================
+ Hits        17822    17851      +29     
- Misses       7546     7608      +62     
```

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 8 comments.



@gcunhase gcunhase changed the title Draft: [OMNIML-3252][ONNX] Add real Q/DQ scales in Autotune [OMNIML-3252][ONNX] Add real Q/DQ scales in Autotune Mar 5, 2026
@gcunhase gcunhase force-pushed the dev/gcunhasergio/autotune_real_qdq_scales branch from c147979 to 0a32bea Compare March 9, 2026 16:58
gcunhase added 18 commits March 11, 2026
@gcunhase gcunhase force-pushed the dev/gcunhasergio/autotune_real_qdq_scales branch from 1be0cf2 to de2ef53 Compare March 11, 2026 14:44
@gcunhase gcunhase enabled auto-merge (squash) March 11, 2026 14:44
@gcunhase gcunhase merged commit 26cad67 into NVIDIA:main Mar 11, 2026
40 checks passed
gcunhase added a commit that referenced this pull request Mar 12, 2026
### What does this PR do?

**Type of change**: documentation

**Overview**: This PR updates the documentation and does some folder
re-structuring and file re-naming related to
#951.

### Usage

Documentation

### Testing

Documentation

### Before your PR is "*Ready for review*"

- Is this change backward compatible?: ✅
- If you copied code from any other sources or added a new PIP
dependency, did you follow guidance in `CONTRIBUTING.md`: ✅
- Did you write any new necessary tests?: N/A <!--- Mandatory for new
features or examples. -->
- Did you update
[Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?:
✅ (renamed `AutoQDQ` to `Autotune`)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
  * Renamed AutoQDQ to Autotune across guides and changelog.
  * Updated Autotune guide descriptions and wording.
* Added a new section on optimizing Q/DQ node placement with Autotune,
including CLI usage and API links (appears twice in one README).
  * Applied minor grammar and capitalization corrections.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: gcunhase <4861122+gcunhase@users.noreply.github.com>
DrXuQian pushed a commit to DrXuQian/Model-Optimizer that referenced this pull request Mar 13, 2026
kevalmorabia97 pushed a commit that referenced this pull request Apr 7, 2026
### What does this PR do?

Type of change: Bug fix

Regression bug introduced by the Autotune integration into ModelOpt ONNX
quantization (#951),
making ModelOpt dependent on TensorRT in all scenarios. This PR fixes
this issue by requiring TensorRT only when `--autotune` is enabled.

### Usage

```bash
$ python -m modelopt.onnx.quantization --onnx_path=${MODEL_NAME}.onnx
```

### Testing
See bug 6056809.

### Before your PR is "*Ready for review*"

Make sure you read and follow [Contributor
guidelines](https://github.com/NVIDIA/Model-Optimizer/blob/main/CONTRIBUTING.md)
and your commits are signed (`git commit -s -S`).

Make sure you read and follow the [Security Best
Practices](https://github.com/NVIDIA/Model-Optimizer/blob/main/SECURITY.md#security-coding-practices-for-contributors)
(e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(...,
weights_only=False)`, `pickle`, etc.).

- Is this change backward compatible?: ✅
- If you copied code from any other sources or added a new PIP
dependency, did you follow guidance in `CONTRIBUTING.md`: N/A
- Did you write any new necessary tests?: ✅
- Did you update
[Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?:
N/A

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Autotune dependency failures now surface as clearer runtime errors
instead of only logging warnings.

* **Chores**
* Centralized autotune presets and numeric defaults into a shared
configuration.
* Core autotune components are conditionally exposed so initialization
succeeds when optional acceleration libraries are absent.
  * Deferred autotune imports to runtime to improve failure handling.

* **Tests**
* Added a test ensuring the quantization CLI/parser initializes
correctly without optional acceleration libraries.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: gcunhase <4861122+gcunhase@users.noreply.github.com>
kinjalpatel27 pushed a commit that referenced this pull request Apr 13, 2026