Support YAML quant recipe in PTQ and remove first/last layer modifier code #4503

Merged

jenchen13 merged 4 commits into NVIDIA:main from jenchen13:jennifchen/ptq_recipe

Apr 30, 2026
Conversation

@jenchen13 (Contributor) commented Apr 28, 2026

What does this PR do ?

Support a YAML quant recipe in PTQ, which eliminates the need for code that modifies first/last layers in the quant config. Users can now skip layers directly in the YAML, e.g.

Disable one layer entirely:

  - quantizer_name: '*layers.42.*'
    enable: false

Disable one module on one layer:

  - quantizer_name: '*layers.42.mixer.o_proj*'
    enable: false

Disable a few specific layers:

  - quantizer_name: '*layers.42.mixer.o_proj*'
    enable: false
  - quantizer_name: '*layers.43.mixer.o_proj*'
    enable: false
  • quantize.py has a new --recipe flag
  • quantize.sh works with either a quant config or a recipe, e.g. ... quantize.sh <model> NVFP4_DEFAULT_CONFIG or ... quantize.sh <model> /path/to/modelopt_recipe
  • remove --force-all-expert-routing, which is a no-op
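The quantizer_name entries above are glob-style wildcard patterns. As an illustrative sketch (not modelopt's actual matching code — the module names below are made up), Python's fnmatch shows which quantizer names a pattern like this would disable:

```python
from fnmatch import fnmatch

# Hypothetical quantizer names; the pattern mirrors the recipe entries above.
names = [
    "decoder.layers.42.mixer.o_proj.input_quantizer",
    "decoder.layers.42.mixer.q_proj.input_quantizer",
    "decoder.layers.43.mixer.o_proj.weight_quantizer",
]

pattern = "*layers.42.mixer.o_proj*"
disabled = [n for n in names if fnmatch(n, pattern)]
print(disabled)  # only the layer-42 o_proj quantizer matches
```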

Deprecation

Deprecating these flags in quantize.py: --num-first-layers-to-skip-quant, --num-last-layers-to-skip-quant, --force-all-expert-routing

  • --force-all-expert-routing was already a no-op and in the process of being deprecated
  • use the YAML recipe instead of --num-first-layers-to-skip-quant and --num-last-layers-to-skip-quant to skip layers

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 28, 2026 19:58
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot

copy-pr-bot Bot commented Apr 28, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@jenchen13 jenchen13 marked this pull request as ready for review April 28, 2026 20:46
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 28, 2026 20:46
@jenchen13
Contributor Author

/ok to test 1aa8e99

Contributor

@ChenhanYu ChenhanYu left a comment


Thanks for moving layer skipping into the recipe — much cleaner than the imperative get_first/last_layers_disabled_config helpers. Requesting changes for one blocker plus a few smaller items.

🚨 Attention required — public-CLI contract change

This PR removes three documented flags (--num-first-layers-to-skip-quant, --num-last-layers-to-skip-quant, --force-all-expert-routing) with no deprecation window. Worth calling out in the PR description / release notes, not buried in a refactor. Suggest keeping the flags one cycle with a DeprecationWarning pointing users at the recipe equivalent.
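The one-cycle deprecation the review suggests could look roughly like this (a sketch, not the PR's actual code; the flag names are taken from the description):

```python
import argparse
import warnings

parser = argparse.ArgumentParser()
parser.add_argument("--recipe", default=None, help="Path to a YAML quant recipe.")
# Deprecated flags kept for one release cycle before removal.
parser.add_argument("--num-first-layers-to-skip-quant", type=int, default=None)
parser.add_argument("--num-last-layers-to-skip-quant", type=int, default=None)

args = parser.parse_args(["--num-first-layers-to-skip-quant", "2"])
if args.num_first_layers_to_skip_quant is not None:
    warnings.warn(
        "--num-first-layers-to-skip-quant is deprecated; disable layers in the "
        "YAML recipe instead (quantizer_name: '*layers.N.*', enable: false).",
        DeprecationWarning,
    )
```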

🔗 Upstream dependency not yet closed out

The recipe refactoring on the ModelOpt side (NVIDIA/Model-Optimizer#1253) doesn't look merged/closed yet. from modelopt.recipe import ModelOptPTQRecipe, load_recipe will fail against any modelopt build that predates that PR. Please confirm the minimum modelopt version this depends on, gate or pin accordingly, and ideally land/tag #1253 first so CI here isn't fragile.

🐛 Blocker — quantize.py will crash on the non-recipe path

Lines 221–232 of examples/post_training/modelopt/quantize.py (untouched by this PR) still do:

if args.num_first_layers_to_skip_quant is not None:
    mtq_config = get_first_layers_disabled_config(...)
if args.num_last_layers_to_skip_quant is not None:
    mtq_config = get_last_layers_disabled_config(...)

But this PR removed both argparse args (so args.num_first_layers_to_skip_quant doesn't exist) and both helper functions. The first call to get_modelopt_torch_quantization_config() without --recipe will hit AttributeError, and even with the attrs stubbed the helpers are gone (NameError). Please either delete this block too or restore the helpers/args. Add a smoke test that runs the legacy --export-quant-cfg NVFP4_DEFAULT_CFG path so this doesn't regress.

🧪 Test coverage request

Before merging, please run this end-to-end against nmm-sandbox PTQ and QAD test suites — both the new --recipe path and the legacy --export-quant-cfg path (the one currently broken above). The shell-side dispatch in quantize.sh also needs both branches exercised.

Inline notes

examples/post_training/modelopt/quantize.py ~L170 — assert isinstance(recipe, ModelOptPTQRecipe) is stripped under python -O. For user-facing input validation, prefer:

if not isinstance(recipe, ModelOptPTQRecipe):
    raise TypeError(f"Expected PTQ recipe, but got {type(recipe).__name__} from {args.recipe}")

examples/post_training/modelopt/quantize.py ~L173–174 — only --export-kv-cache-quant gets a "being ignored" warning, but the --recipe help text says both --export-quant-cfg and --export-kv-cache-quant are ignored. If a user passes a non-default --export-quant-cfg together with --recipe, they get no feedback. Suggest one print_rank_0 that lists every overridden arg. Also, the f-string is ~120 chars — likely to fail autoformat.sh (black 100-col).

examples/post_training/modelopt/quantize.py L5 — import copy is now unused (only used by the removed helpers). Please remove.

examples/post_training/modelopt/quantize.sh L23–32 — heuristic (/ or .yaml/.yml → recipe) is reasonable since the predefined configs are uppercase-only identifiers, but please add a one-line comment noting that contract so a future contributor doesn't add a config like models/foo and silently break it.
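The dispatch contract being asked about could be annotated roughly as follows (a Python sketch of the heuristic, not the actual shell code):

```python
def looks_like_recipe(arg: str) -> bool:
    """quantize.sh dispatch contract: an argument containing a path
    separator or ending in .yaml/.yml is treated as a recipe file;
    anything else is assumed to be a predefined (uppercase-only)
    quant config identifier such as NVFP4_DEFAULT_CONFIG."""
    return "/" in arg or arg.endswith((".yaml", ".yml"))

print(looks_like_recipe("/path/to/modelopt_recipe"))  # True (path separator)
print(looks_like_recipe("recipe.yaml"))               # True (YAML extension)
print(looks_like_recipe("NVFP4_DEFAULT_CONFIG"))      # False (config name)
```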

Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
@jenchen13
Contributor Author

/claude review

Contributor

@claude claude Bot left a comment


Two issues found:

  1. Syntax error (blocker): Line 170 in quantize.py has a comma instead of a colon on the if not isinstance(...) line, which will cause a SyntaxError at import time.
  2. Dead import: import copy on line 5 is no longer used after removing the layer-skip helper functions.

jenchen13 and others added 2 commits April 29, 2026 10:26
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
@jenchen13
Contributor Author

/claude review

Contributor

@claude claude Bot left a comment


LGTM

@jenchen13
Contributor Author

/ok to test 0610539

@ChenhanYu ChenhanYu self-requested a review April 30, 2026 17:56
@jenchen13 jenchen13 added this pull request to the merge queue Apr 30, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25181011042

Merged via the queue into NVIDIA:main with commit 1a83320 Apr 30, 2026
69 checks passed
@jenchen13 jenchen13 deleted the jennifchen/ptq_recipe branch April 30, 2026 18:34