
Draft: Support mamba layer fine grained offloading with cudagraphs#3204

Draft
jiemingz wants to merge 9 commits into NVIDIA:main from jiemingz:jiemingz/fg_offload_cudagraph

Conversation

@jiemingz (Contributor) commented Feb 2, 2026

What does this PR do ?

Adds support for offloading the Mamba layer's input and output projections, and for capturing them with CUDA graphs. A follow-up PR will add a unit test as part of a larger refactor of the CUDA graph unit tests.
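
At a high level, this kind of offloading pairs each offloaded activation with a statically allocated pinned-CPU mirror, so the device-to-host and host-to-device copies have fixed addresses and can be captured inside a CUDA graph. The sketch below is a minimal, hypothetical illustration of that idea; the class and buffer names are illustrative, not Megatron-Core APIs (the PR's actual logic appears to live in megatron/core/pipeline_parallel/fine_grained_activation_offload.py):

    # Hypothetical sketch of activation offloading under CUDA graphs;
    # names are illustrative and not part of Megatron-Core.
    import torch

    class OffloadBuffer:
        """Pairs a static GPU tensor with a pinned-CPU mirror so the
        D2H/H2D copies have fixed addresses, a CUDA graph requirement."""

        def __init__(self, shape, dtype):
            self.device_buf = torch.empty(shape, dtype=dtype, device="cuda")
            self.host_buf = torch.empty(shape, dtype=dtype, pin_memory=True)

        def offload(self):
            # Async device-to-host copy; safe because host_buf is pinned.
            self.host_buf.copy_(self.device_buf, non_blocking=True)

        def reload(self):
            # Async host-to-device prefetch before the backward pass.
            self.device_buf.copy_(self.host_buf, non_blocking=True)

    buf = OffloadBuffer((4, 1024), torch.bfloat16)
    buf.device_buf.normal_()  # stands in for the projection's activation

    buf.offload()             # warm-up run before graph capture
    torch.cuda.synchronize()

    # Capture the offload copy once, then replay it each iteration; the
    # replay re-runs the identical copy because the buffer addresses
    # never change.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        buf.offload()
    graph.replay()
    torch.cuda.synchronize()
    assert torch.equal(buf.host_buf, buf.device_buf.cpu())

In a full implementation the reload copy would similarly be issued (or captured) on a side stream ahead of the backward pass, so the prefetch overlaps with compute.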

⚠️ For major changes (either in lines of code or in their impact), please first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes to megatron/core. For changes outside megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review may be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into a core_r* release branch, select Cherry-pick after this PR has been merged to open a new PR into that release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr or core-nemo can merge your PR.

@jiemingz jiemingz requested review from a team as code owners February 2, 2026 15:20
copy-pr-bot (Bot) commented Feb 2, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g ko3n1g requested a review from a team February 2, 2026 15:20
@jiemingz jiemingz changed the title from "Support mamba model offloading with cudagraphs" to "draft: Support mamba model offloading with cudagraphs" on Feb 2, 2026
@jiemingz jiemingz marked this pull request as draft February 2, 2026 15:22
@jiemingz jiemingz force-pushed the jiemingz/fg_offload_cudagraph branch from 1398b01 to da13baf on February 2, 2026 15:30
Signed-off-by: Jimmy Zhang <jiemingz@nvidia.com>
@jiemingz jiemingz force-pushed the jiemingz/fg_offload_cudagraph branch from da13baf to fc05ed8 on February 2, 2026 15:42
Signed-off-by: Jimmy Zhang <jiemingz@nvidia.com>
@jiemingz jiemingz self-assigned this Feb 3, 2026
@jiemingz jiemingz changed the title from "draft: Support mamba model offloading with cudagraphs" to "Support mamba model offloading with cudagraphs" on Feb 3, 2026
@jiemingz jiemingz marked this pull request as ready for review February 3, 2026 14:46
@jiemingz jiemingz changed the title from "Support mamba model offloading with cudagraphs" to "Support mamba layer fine grained offloading with cudagraphs" on Feb 3, 2026
Signed-off-by: Jimmy Zhang <jiemingz@nvidia.com>
@dimapihtar dimapihtar requested review from wdykas and yanring February 4, 2026 15:31
@dimapihtar dimapihtar added the Expert Review [deprecated] label on Feb 4, 2026
@dimapihtar (Contributor) commented:

/ok to test 4fd8b86

@ko3n1g ko3n1g added this to the Core 0.16 milestone Feb 4, 2026
Comment threads (outdated) on megatron/core/pipeline_parallel/fine_grained_activation_offload.py
Signed-off-by: Jieming Zhang <jiemingz@nvidia.com>
Signed-off-by: Jimmy Zhang <jiemingz@nvidia.com>
Signed-off-by: Jieming Zhang <jiemingz@nvidia.com>
@jiemingz (Contributor, Author) commented Feb 6, 2026

/ok to test c437085

jiemingz and others added 2 commits February 6, 2026 10:33
Signed-off-by: Jimmy Zhang <jiemingz@nvidia.com>
@jiemingz (Contributor, Author) commented Feb 6, 2026

/ok to test 494e675

@jiemingz jiemingz changed the title from "Support mamba layer fine grained offloading with cudagraphs" to "Draft: Support mamba layer fine grained offloading with cudagraphs" on Mar 4, 2026
@jiemingz jiemingz marked this pull request as draft March 4, 2026 02:57
@Phlip79 Phlip79 removed the request for review from a team April 15, 2026 16:17

Labels

complexity: medium · Expert Review [deprecated]