
Conversation

@chinmaydk99

Proposed changes

This PR enables group mode (variable-length attention) kernel generation for PyTorch's CK SDPA backend.

Checklist

Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

  • I have added tests relevant to the introduced functionality, and the unit tests are passing locally
  • I have added the test to the REGRESSION_TESTS list defined at the top of tests/CMakeLists.txt, IF the test takes more than 30 seconds to run.
  • I have added inline documentation which helps the maintainers understand the motivation
  • I have removed the stale documentation which is no longer relevant after this pull request
  • (If this change is user-facing) I have added release notes which provide the end users with a brief summary of the improvement from this pull request
  • I have run clang-format on all changed files
  • Any dependent changes have been merged

Discussion

The change itself is minimal (a single-line deletion) but enables a significant feature: variable-length attention support for ROCm users via PyTorch's torch.nn.attention.varlen API.
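For context, "group mode" here means running attention over a batch of sequences with different lengths packed back-to-back in one call, instead of padding them to a common length. The sketch below is not the CK group-mode kernel or the new varlen API; it is a minimal reference, written against the existing torch.nn.functional.scaled_dot_product_attention, that shows the semantics such a kernel implements. The helper name, tensor layout, and sequence lengths are illustrative assumptions, not anything from this PR.

```python
# Minimal sketch of variable-length ("group mode") attention semantics.
# NOT the CK kernel or the torch.nn.attention.varlen API; it only shows
# what such a kernel computes, using plain PyTorch ops for clarity.
import torch
import torch.nn.functional as F

def varlen_attention_reference(q, k, v, seq_lens):
    """q, k, v: [total_tokens, num_heads, head_dim], sequences packed
    back-to-back along dim 0; seq_lens: list of per-sequence lengths."""
    total = q.shape[0]
    assert sum(seq_lens) == total
    # Block-diagonal boolean mask so tokens only attend within their own
    # sequence (a real varlen kernel never materializes this mask).
    mask = torch.zeros(total, total, dtype=torch.bool, device=q.device)
    offset = 0
    for n in seq_lens:
        mask[offset:offset + n, offset:offset + n] = True
        offset += n
    # SDPA expects [batch, heads, tokens, head_dim]; treat the packed
    # token dimension as a single batch of size 1.
    q_, k_, v_ = (t.permute(1, 0, 2).unsqueeze(0) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q_, k_, v_, attn_mask=mask)
    return out.squeeze(0).permute(1, 0, 2)  # back to [total_tokens, heads, dim]

# Example: three sequences of lengths 3, 5, and 2 packed into 10 tokens.
heads, dim = 4, 64
lens = [3, 5, 2]
q = torch.randn(sum(lens), heads, dim)
k = torch.randn(sum(lens), heads, dim)
v = torch.randn(sum(lens), heads, dim)
out = varlen_attention_reference(q, k, v, lens)
print(out.shape)  # torch.Size([10, 4, 64])
```

A dedicated group-mode kernel produces the same per-sequence result but walks cumulative sequence offsets directly, so memory and compute scale with the actual token counts rather than with a padded square mask.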

@poyenc (Contributor) commented Jan 13, 2026

May I know why the group-mode kernels were not included previously, and why they are required now?

@chinmaydk99 force-pushed the ck-enable-varlen-group-mode branch from 4670a96 to 6258488 on January 13, 2026 at 22:27