[Inductor] Make combo kernel MAX_NUM_ARGS configurable #166274
Closed
andyanwang wants to merge 1 commit into gh/andyanwang/40/base
Conversation
Differential Revision: [D85509352](https://our.internmc.facebook.com/intern/diff/D85509352/) [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166274
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 8e29b50 with merge base c7eee49.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This was referenced Oct 26, 2025
eellison approved these changes Oct 27, 2025
Contributor
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Collaborator
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
pytorchmergebot pushed a commit that referenced this pull request Oct 29, 2025
…uts (#166275) MTIA Triton currently has a limit: it cannot support cases with too many input/output buffers. This PR adds that limit to prevent large fusions with many input/output buffers. Differential Revision: [D85509351](https://our.internmc.facebook.com/intern/diff/D85509351/) Pull Request resolved: #166275 Approved by: https://github.com/eellison ghstack dependencies: #166274
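A minimal sketch of the kind of guard that commit describes: reject a combo-kernel fusion when the combined input/output buffer count would exceed a backend limit. The names `can_fuse_into_combo_kernel`, `max_io_buffers`, and `KernelGroup` are hypothetical illustrations, not the actual Inductor API, and the cap of 250 is assumed for the example.

```python
# Hypothetical sketch of a fusion guard; not the actual Inductor code.
from dataclasses import dataclass, field

@dataclass
class KernelGroup:
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def can_fuse_into_combo_kernel(group: KernelGroup, candidate: KernelGroup,
                               max_io_buffers: int = 250) -> bool:
    """Reject a fusion when the merged kernel would exceed the
    backend's input/output buffer limit (e.g. on MTIA Triton)."""
    combined = (len(set(group.inputs + candidate.inputs))
                + len(set(group.outputs + candidate.outputs)))
    return combined <= max_io_buffers

# Example: a group already holding 200 distinct buffers cannot absorb
# a 61-buffer candidate without crossing the assumed 250-buffer cap.
big = KernelGroup(inputs=[f"in{i}" for i in range(150)],
                  outputs=[f"out{i}" for i in range(50)])
small = KernelGroup(inputs=[f"x{i}" for i in range(60)], outputs=["y"])
print(can_fuse_into_combo_kernel(big, small))  # False: 261 > 250
```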
tianrengao pushed a commit that referenced this pull request Oct 30, 2025
The MAX_NUM_ARGS of ComboKernel is currently a fixed number. We need to tune this value to avoid large fusions on MTIA, so this PR makes it configurable. Differential Revision: [D85509352](https://our.internmc.facebook.com/intern/diff/D85509352/) Pull Request resolved: #166274 Approved by: https://github.com/eellison
BoyuanFeng pushed a commit that referenced this pull request Oct 31, 2025
…uts (#166275) MTIA Triton currently has a limit: it cannot support cases with too many input/output buffers. This PR adds that limit to prevent large fusions with many input/output buffers. Differential Revision: [D85509351](https://our.internmc.facebook.com/intern/diff/D85509351/) Pull Request resolved: #166275 Approved by: https://github.com/eellison ghstack dependencies: #166274
Khanaksahu pushed a commit to Khanaksahu/pytorch-fork that referenced this pull request Nov 17, 2025
The MAX_NUM_ARGS of ComboKernel is currently a fixed number. We need to tune this value to avoid large fusions on MTIA, so this PR makes it configurable. Pull Request resolved: pytorch/pytorch#166274 ghstack-source-id: 318804069 exported-using-ghexport Differential Revision: [D85509352](https://our.internmc.facebook.com/intern/diff/D85509352/)
The MAX_NUM_ARGS of ComboKernel is currently a fixed number. We need to tune this value to avoid large fusions on MTIA, so this PR makes it configurable.
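A minimal sketch of the idea, assuming the limit moves from a hard-coded class constant into an Inductor-style config module. The knob name `combo_kernel_max_num_args`, the `_Config` holder, and the method names below are illustrative assumptions; check torch._inductor.config for the actual option introduced by this PR.

```python
# Hypothetical sketch of making a hard-coded class constant configurable;
# names are assumed for illustration, not the actual Inductor API.
class _Config:
    combo_kernel_max_num_args: int = 250  # default cap; tunable per backend

config = _Config()

class ComboKernel:
    @staticmethod
    def max_num_args() -> int:
        # Read the limit from config at call time instead of baking in a
        # fixed MAX_NUM_ARGS, so backends like MTIA can tune it down.
        return config.combo_kernel_max_num_args

    def can_add_subkernel(self, current_num_args: int, new_args: int) -> bool:
        # Stop growing the combo kernel once the argument cap is reached.
        return current_num_args + new_args <= self.max_num_args()

# Example: lower the cap for a backend with a tighter argument limit.
config.combo_kernel_max_num_args = 64
print(ComboKernel().can_add_subkernel(60, 8))  # False under the tuned cap
```

Reading the cap through config at call time, rather than at import time, means a backend can adjust it (or a test can override it) without reloading the module.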
Stack from ghstack (oldest at bottom):
Differential Revision: D85509352
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben