[inductor] pre grad graph bisecting #166344

shunting314 wants to merge 2 commits into gh/shunting314/249/base from
Conversation
[ghstack-poisoned]
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166344

✅ No failures as of commit cb7f67f with merge base a076b4d. This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
import torch._inductor.config as inductor_config


if inductor_config.test_configs.bisect_pre_grad_graph:
    BACKENDS["inductor"].insert(0, BisectSubsystem("pre_grad_graph"))
```
This will insert on each invocation. Can we make sure we delete this on exit? You can stick it in `cleanup` below if you need.
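The reviewer's concern can be sketched as follows. This is a minimal stand-in, not the real PyTorch internals: `BACKENDS`, `BisectSubsystem`, and the wrapper function are hypothetical stand-ins used only to show the insert-then-remove pattern that prevents duplicate registrations across invocations.

```python
# Hypothetical sketch of the suggested fix: register the "pre_grad_graph"
# subsystem when bisecting starts and always remove it on exit, so repeated
# invocations do not accumulate duplicate entries in the registry.
# BACKENDS and BisectSubsystem are stand-ins, not the real inductor internals.

class BisectSubsystem:
    def __init__(self, name):
        self.name = name

BACKENDS = {"inductor": [BisectSubsystem("lowerings")]}

def with_pre_grad_graph_bisecting(fn):
    subsystem = BisectSubsystem("pre_grad_graph")
    BACKENDS["inductor"].insert(0, subsystem)
    try:
        return fn()
    finally:
        # Cleanup on exit, even if fn() raises.
        BACKENDS["inductor"].remove(subsystem)

# While fn runs, the registry holds two subsystems; afterwards it is back to one.
during = with_pre_grad_graph_bisecting(lambda: len(BACKENDS["inductor"]))
```

The `try`/`finally` guarantees the registry is restored even when the bisected compilation fails, which is exactly the case a bisector exercises.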
A few things to note:

1. Customers like vLLM use a custom backend (e.g. `VllmBackend`), split the graph, and call `standalone_compile` for each split. If we let the bisector override the backend, we won't bisect through the custom backend. `test_configs.bisect_keep_custom_backend_for_inductor` is used to keep the custom backend if we are bisecting for inductor.
2. pre_grad_graph bisecting and lowering bisecting so far do not compose well with each other, since an issue may be captured by whichever one we try first. `test_configs.bisect_pre_grad_graph` is used to enable the 'pre_grad_graph' bisecting.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben Lucaskabela

[ghstack-poisoned]
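Point 1 above can be illustrated with a pure-Python stand-in. None of this is the real vLLM or PyTorch code: `standalone_compile`, `custom_backend`, and the `"|"` split markers are all hypothetical, used only to show why a graph-splitting backend must stay in the loop for bisection to see each split.

```python
# Pure-Python stand-in: a custom backend splits the graph and compiles each
# piece separately. If the bisector replaced this backend wholesale, it would
# never bisect through the individual splits. All names here are illustrative.

def standalone_compile(subgraph):
    # Stand-in for compiling one split of the graph.
    return f"compiled({subgraph})"

def custom_backend(graph):
    # Pretend "|" marks the backend's graph-split points.
    splits = graph.split("|")
    return [standalone_compile(s) for s in splits]

compiled = custom_backend("a|b|c")
```

Keeping the custom backend (the `bisect_keep_custom_backend_for_inductor` case) means the bisector observes each `standalone_compile` call rather than one opaque replacement backend.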
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Is the new config sufficient for using the compiler bisector with vLLM, or do we have other issues?
Pull Request resolved: pytorch#166344
Approved by: https://github.com/eellison
Yeah, the other one
Stack from ghstack (oldest at bottom):
A few things to note:
1. `test_configs.bisect_keep_custom_backend_for_inductor` is used to keep the custom backend if we are bisecting for inductor.
2. `test_configs.bisect_pre_grad_graph` is used to enable the 'pre_grad_graph' bisecting.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @Lucaskabela
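The core of any subsystem bisector, including the pre-grad-graph one discussed here, is a binary search for the smallest prefix of passes or nodes that still reproduces a failure. The sketch below is generic, not the actual `CompilerBisector` implementation; `fails_with_first_n` is a hypothetical probe that reruns the workload with only the first `n` items enabled.

```python
# Generic sketch of subsystem bisecting: binary-search for the first count of
# enabled items (graph passes, nodes, lowerings, ...) at which the failure
# appears. fails_with_first_n(n) is a stand-in for rerunning the workload
# with only the first n items applied.

def bisect(num_items, fails_with_first_n):
    lo, hi = 0, num_items
    while lo < hi:
        mid = (lo + hi) // 2
        if fails_with_first_n(mid):
            hi = mid  # failure reproduces with mid items; culprit is <= mid
        else:
            lo = mid + 1  # first mid items are fine; culprit is beyond them
    return lo  # smallest count at which the failure first appears

culprit = bisect(10, lambda n: n >= 7)
```

This also illustrates the composition caveat in point 2 above: if two bisecting dimensions can each capture the same issue, whichever search runs first will claim it, so the two searches cannot simply be nested.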