[Inductor-FX] Support symbol and dynamic scalar graph inputs and outputs #163596
blaine-rister wants to merge 19 commits into main from
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163596
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 61c7d4c with merge base da05aa7. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
@pytorchbot merge
Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; the first few are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu). Details for the Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: #163596. Approved by: https://github.com/angelayi, https://github.com/jansel
# Problems

This PR fixes a few edge cases that the FX converter missed related to dynamic shapes.

1. Inductor graphs can sometimes take `sympy.Symbol` inputs. We have logic to convert these to FX placeholder nodes. However, this logic did not update the `self.expr_to_proxy` table mapping symbols to proxy nodes. (There was existing logic to do this for `ir.TensorBox` inputs, but not `sympy.Symbol`.) This caused sympy tracing to fail when these symbol inputs were used in other expressions.
2. We lacked codegen for `ShapeAsConstantBuffer`. This IR node is seen when the graph input or output is a scalar computed from dynamic shapes.
# Fixes

a. Update `self.expr_to_proxy` when generating placeholders for `sympy.Symbol` inputs. Change `SymbolBuffer.get_example` to convert the symbol to a `torch.SymInt`, so we can populate `meta["val"]` correctly and use the value in other computations.
b. Support `ShapeAsConstantBuffer` by tracing the sympy expression.
c. Move output generation inside the metadata hook, allowing us to populate `meta["val"]` for the nodes computing `ShapeAsConstantBuffer`.
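Fix (a) is essentially a bookkeeping change. This pure-sympy sketch (all names hypothetical, not the converter's actual code) illustrates why the registration matters: tracing a sympy expression can only succeed if every free symbol already maps to a proxy.

```python
import sympy

# Hypothetical sketch of the expr_to_proxy bookkeeping. When a
# sympy.Symbol graph input becomes a placeholder, it must also be
# recorded in expr_to_proxy; otherwise tracing any expression that
# references it fails, which is the bug fix (a) addresses.
class ConverterSketch:
    def __init__(self):
        self.expr_to_proxy = {}  # sympy expr -> placeholder/proxy

    def add_symbol_placeholder(self, sym):
        proxy = f"placeholder({sym.name})"  # stand-in for an fx.Proxy
        self.expr_to_proxy[sym] = proxy     # the previously missing step
        return proxy

    def trace_expr(self, expr):
        # Tracing requires every free symbol to be known already.
        missing = expr.free_symbols - self.expr_to_proxy.keys()
        if missing:
            raise KeyError(f"untracked symbols: {missing}")
        return f"traced({expr})"

s0 = sympy.Symbol("s0")
conv = ConverterSketch()
conv.add_symbol_placeholder(s0)
print(conv.trace_expr(2 * s0))  # succeeds because s0 was registered
```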
# Test plan

Added several new CI tests:

1. `torch.cond` with dynamic shapes. This exposes both issues, as the predicate is a `ShapeAsConstantBuffer` and one of the subgraphs uses a symbol input, due to the closure. Also tests when the parent and subgraphs have different input shapes.
2. Output dynamic shape scalar. This tests `ShapeAsConstantBuffer` as an output.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
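The second test scenario can be sketched as follows. This is a hypothetical, minimal version, not the CI test itself: `dynamic=True` marks the input shapes as symbolic, so the returned scalar is the kind of value Inductor represents as a `ShapeAsConstantBuffer`; `backend="eager"` keeps the sketch runnable without a full Inductor toolchain.

```python
import torch

# Hypothetical sketch of "output dynamic shape scalar": the function's
# return value is a scalar computed from a (symbolic) input dimension.
def returns_shape_scalar(x):
    return x.shape[0] * 2

# dynamic=True makes shapes symbolic; backend="eager" avoids requiring
# the Inductor/C++ toolchain for this sketch.
compiled = torch.compile(returns_shape_scalar, dynamic=True, backend="eager")
result = compiled(torch.randn(5, 3))  # expected: 10
```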