
[Inductor-FX] Support symbol and dynamic scalar graph inputs and outputs#163596

Closed
blaine-rister wants to merge 19 commits into main from brister/fx_cond

Conversation

@blaine-rister
Contributor

@blaine-rister commented Sep 23, 2025

# Problems

This PR fixes a few edge cases that the FX converter missed related to dynamic shapes.

  1. Inductor graphs can sometimes take `sympy.Symbol` inputs. We have logic to convert these to FX placeholder nodes. However, this logic did not update the `self.expr_to_proxy` table mapping symbols to proxy nodes. (There was existing logic to do this for `ir.TensorBox` inputs, but not `sympy.Symbol`.) This caused sympy tracing to fail when these symbol inputs were used in other expressions.

  2. We lacked codegen for `ShapeAsConstantBuffer`. This IR node is seen when the graph input or output is a scalar computed from dynamic shapes.
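Issue (1) can be illustrated with a minimal sketch. The `MiniConverter` class and its string "proxies" below are hypothetical stand-ins for the FX converter's internals; only `sympy` is real. The point is that a symbol-to-proxy table which is never populated for placeholder symbols makes tracing of any compound expression over those symbols fail:

```python
import sympy

# Hypothetical stand-in for the converter; `expr_to_proxy` mirrors the role
# of self.expr_to_proxy in the real FX converter.
class MiniConverter:
    def __init__(self):
        self.expr_to_proxy = {}  # sympy expr -> "proxy" (here just a string)

    def add_placeholder(self, sym, register=True):
        proxy = f"placeholder({sym.name})"
        if register:  # the fix: record the symbol so later expressions can find it
            self.expr_to_proxy[sym] = proxy
        return proxy

    def trace_expr(self, expr):
        if expr in self.expr_to_proxy:
            return self.expr_to_proxy[expr]
        if expr.is_Integer:
            return str(int(expr))
        if isinstance(expr, sympy.Symbol):
            # The original failure mode: an unregistered symbol input.
            raise KeyError(f"symbol {expr} has no proxy")
        # Recurse on compound expressions (Add, Mul, ...).
        args = [self.trace_expr(a) for a in expr.args]
        return f"{expr.func.__name__}({', '.join(args)})"

s0 = sympy.Symbol("s0")

broken = MiniConverter()
broken.add_placeholder(s0, register=False)  # old behavior: table not updated
try:
    broken.trace_expr(s0 * 2)
except KeyError as e:
    print("old behavior fails:", e)

fixed = MiniConverter()
fixed.add_placeholder(s0, register=True)    # new behavior
print("new behavior:", fixed.trace_expr(s0 * 2))
```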

# Fixes

a. Update `self.expr_to_proxy` when generating placeholders for `sympy.Symbol` inputs. Change `SymbolBuffer.get_example` to convert the symbol to a `torch.SymInt`, so we can populate `meta["val"]` correctly and use the value in other computations.
b. Support `ShapeAsConstantBuffer` by tracing the sympy expression.
c. Move output generation inside the metadata hook, allowing us to populate `meta["val"]` for the nodes computing `ShapeAsConstantBuffer`.
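Fixes (b) and (c) combine two ideas: trace the sympy expression into ops, and attach an example value (the role of `meta["val"]`) to the result. A hedged sketch, with illustrative names rather than the converter's real API:

```python
import operator
import sympy

# Map sympy node types to the arithmetic op they lower to.
OPS = {sympy.Add: operator.add, sympy.Mul: operator.mul}

def trace_with_val(expr, env):
    """Return (trace_string, example_value) for a sympy expression.

    The trace string stands in for emitted FX nodes; the example value is
    what a metadata hook would record as meta["val"].
    """
    if isinstance(expr, sympy.Symbol):
        return expr.name, env[expr]
    if expr.is_Integer:
        return str(int(expr)), int(expr)
    op = OPS[type(expr)]
    traces, vals = zip(*(trace_with_val(a, env) for a in expr.args))
    trace = f"{op.__name__}({', '.join(traces)})"
    val = vals[0]
    for v in vals[1:]:
        val = op(val, v)
    return trace, val

s0, s1 = sympy.symbols("s0 s1")
pred = s0 * 2 + s1  # e.g. a scalar computed from dynamic shapes
trace, val = trace_with_val(pred, {s0: 3, s1: 1})
print(trace, "=", val)
```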

# Test plan

Added several new CI tests:

  1. `torch.cond` with dynamic shapes. This exposes both issues, as the predicate is a `ShapeAsConstantBuffer` and one of the subgraphs uses a symbol input, due to the closure. Also tests when the parent and subgraphs have different input shapes.
  2. Output dynamic shape scalar. This tests `ShapeAsConstantBuffer` as an output.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben

@pytorch-bot

pytorch-bot bot commented Sep 23, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163596

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 61c7d4c with merge base da05aa7:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@blaine-rister changed the title from "[Inductor-FX] Support cond with dynamic shapes" to "[Inductor-FX] Support dynamic shape expressions containing input symbols" Sep 23, 2025
@blaine-rister changed the title from "[Inductor-FX] Support dynamic shape expressions containing input symbols" to "[Inductor-FX] Support dynamic shape expressions as graph inputs and outputs" Sep 23, 2025
@blaine-rister changed the title from "[Inductor-FX] Support dynamic shape expressions as graph inputs and outputs" to "[Inductor-FX] Support sympy expressions as graph inputs and outputs" Sep 23, 2025
@blaine-rister changed the title from "[Inductor-FX] Support sympy expressions as graph inputs and outputs" to "[Inductor-FX] Support symbol and dynamic scalar graph inputs and outputs" Sep 23, 2025
@blaine-rister blaine-rister marked this pull request as ready for review September 23, 2025 18:39
@blaine-rister
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 23, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu)

Details for Dev Infra team (raised by workflow job)

@blaine-rister
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

dsashidh pushed a commit to dsashidh/pytorch that referenced this pull request Sep 26, 2025
…uts (pytorch#163596)

Pull Request resolved: pytorch#163596
Approved by: https://github.com/angelayi, https://github.com/jansel
jainapurva pushed a commit that referenced this pull request Sep 29, 2025
…uts (#163596)

Pull Request resolved: #163596
Approved by: https://github.com/angelayi, https://github.com/jansel
@github-actions github-actions bot deleted the brister/fx_cond branch October 25, 2025 02:13

Labels

ciflow/inductor, ciflow/trunk, Merged, module: inductor, release notes: fx


4 participants