
Commit b465a58

bdhirsh authored and pytorchmergebot committed
DTensor: add more foreach ops to supported sharding prop list (#132066)
Fixes #132016. Currently, if you run an op for which DTensor has no sharding propagation rule, **and** that op accepts non-trivial pytrees of input tensors as arguments, DTensor can end up in an infinite loop before it gets the chance to error out over the missing sharding prop rule. This PR doesn't fix that underlying problem, but it adds rules for the culprit ops (the missing foreach ops).

Pull Request resolved: #132066
Approved by: https://github.com/wanchaol
1 parent: c3ee07c · commit: b465a58
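For context, here is a minimal sketch of the failure mode the commit message describes (this is not the repro from #132016; the mesh setup, shapes, and placements are assumed for illustration): calling a foreach op that lacked a sharding prop rule, such as `aten._foreach_div.Scalar`, on a list of DTensors could previously hang instead of raising a clean "no sharding rule" error.

```python
# Minimal sketch, assuming a process group is already initialized (e.g. via torchrun).
# Shapes, mesh, and placements are illustrative, not taken from the PR.
import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

mesh = DeviceMesh("cuda", list(range(dist.get_world_size())))
xs = [distribute_tensor(torch.randn(8, 8), mesh, [Shard(0)]) for _ in range(4)]

# aten._foreach_div.Scalar: before this PR, DTensor had no sharding prop rule
# for this overload and could loop indefinitely; with the rule added, it is
# handled like the other pointwise foreach ops.
ys = torch._foreach_div(xs, 2.0)
```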

File tree

1 file changed (+8, -0 lines)

torch/distributed/_tensor/ops/_pointwise_ops.py

Lines changed: 8 additions & 0 deletions
```diff
@@ -545,10 +545,18 @@ def linear_pointwise_strategy(mesh: DeviceMesh, op_schema: OpSchema) -> Strategy
     aten._foreach_clamp_max_.Scalar,
     aten._foreach_clamp_min_.Scalar,
     aten._foreach_div_.List,
+    aten._foreach_div_.Scalar,
     aten._foreach_div_.ScalarList,
+    aten._foreach_div_.Tensor,
+    aten._foreach_div.List,
+    aten._foreach_div.Scalar,
+    aten._foreach_div.ScalarList,
+    aten._foreach_div.Tensor,
     aten._foreach_lerp_.Scalar,
     aten._foreach_maximum_.List,
     aten._foreach_mul.Scalar,
+    aten._foreach_mul.ScalarList,
+    aten._foreach_mul.Tensor,
     aten._foreach_mul.List,
     aten._foreach_mul_.Scalar,
     aten._foreach_mul_.ScalarList,
```
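For reference, the overload suffixes above (`.Scalar`, `.ScalarList`, `.Tensor`, `.List`) name the type of the second argument to the foreach op. A quick illustration on plain tensors (shapes and values are arbitrary, chosen only to show which overload each call hits):

```python
import torch

xs = [torch.ones(2, 2) for _ in range(3)]

torch._foreach_mul(xs, 2.0)                # aten._foreach_mul.Scalar
torch._foreach_mul(xs, [1.0, 2.0, 3.0])    # aten._foreach_mul.ScalarList (one scalar per tensor)
torch._foreach_mul(xs, torch.tensor(2.0))  # aten._foreach_mul.Tensor (single tensor operand)
torch._foreach_mul(xs, xs)                 # aten._foreach_mul.List (elementwise over two lists)
```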
