Labels: module: dtensor (distributed tensor tag), oncall: distributed (distributed oncall triage queue), triaged
Description
🐛 Describe the bug
# Snippet is from a DTensor test case; `self.device_type` comes from the test harness.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor, Shard

global_tensor = torch.arange(8).view(4, 2)
mesh_2d = init_device_mesh(
    self.device_type, (2, 2), mesh_dim_names=("DP", "TP")
)
tp_mesh = mesh_2d["TP"]
dtensor_tp = distribute_tensor(
    global_tensor, tp_mesh, placements=[Shard(0)]
)

# This is expected, since there is no DTensor op strategy registered for aten.diagonal.default.
# Error -- NotImplementedError: Operator aten.diagonal.default does not have a sharding strategy registered.
# b = torch.diagonal(dtensor_tp)

# This is an example where `_propagate_tensor_meta(op_schema)` gets called first and errors out,
# so it does not correctly surface the "no sharding strategy registered" error.
# Error -- torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.nonzero.default
a = torch.nonzero(dtensor_tp)
Related issue: #132016
Versions
N/A
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @tianyu-l @wconstab @yf225 @chauhang @d4l3k
Assignees: awgu