Conversation

@XilunWu XilunWu commented Jan 20, 2023

pytorch-bot bot commented Jan 20, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92677

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3ea6b8c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

XilunWu added a commit that referenced this pull request Jan 20, 2023
ghstack-source-id: a27c5a3
Pull Request resolved: #92677
@XilunWu XilunWu added the release notes: distributed (dtensor) release notes category label Jan 20, 2023
XilunWu added a commit that referenced this pull request Jan 20, 2023
ghstack-source-id: d4cbf11
Pull Request resolved: #92677
@wanchaol wanchaol (Collaborator) left a comment


See the inline comments; we should make sure the xfail for cat is removed so that it passes all the possible cases.

return output_sharding


def _update_schema_suggestion_for_cat(
Collaborator


Can you tell me what exactly this function is doing? It looks like it duplicates a lot of logic from the rule itself, and I am not quite sure what this function is used for.

Contributor Author


einop_rule expects the op_schema argument to have its args_schema in the form [DTensorSpec, DTensorSpec, ...], but when it's passed into cat_rule the schema is actually [List[DTensorSpec]]. That's why I convert the args_schema at the beginning of cat_rule (https://github.com/pytorch/pytorch/pull/92677/files#diff-ebc7be1151cf411ce7edf46c4ca1cabb74cd953a2bdf47e04b4cc733c31f6085R492) before feeding it into einop_rule. Consequently, we need to convert it back here if a schema_suggestion is present.
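
A minimal self-contained sketch of that round trip (OpSchema here is a stand-in dataclass, and flatten_for_einop / unflatten_suggestion are hypothetical names used for illustration, not the actual helpers in this PR):

from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class OpSchema:
    # Stand-in for the real OpSchema: only the positional arg specs matter here.
    args_schema: Tuple[Any, ...]

def flatten_for_einop(cat_schema: OpSchema) -> OpSchema:
    # cat's schema carries a single list of specs; einop_rule expects the
    # specs spread out as individual positional entries.
    tensor_list = cat_schema.args_schema[0]
    return OpSchema(args_schema=tuple(tensor_list))

def unflatten_suggestion(flat_suggestion: OpSchema) -> OpSchema:
    # Convert a flat suggestion [spec, spec, ...] back into the
    # [[spec, spec, ...]] form that matches the cat op's signature.
    return OpSchema(args_schema=(list(flat_suggestion.args_schema),))

# Round trip with placeholder "specs".
cat_schema = OpSchema(args_schema=(["spec_a", "spec_b"],))
flat = flatten_for_einop(cat_schema)       # args_schema == ("spec_a", "spec_b")
restored = unflatten_suggestion(flat)      # args_schema == (["spec_a", "spec_b"],)
assert restored == cat_schema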

@wanchaol wanchaol (Collaborator) left a comment


LGTM, thanks for working on it! Left a couple of suggestions and some questions.

    dim_word = free_dim[:dim] + alphabet[i] + free_dim[dim:]
    einop_notation_list.append(dim_word)
else:
    einop_notation_list.append(alphabet[i])
Collaborator


Is this the empty tensor annotation, where it has a single char?

Contributor Author

@XilunWu XilunWu Jan 26, 2023


Not just for empty tensors, but for tensors whose ndim is smaller than that of the other tensors. This covers cases like concatenating Tensor([], shape=torch.Size([0])) with Tensor([[1, 2], [3, 4]], shape=torch.Size([2, 2])).

In this case, an empty annotation may still work, but we want to ensure that the dim char used for cat_dim in the output tensor annotation also appears in an input annotation. Adding each input tensor's cat_dim dim char to its annotation guarantees that.
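
For reference, the shape case above is legal in eager PyTorch: a 1-D empty tensor can be concatenated with a higher-dimensional tensor. A quick illustration (not part of the PR diff; float values used so both inputs share a dtype):

import torch

# A 1-D empty tensor may be concatenated with a 2-D tensor along dim 0;
# the empty input contributes nothing to the output shape.
empty = torch.tensor([])                   # shape torch.Size([0]), ndim 1
mat = torch.tensor([[1., 2.], [3., 4.]])   # shape torch.Size([2, 2]), ndim 2
out = torch.cat([empty, mat], dim=0)       # shape torch.Size([2, 2])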

@XilunWu XilunWu added the ciflow/trunk Trigger trunk jobs on your pull request label Jan 26, 2023
XilunWu added a commit that referenced this pull request Jan 26, 2023
ghstack-source-id: 4c2f629
Pull Request resolved: #92677

XilunWu commented Jan 27, 2023

@pytorchmergebot merge -g

@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: Check the merge workflow status here.

@XilunWu XilunWu deleted the gh/XilunWu/12/head branch April 11, 2023 21:40