[DTensor] implement logsumexp #163879

Merged

suo wants to merge 4 commits into main from logsumexp

Conversation

@suo (Member) commented Sep 25, 2025

as title, mostly copypasta from internal. I am a dtensor noob, so please scrutinize my added test.

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci
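
For readers new to DTensor, this is roughly what the change enables: calling torch.logsumexp on a sharded tensor and letting sharding propagation handle the reduction. A minimal sketch, assuming a 2-rank run under torchrun; the mesh setup and shapes here are illustrative, not the PR's actual test:

    import torch
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.tensor import Shard, distribute_tensor

    # Assumes `torchrun --nproc-per-node=2`; "cpu" keeps the sketch runnable anywhere.
    mesh = init_device_mesh("cpu", (2,))
    x = torch.randn(8, 16)
    dx = distribute_tensor(x, mesh, [Shard(0)])  # shard rows across the 2 ranks

    # With this PR, aten.logsumexp.default has a registered sharding strategy,
    # so this call is supported on DTensor inputs and returns a DTensor.
    out = torch.logsumexp(dx, dim=1, keepdim=False)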

pytorch-bot bot commented Sep 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163879

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit 113cdd7 with merge base c4312b4:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the ciflow/inductor and oncall: distributed labels Sep 25, 2025
suo requested review from XilunWu and tianyu-l September 25, 2025 17:21
suo added the release notes: distributed (dtensor) label September 25, 2025
suo requested a review from ezyang September 25, 2025 17:39
@XilunWu (Contributor) left a comment

LGTM!

Comment on lines +1240 to +1246

    schema_info=RuntimeSchemaInfo(
        # static_argnum is the position where non-Tensor args begin.
        static_argnum=1,
        # static_kwargkey is the name of kwargs to hash (which determines
        # whether sharding prop can be cached).
        static_kwargkey=["keepdim"],
    ),
@XilunWu commented:

looks correct
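
For context, a registration carrying this RuntimeSchemaInfo generally looks like the sketch below. register_op_strategy and RuntimeSchemaInfo are real DTensor internals, but the wiring shown here is an assumption for illustration, not the PR's actual diff, and the strategy body is elided:

    import torch
    from torch.distributed.tensor._op_schema import OpSchema, OpStrategy, RuntimeSchemaInfo
    from torch.distributed.tensor._ops.utils import register_op_strategy

    aten = torch.ops.aten

    @register_op_strategy(
        aten.logsumexp.default,
        schema_info=RuntimeSchemaInfo(static_argnum=1, static_kwargkey=["keepdim"]),
    )
    def logsumexp_strategy(op_schema: OpSchema) -> OpStrategy:
        # dim (positional arg 1) and keepdim (kwarg) become part of the
        # sharding-prop cache key, so calls with different dims are cached separately.
        ...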

Comment on another snippet:

    # args_schema contains all but the DTensor args (e.g., dim, keepdim).
    args_schema = op_schema.args_schema
    assert len(args_schema) > 1  # input and dim are required.
@XilunWu commented:

assert len(args_schema) == 2?
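
For reference, the aten signature (from native_functions.yaml) is:

    logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> Tensor

so keepdim may also arrive positionally, in which case args_schema has three entries and a strict == 2 assert would fire. That reading is an inference from the schema, not something stated in the thread.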

@XilunWu commented Sep 25, 2025

Does DTensor provide an easy way to register reduction ops regardless of their varying arguments, i.e., by extracting the argument-processing part?
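
One shape such a shared helper could take, purely hypothetical and not an existing DTensor API, is a normalizer that peels off dim/keepdim however they were passed:

    def _extract_reduction_args(op_schema):
        # Hypothetical helper: args_schema[0] is the input's DTensorSpec; any
        # remaining positional args plus kwargs carry the reduction options.
        args = op_schema.args_schema
        dim = args[1] if len(args) > 1 else None
        keepdim = (
            args[2] if len(args) > 2 else op_schema.kwargs_schema.get("keepdim", False)
        )
        return dim, keepdim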

@suo (Member, Author) commented Sep 25, 2025

@pytorchbot merge

pytorch-bot bot added the ciflow/trunk label Sep 25, 2025
@pytorchmergebot (Collaborator) commented:
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

jainapurva pushed a commit that referenced this pull request Sep 29, 2025
as title, mostly copypasta from internal. I am a dtensor noob, so please scrutinize my added test.

Pull Request resolved: #163879
Approved by: https://github.com/XilunWu
github-actions bot deleted the logsumexp branch October 26, 2025 02:19

Labels

ciflow/inductor, ciflow/trunk, Merged, oncall: distributed, release notes: distributed (dtensor)
