
[DTensor] Add sharding prop for masked_fill_.Scalar #169668

Closed
wconstab wants to merge 2 commits into gh/wconstab/470/base from gh/wconstab/470/head

Conversation

@wconstab (Contributor) commented Dec 5, 2025
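A minimal sketch of what this change enables (an illustrative example, not code from this PR; the mesh setup, device, and variable names are assumptions): calling the in-place masked_fill_ on DTensors whose input and mask share the same sharding, with the new rule resolving the output placement for masked_fill_.Scalar.

    # Illustrative sketch (not from this PR). Assumes torch.distributed is already
    # initialized and that a 1-D mesh over all ranks is what we want.
    import torch
    import torch.distributed as dist
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.tensor import Shard, distribute_tensor

    mesh = init_device_mesh("cuda", (dist.get_world_size(),))

    x = torch.arange(40, dtype=torch.float32, device="cuda").reshape(8, 5) - 20
    mask = x < 0

    # Shard the input and the mask the same way so their local shards line up.
    dt_x = distribute_tensor(x.clone(), mesh, [Shard(0)])
    dt_mask = distribute_tensor(mask, mesh, [Shard(0)])

    # In-place scalar fill on the DTensor; sharding propagation decides the
    # placement of the result for masked_fill_.Scalar.
    dt_x.masked_fill_(dt_mask, 42.0)

    # The materialized result should match the plain-tensor reference.
    assert torch.equal(dt_x.full_tensor(), x.masked_fill(mask, 42.0))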


pytorch-bot bot commented Dec 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/169668

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 910ea58 with merge base 716edc3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@zpcore (Member) left a comment

LGTM!

Comment on lines +394 to +408
# Test with Replicate placement
input_tensor2 = (
torch.arange(40, dtype=torch.float32, device=self.device_type).reshape(8, 5)
- 20
)
mask2 = input_tensor2 < 0
fill_value2 = 42.0

dt_input2 = distribute_tensor(input_tensor2.clone(), device_mesh, [Replicate()])
dt_mask2 = distribute_tensor(mask2, device_mesh, [Replicate()])

input_tensor2.masked_fill_(mask2, fill_value2)
dt_input2.masked_fill_(dt_mask2, fill_value2)

self.assertEqual(input_tensor2, dt_input2.full_tensor())
I feel a test with Partial is more meaningful. We can distribute_tensor directly to Partial now.
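A hedged sketch of the suggested Partial variant (illustrative only, not code from this PR). It reuses the fixtures from the excerpt above (device_mesh, self.device_type, self.assertEqual) and assumes, per the comment, that distribute_tensor now accepts Partial placements:

    # Sketch of the reviewer's suggestion. The mask stays Replicate, since a
    # "partial" boolean mask has no clear meaning; only the input is Partial.
    # masked_fill_ is not linear, so a correct rule cannot fill partial shards
    # independently -- the assertion assumes the propagation handles Partial
    # inputs (e.g. by redistributing to Replicate before filling).
    from torch.distributed.tensor import Partial

    input_tensor3 = (
        torch.arange(40, dtype=torch.float32, device=self.device_type).reshape(8, 5)
        - 20
    )
    mask3 = input_tensor3 < 0
    fill_value3 = 7.0

    dt_input3 = distribute_tensor(input_tensor3.clone(), device_mesh, [Partial()])
    dt_mask3 = distribute_tensor(mask3, device_mesh, [Replicate()])

    input_tensor3.masked_fill_(mask3, fill_value3)
    dt_input3.masked_fill_(dt_mask3, fill_value3)

    self.assertEqual(input_tensor3, dt_input3.full_tensor())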

@pytorchmergebot (Collaborator) commented

Starting merge as part of PR stack under #169667

pytorchmergebot pushed a commit that referenced this pull request Dec 6, 2025
Addresses a reported issue where foreach_max and foreach_abs do not compose, even though (per the report) each op works fine on its own.

In practice this just adds support for foreach_max, which appears to have been missing all along.
Pull Request resolved: #169667
Approved by: https://github.com/zpcore
ghstack dependencies: #169668
umechand-amd pushed a commit to ROCm/pytorch that referenced this pull request Dec 8, 2025
JacobSzwejbka pushed a commit that referenced this pull request Dec 8, 2025
tiendatngcs pushed a commit to tiendatngcs/pytorch-Dec25 that referenced this pull request Dec 10, 2025
Fixes #152249

ghstack-source-id: 2adb0a5
Pull Request resolved: pytorch/pytorch#169668
github-actions bot deleted the gh/wconstab/470/head branch January 6, 2026 02:19