[pt] Add include_last_offset option to EmbeddingBag mean and max #42215
Conversation
Specifically on #27477 (comment): we would like include_last_offset=True to be supported for the other reduction types, mean and max, as well, not just sum. The current limitation causes further code fragmentation in DPER (https://www.internalfb.com/intern/diff/D22794469/). Differential Revision: [D22801881](https://our.internmc.facebook.com/intern/diff/D22801881/) **NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D22801881/)!
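For context (not part of the PR description), a minimal sketch of what include_last_offset means once it works for mean and max: with include_last_offset=True, offsets carries one extra trailing entry equal to len(input), so bag i covers input[offsets[i]:offsets[i+1]]. The values below are made up for illustration.

```python
import torch
import torch.nn.functional as F

# Illustrative values only: 4 embeddings of dimension 3, 6 lookups split into 2 bags.
weight = torch.arange(12, dtype=torch.float32).reshape(4, 3)
input = torch.tensor([0, 1, 2, 1, 3, 0])
offsets = torch.tensor([0, 3, 6])  # 2 bags plus the trailing boundary len(input)

# With include_last_offset=True the output has len(offsets) - 1 rows; this PR is
# what allows mode='mean' (and 'max') here instead of only 'sum'.
out = F.embedding_bag(input, weight, offsets, mode='mean', include_last_offset=True)
print(out.shape)  # torch.Size([2, 3]); out[0] averages rows 0,1,2 and out[1] rows 1,3,0
```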
💊 CI failures summary and remediations
As of commit daacef0 (more details on the Dr. CI page):
🕵️ 3 new failures recognized by patterns
The following CI failures do not appear to be due to upstream breakages:
dzhulgakov left a comment:
Looking good, just fix the test a bit please! And lint seems to be complaining
test/test_nn.py
Outdated
for dtype, mode, trainable, include_last_offset in itertools.product(dtypes, modes, trainable_scale, include_last_offset):
    test_per_sample_weights_new_offsets(mode, dtype, trainable, include_last_offset)
    if mode != 'sum':
        has_weight = False
this actually disables has_weight for future iterations of the loop too :(
just pass has_weight and mode == 'sum' below as an arg
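To make the point concrete, a small self-contained sketch of the leak (placeholder dtypes and a print instead of the real test helper; not code from the PR):

```python
import itertools

dtypes = ('float', 'double')   # placeholders for the real torch dtypes
modes = ('sum', 'max', 'mean')

has_weight = True
for dtype, mode in itertools.product(dtypes, modes):
    if mode != 'sum':
        has_weight = False            # leaks into every later iteration...
    print(dtype, mode, has_weight)    # ...so later 'sum' iterations also see False

# Suggested shape instead: leave has_weight alone and derive the per-iteration
# value at the call site, e.g. (hypothetical keyword argument):
#   test_per_sample_weights_new_offsets(mode, dtype, trainable, include_last_offset,
#                                       has_weight=has_weight and mode == 'sum')
```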
test/test_nn.py
Outdated
modes = ('sum', 'max', 'mean')
trainable_scale = (True, False)
include_last_offset = (True, False)
has_weight = (True, False)
you probably wanted to include has_weight in the for loop iteration too
or you could do for simplicity:
modes = (('sum', False), ('sum', True), ('max', False), ('mean', False))
for dtype, (mode, has_weight), trainable, ...
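Spelled out, the suggestion looks roughly like this (the helper is stubbed out and the dtype values are placeholders; the real code in test_nn.py may differ):

```python
import itertools

def test_per_sample_weights_new_offsets(mode, dtype, trainable, include_last_offset, has_weight):
    # Stub standing in for the real helper in test_nn.py; hypothetical signature.
    print(mode, dtype, trainable, include_last_offset, has_weight)

dtypes = ('float', 'double')   # placeholders for the real torch dtypes
# Fold has_weight into the mode tuples: only 'sum' is exercised with per-sample weights.
modes = (('sum', False), ('sum', True), ('max', False), ('mean', False))
trainable_scale = (True, False)
include_last_offset = (True, False)

for dtype, (mode, has_weight), trainable, last_offset in itertools.product(
        dtypes, modes, trainable_scale, include_last_offset):
    test_per_sample_weights_new_offsets(mode, dtype, trainable, last_offset, has_weight)
```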
test/test_nn.py
Outdated
offsets = torch.tensor([0, 0, 3, 3, 6], device=device, dtype=torch.long)

- if include_last_offset is True and mode == 'sum':
+ if include_last_offset is True:
nit: no need for is True :)
test/test_nn.py
Outdated
def _embedding_bag_reference_impl(self, input, weight, offsets=None, mode='sum',
                                  per_sample_weights=None, include_last_offset=False):
-     assert mode == 'sum'
+     assert mode == 'sum' or per_sample_weights == None
linter is yelling
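For what it's worth, the complaint here is most likely flake8's E711 on the `== None` comparison; a lint-clean sketch of that assertion, wrapped in a throwaway function just to be runnable:

```python
def check_reference_impl_args(mode, per_sample_weights):
    # Same check as in the diff above, but using the identity comparison that
    # flake8 accepts (E711 flags `== None`): per-sample weights are only
    # allowed together with mode='sum'.
    assert mode == 'sum' or per_sample_weights is None

check_reference_impl_args('mean', None)   # passes: no per-sample weights
check_reference_impl_args('sum', [0.5])   # passes: weights allowed for 'sum'
```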
Pull Request resolved: #42215. Specifically on #27477 (comment): we would like include_last_offset=True to be supported for the other reduction types, mean and max, as well; the current limitation causes further code fragmentation in DPER (https://www.internalfb.com/intern/diff/D22794469/). More details: https://www.internalfb.com/intern/diff/D22794469/?dest_fbid=309597093427021&transaction_id=631457624153457 ghstack-source-id: 108733009 Differential Revision: [D22801881](https://our.internmc.facebook.com/intern/diff/D22801881/) **NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D22801881/)!
This pull request has been merged in 1c5c289.
Stack from ghstack:
Specifically on #27477 (comment)
We would like include_last_offset=True to be supported for the other reduction types, mean and max, as well. The current limitation causes further code fragmentation in DPER (https://www.internalfb.com/intern/diff/D22794469/).
Differential Revision: D22801881
NOTE FOR REVIEWERS: This PR has internal Facebook specific changes or comments, please review them on Phabricator!