Avoid resizing in MinMaxObserver #43789
Conversation
Summary: Since min_val/max_val are single-element tensors, in some cases we may not be able to resize the buffers.
Test Plan: waiting for unit tests
Differential Revision: D23393108
fbshipit-source-id: e4fdc94589a04494aa9d7988afef3988fd6e6082
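The change replaces empty `min_val`/`max_val` buffers with scalar `inf`/`-inf` initializers, so the first observed batch can update them via element-wise min/max with no resize path. A minimal pure-Python sketch of the idea (a toy analogue, not the actual `MinMaxObserver` implementation):

```python
import math

class MinMaxTracker:
    """Toy analogue of MinMaxObserver's running min/max state.

    Starting at +inf/-inf (instead of an empty buffer) means the very
    first update goes through the same min()/max() code as every later
    one, with no special-case resizing of the state.
    """

    def __init__(self):
        self.min_val = math.inf
        self.max_val = -math.inf

    def update(self, values):
        # Running element-wise min/max over the incoming batch.
        for v in values:
            self.min_val = min(self.min_val, v)
            self.max_val = max(self.max_val, v)
        return self.min_val, self.max_val

t = MinMaxTracker()
print(t.update([3.0, -1.5, 2.0]))  # (-1.5, 3.0)
```

The identities `min(inf, x) == x` and `max(-inf, x) == x` are what make the first update behave correctly without a "buffer is still empty" branch.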
This pull request was exported from Phabricator. Differential Revision: D23393108
      quant_max=quant_max)
-     self.register_buffer('min_val', torch.tensor([]))
-     self.register_buffer('max_val', torch.tensor([]))
+     self.register_buffer('min_val', torch.tensor(float('inf')))
Can we serialize tensors with inf/-inf correctly?
Seems ok
>>> m
EmbeddingBag(1, 2, mode=mean)
>>> m.register_buffer("v", torch.tensor([float('inf')]))
>>> m.v
tensor([inf])
>>> ms = torch.jit.script(m)
>>> ms
RecursiveScriptModule(original_name=EmbeddingBag)
>>> ms.v
tensor([inf])
>>> torch.jit.save(ms, "/tmp/test.pt")
>>> ms_2 = torch.jit.load("/tmp/test.pt")
>>> ms_2.v
tensor([inf])
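The round-trip above preserves `inf` because IEEE-754 infinities are ordinary float values to a serializer. A torch-free sketch of the same property using Python's `pickle` (an illustrative analogue, not the TorchScript serializer):

```python
import math
import pickle

# Scalar observer state initialized the way the PR does: +inf / -inf.
state = {"min_val": math.inf, "max_val": -math.inf}

# Round-trip through a byte buffer, analogous to torch.jit.save/load.
restored = pickle.loads(pickle.dumps(state))

print(restored["min_val"], restored["max_val"])  # inf -inf
```

Because `inf` compares equal to itself, the restored values are usable directly in subsequent min/max updates.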
💊 CI failures summary and remediations
As of commit 181492d (more details on the Dr. CI page):
1 failure confirmed as flaky and can be ignored:
Extra GitHub checks: 1 failed
codecov.io: 1 failed
supriyar left a comment:
lgtm, if all tests pass
Codecov Report
@@ Coverage Diff @@
## master #43789 +/- ##
==========================================
- Coverage 69.34% 69.33% -0.02%
==========================================
Files 378 378
Lines 46698 46692 -6
==========================================
- Hits 32383 32374 -9
- Misses 14315 14318 +3
Continue to review full report at Codecov.
This pull request has been merged in f73ba88.