Updates div to perform true division #42907

Conversation
💊 CI failures summary and remediations

As of commit 7744f2b (more details on the Dr. CI page):

XLA failure: Job pytorch_xla_linux_bionic_py3_6_clang9_build is failing.
@ailzhang Looks like this change is breaking XLA because it changes the behavior of div.
cc: @JackCaoG This requires updating the XLA side.
@mruberry Sorry, missed this email. I will take a look tomorrow. Feel free to ping me on Slack if I don't respond to these kinds of failures within a day 😄, or open an issue on pt/xla, which I check more frequently.
@mruberry Sounds good, I will work on it tomorrow.
@mruberry Currently I am seeing … I think the intent was to make …
Hey @mruberry, I am running into an issue where, if I do the two-tensor version of the divide, …
No it's not. Let me do some local testing. I thought we had tests for this. I also need to rebase this. |
@JackCaoG I believe I fixed the issue you were seeing. This PR is also rebased; sorry there was a holdup.
- func: true_divide.Tensor(Tensor self, Tensor other) -> Tensor
  use_c10_dispatcher: full
  variants: function, method
  dispatch:
Does setting dispatch override the gradient? Check what test would catch this.
Delete dispatches.
Interesting follow-up: we probably want to add gradcheck to test_op_aliases; method_tests entries are missing for true_divide and floor_divide.
Explicitly dispatching was causing autograd to fail. Deleting the dispatches works. Unfortunately aliases can't be added to method_tests() without additional test changes because test_jit.py has a check where it looks for the node, by name, but the node doesn't appear in the JIT's graph (because it's been aliased to another op). In the near future aliases will be refactored into OpInfos and gradcheck'd automatically there.
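For illustration, a minimal sketch of the kind of autograd check at stake here (not the PR's actual test; gradcheck requires double-precision inputs):

    import torch
    from torch.autograd import gradcheck

    # Verify div's gradient when it performs true division.
    a = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
    b = (torch.rand(3, 3, dtype=torch.double) + 1).requires_grad_()  # bounded away from 0

    assert gradcheck(torch.div, (a, b))
    assert gradcheck(torch.true_divide, (a, b))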
| Performs a "true" division like Python 3. See :func:`torch.floor_divide` | ||
| for floor division. | ||
| Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`, |
TODO: check if scalar semantics are covered
The broadcasting note DOES NOT describe this, and the note is in need of an update. I added a note to our Docs Hackathon planner.
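A quick illustration of the broadcasting and scalar semantics under discussion (a sketch, not part of the PR's docs):

    import torch

    t = torch.tensor([[4., 8.], [16., 32.]])

    # Broadcasting: a (2, 2) tensor divided by a (2,) tensor.
    torch.div(t, torch.tensor([2., 4.]))   # tensor([[2., 2.], [8., 8.]])

    # Scalar divisor: broadcast against every element.
    torch.div(t, 2)                        # tensor([[ 2.,  4.], [ 8., 16.]])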
@mruberry I am still seeing the same behavior; running … gives me … I am using the most recent div_true and I also rebased my pt/xla branch. Could you give it another look?
I agree with you, but I'm glad we've verified it's not a regression. Want to file a follow-up issue on the PyTorch/XLA side? How does this PR look otherwise?
This PR looks good. I need to fix/work around the type promotion issue caused by the …
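For context, the type promotion in play: with this PR, div on integer inputs promotes to the default floating-point dtype (a sketch of the post-merge behavior as described in this thread):

    import torch

    a = torch.tensor([1, 2, 3])   # int64
    b = torch.tensor([2, 2, 2])   # int64

    out = torch.div(a, b)         # true division promotes integer inputs to float
    print(out)                    # tensor([0.5000, 1.0000, 1.5000])
    print(out.dtype)              # torch.float32 (the default floating-point dtype)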
facebook-github-bot left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
}
}

TensorIterator TensorIterator::binary_op(Tensor& out, const Tensor& a,
binary_op_float
('div', (S, S, S), (uniform_scalar(0.1),), 'scalar_broadcast_rhs', (True,)),
('div', (), (uniform_scalar(0.1),), 'scalar_broadcast_lhs', (True,)),
('div', torch.rand(S, S, S) + 1e-1, (3.14,), 'constant', (True,)),
('div', uniform_scalar(1e-1, requires_grad=True), (3.14,), 'scalar_constant', (True,)),
Add true_divide here too, and ignore it in the JIT (for now) with a comment; see the sketch below.
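A rough sketch of what that coverage could check, assuming true_divide matches div on the same inputs (hypothetical, not the PR's actual test entries):

    import torch

    # Hypothetical spot-check that true_divide matches div on the same inputs;
    # the real fix would add method_tests() entries like the 'div' ones above.
    x = torch.rand(5, 5, 5) + 1e-1
    y = torch.rand(5, 5, 5) + 1e-1
    assert torch.equal(torch.true_divide(x, y), torch.div(x, y))
    assert torch.equal(torch.true_divide(x, 3.14), torch.div(x, 3.14))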
self.assertEqual(div_result, dividend.clone().true_divide_(divisor))
self.assertEqual(dividend.clone().div_(2), dividend.clone().true_divide_(2))

def test_div_promotion_inplace(self, device, dtype):
    for op in (torch.Tensor.div_, torch.Tensor.true_divide_):
Would be nicer if you could just assert div_ is true_divide_, eh?
It would be. We could still set the functions equal in Python and C++.
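A minimal sketch of that simplification, assuming the Python-level bindings were made the same object (hypothetical; today they are distinct functions, so this assert would fail):

    import torch

    # Hypothetical: if true_divide_ were bound to the exact same function
    # object as div_, the promotion tests above could assert identity once.
    assert torch.Tensor.div_ is torch.Tensor.true_divide_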
@JackCaoG I was planning to land this on Monday, is that OK with you? I can start the land in the morning and I would expect it to land before noon.
Sounds good |
facebook-github-bot left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@JackCaoG This is merged (finally). Sorry it's later than expected due to some internal test slowdowns today.
@mruberry Thanks for the reminder; I merged the XLA side change too.
Summary:
This PR:
- updates div to perform true division
- makes torch.true_divide an alias of torch.div

This follows on work in previous PyTorch releases that first deprecated div performing "integer" or "floor" division, then prevented it by throwing a runtime error.

Pull Request resolved: #42907
Reviewed By: ngimel
Differential Revision: D23622114
Pulled By: mruberry
fbshipit-source-id: 414c7e3c1a662a6c3c731ad99cc942507d843927
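A short demonstration of the behavior described in the summary (a sketch against the post-merge API):

    import torch

    a = torch.tensor([5, 3], dtype=torch.int64)
    b = torch.tensor([2, 2], dtype=torch.int64)

    torch.div(a, b)           # tensor([2.5000, 1.5000]): true division, even on ints
    torch.true_divide(a, b)   # tensor([2.5000, 1.5000]): alias of div
    torch.floor_divide(a, b)  # tensor([2, 1]): floor division remains separate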