Conversation

@RockingJavaBean
Contributor

@RockingJavaBean RockingJavaBean commented Jul 17, 2020

Related #38349

This PR implements the NumPy-like functions isposinf and isneginf.

Test-Plan:

  • pytest test/test_torch.py -k "test_isposinf_isneginf"
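For reference, a quick sketch of the expected behavior of the two new functions, mirroring numpy.isposinf / numpy.isneginf:

```python
import torch

# isposinf/isneginf return a bool tensor marking +inf / -inf entries;
# NaN and finite values map to False in both
t = torch.tensor([float('inf'), -float('inf'), 1.2, float('nan')])

print(torch.isposinf(t))  # tensor([ True, False, False, False])
print(torch.isneginf(t))  # tensor([False,  True, False, False])
```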

@dr-ci

dr-ci bot commented Jul 17, 2020

💊 CI failures summary and remediations

As of commit 398ed32 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI (expand for details).


This comment has been revised 38 times.

@RockingJavaBean RockingJavaBean force-pushed the numpy_isposinf_isneginf branch from 03e17f0 to d6c6d4f Compare July 17, 2020 12:29
@RockingJavaBean RockingJavaBean force-pushed the numpy_isposinf_isneginf branch from d6c6d4f to 63470b4 Compare July 20, 2020 01:33
@RockingJavaBean RockingJavaBean changed the title [WIP] implement numpy-like functionality isposinf, isneginf implement numpy-like functionality isposinf, isneginf Jul 20, 2020
@RockingJavaBean
Contributor Author

@mruberry This PR implements both the isposinf and isneginf methods following NumPy's API.
Please kindly review it.

@RockingJavaBean
Contributor Author

The pytorch_xla_linux_bionic_py3_6_clang9_test job failed because XLA tensors do not have storage.

Jul 20 04:05:30 ERROR [0.006s]: test_isneginf_float_xla_float32 (__main__.TestTorchDeviceTypeXLA)
Jul 20 04:05:30 ----------------------------------------------------------------------
Jul 20 04:05:30 Traceback (most recent call last):
Jul 20 04:05:30   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 241, in instantiated_test
Jul 20 04:05:30     result = test(self, device_arg, dtype)
Jul 20 04:05:30   File "/var/lib/jenkins/workspace/xla/test/../../test/test_torch.py", line 6495, in test_isneginf_float
Jul 20 04:05:30     self.compare_with_numpy(torch.isneginf, np.isneginf, vals, device, dtype)
Jul 20 04:05:30   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 885, in compare_with_numpy
Jul 20 04:05:30     torch_result = torch_fn(t).cpu()
Jul 20 04:05:30 RuntimeError: torch_xla/csrc/tensor_impl.cpp:144 : XLA tensors do not have storage

Please kindly let me know whether I need to update the logic to support XLA, or skip the XLA tests with the onlyOnCPUAndCUDA decorator.

@ailzhang ailzhang requested a review from JackCaoG July 20, 2020 16:08
@ailzhang
Contributor

cc: @JackCaoG for the XLA failure.

@gchanan gchanan added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jul 20, 2020
@gchanan gchanan requested a review from mruberry July 20, 2020 23:44
@RockingJavaBean RockingJavaBean force-pushed the numpy_isposinf_isneginf branch from b91e402 to c1bde68 Compare July 21, 2020 04:10
@RockingJavaBean RockingJavaBean requested a review from ailzhang July 21, 2020 10:24
@RockingJavaBean
Contributor Author

Apologies for triggering review requests from code owners due to an inappropriate use of git merge.
After investigating, I found the GitHub docs that contain a suitable command for merging upstream changes into a fork:

$ git pull https://github.com/pytorch/pytorch.git master

I will use this command in future contributions.

@RockingJavaBean
Contributor Author

All CI checks passed thanks to ailzhang's kind suggestions regarding XLA, and this PR is ready for review.

Collaborator

@mruberry mruberry left a comment

Reviewed the use of TensorIterator. You're on the right path, but I suggest using a trick for the bool and integer cases that will greatly simplify your code. I also have a question about how the bfloat16 dispatch path works, since I don't think the C++ standard supports numeric_limits for bfloat16? Maybe I'm mistaken.

The tests look good. I'm sorry I wrote something confusing about the documentation in our last discussion. I added more context. Please let me know if it makes sense now.

@RockingJavaBean
Contributor Author

Reviewed the use of TensorIterator. You're on the right path, but I suggest using a trick for the bool and integer cases that will greatly simplify your code. I also have a question about how the bfloat16 dispatch path works, since I don't think the C++ standard supports numeric_limits for bfloat16? Maybe I'm mistaken.

The tests look good. I'm sorry I wrote something confusing about the documentation in our last discussion. I added more context. Please let me know if it makes sense now.

@mruberry I cannot thank you enough for the kind help and mentoring during this PR. The kernels have been simplified significantly by adopting the suggested approach.

NumPy allows users to pass non-bool arrays to the out= parameter:

In [1]: import numpy as np
In [2]: x = np.array([float('inf'), float('inf')])
In [3]: y = np.array([10, 20])
In [4]: np.isposinf(x, out=y)
Out[4]: array([1, 1])
In [5]: y
Out[5]: array([1, 1])

Thus I kept cast_common_dtype_to_outputs and didn't add a bool check for the result tensors.

I also experimented with the bfloat16 dispatch in my local environment.
It seems std::numeric_limits resolves to the specialization defined in BFloat16-inl.h, instead of the C++ standard library's primary template:

namespace std {
template <>
class numeric_limits<c10::BFloat16> {
 public:
  static constexpr bool is_signed = true;
  // ...

This may be why the bfloat16 tests pass.
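As a quick sanity check of the bfloat16 path from Python (this relies on the numeric_limits specialization above being picked up during dispatch; infinity is representable in bfloat16, so the checks should behave as for float32):

```python
import torch

# bfloat16 inputs dispatch through the same kernels as float32
t = torch.tensor([float('inf'), -float('inf'), 1.5], dtype=torch.bfloat16)

print(torch.isposinf(t))  # tensor([ True, False, False])
print(torch.isneginf(t))  # tensor([False,  True, False])
```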

@RockingJavaBean RockingJavaBean requested a review from mruberry July 24, 2020 15:43
@RockingJavaBean
Contributor Author

@mruberry Thanks for the reply. I have pushed a new commit 398ed32 that restricts the output tensors to bool.
Please kindly review.
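In other words, after this commit the behavior intentionally diverges from the NumPy snippet above: a non-bool out tensor is rejected. A small sketch of the intended contract (the exact error message text is an assumption, so only the exception type is checked):

```python
import torch

x = torch.tensor([float('inf'), -float('inf')])

# A bool out tensor is accepted
out = torch.empty(2, dtype=torch.bool)
torch.isposinf(x, out=out)
print(out)  # tensor([ True, False])

# A non-bool out tensor raises a RuntimeError
bad = torch.empty(2, dtype=torch.float32)
try:
    torch.isposinf(x, out=bad)
except RuntimeError as e:
    print("rejected non-bool out:", e)
```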

Collaborator

@mruberry mruberry left a comment

Cool! Looks good to me!

Thank you for working on this, @RockingJavaBean, and being willing to iterate on it. Please let me know if you'd like to implement another PyTorch function.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@RockingJavaBean
Contributor Author

@mruberry Sure, I would love to implement another PyTorch function.
Please suggest which function I should work on, and I will give it a try.

@mruberry
Collaborator

@mruberry Sure, I would love to implement another PyTorch function.
Please suggest which function I should work on, and I will give it a try.

Great!

Since you just did a pair of unary functions, how about a pair of binary functions?

numpy.maximum and numpy.minimum are another pair of functions whose implementations will have some similarities with the work you've done here, like using TensorIterator, but will require writing binary kernels, handling complex values, and dealing with different output dtypes.

Do those sound interesting?
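For context, the NumPy pair referred to here is elementwise and propagates NaNs (unlike numpy.fmax / numpy.fmin, which ignore them), which is part of what makes the kernel and dtype handling interesting:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
b = np.array([2.0, 2.0, 1.0])

# Elementwise maximum/minimum; a NaN in either input propagates to the result
print(np.maximum(a, b))  # [ 2. nan  3.]
print(np.minimum(a, b))  # [ 1. nan  1.]
```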

@RockingJavaBean
Contributor Author

@mruberry Definitely, they sound really interesting, and thanks for the tips regarding implementation.
I am going to start working on them.

@facebook-github-bot
Contributor

@mruberry merged this pull request in 90074bb.

@mruberry
Collaborator

This has been merged! Congratulations, @RockingJavaBean!

@RockingJavaBean RockingJavaBean deleted the numpy_isposinf_isneginf branch August 5, 2020 11:07

Labels

Merged open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

6 participants