
Conversation

@kshitij12345 (Collaborator) commented Jul 12, 2020

#38349

TODO:

  • Docs
  • Tests


dr-ci bot commented Jul 12, 2020

💊 CI failures summary and remediations

As of commit c6dd6de (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@kshitij12345 (Collaborator, Author):

@mruberry

Please review :)

@ailzhang requested a review from mruberry July 13, 2020 15:50
@ailzhang added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jul 13, 2020
Review thread on torch/functional.py:

```python
def atleast_1d(*tensors):
```
Collaborator:

Do you really need to document this here? Seems like it could go in _torch_docs.py?

Collaborator Author:

Tried moving them to _torch_docs, but on importing got

```
E   TypeError: don't know how to add docstring to type 'function'
```

I guess keeping the documentation in functional.py is in the spirit of the functions already present in that file.

Collaborator:

Hm. You should definitely be able to add them to _torch_docs since these are new docstrings!

Docstrings should really be in _torch_docs.py. Having them elsewhere is a pain. I don't think you need to define these functions in that file at all.

Collaborator Author:

I agree that having the docs here is not intuitive to find (I faced that while trying to find meshgrid), but I think this is where they will have to stay.

> I don't think you need to define these functions in that file at all.

The reason the function is in functional.py is so that it can support the torch.atleast_1d(x, y, z) signature, similar to NumPy, which does support it. The signatures in native_functions.yaml are wrapped to support this:

```yaml
- func: atleast_1d(Tensor self) -> Tensor
  use_c10_dispatcher: full
  variants: function

- func: atleast_1d.Sequence(Tensor[] tensors) -> Tensor[]
  use_c10_dispatcher: full
```
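
A quick illustration of the resulting call forms (a sketch; shapes are shown rather than exact return values, since the return container is discussed later in this thread):

```python
import torch

x = torch.tensor(0.5)  # 0-d tensor

# The Tensor and Tensor[] variants come straight from native_functions.yaml;
# the varargs form is what the functional.py wrapper enables.
print(torch.atleast_1d(x).shape)                     # torch.Size([1])
print([t.shape for t in torch.atleast_1d((x, x))])   # [torch.Size([1]), torch.Size([1])]
print([t.shape for t in torch.atleast_1d(x, x, x)])  # three torch.Size([1]) entries
```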

I tried this:

```python
>>> torch._C._add_docstr(torch.atleast_1d, "Test Doc")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: don't know how to add docstring to type 'function'
>>> torch._C._add_docstr(torch.gt, "Test Doc")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: function 'gt' already has a docstring
```

Also, I looked at Module.cpp, which has the code for _add_docstr:

```cpp
PyObject *THPModule_addDocStr(PyObject *_unused, PyObject *args)
```

The type of the new atleast_1d function falls into the following error branch:

```cpp
return PyErr_Format(PyExc_TypeError,
```

Also, checking the types of a few functions: the type of the functions added in functional.py is different from that of the ones present in _torch_docs.py.

```python
>>> torch.atleast_1d   # functional.py
<function atleast_1d at 0x7f4c3336c510>
>>> torch.meshgrid     # functional.py
<function meshgrid at 0x7f4c3329a950>
>>> torch.add          # _torch_docs
<built-in method add of type object at 0x7f4c5c45f000>
>>> torch.combinations # _torch_docs
<built-in method combinations of type object at 0x7f4c5c45f000>
```
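
A runnable restatement of that check (assuming the current layout, where atleast_1d is defined in functional.py):

```python
import types
import torch

# Wrappers defined in functional.py are plain Python functions ...
assert isinstance(torch.atleast_1d, types.FunctionType)
# ... while bindings generated from native_functions.yaml are C builtins,
# which is the kind of object torch._C._add_docstr expects.
assert isinstance(torch.add, types.BuiltinFunctionType)
```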

@mruberry (Collaborator) left a comment:

Overall this looks great, thanks for the PR, @kshitij12345!

I made a few code cleanup comments and have some suggestions for the docs. I hope we can put them in the _torch_docs.py file, too, and get rid of that JIT-specific code.

I really like your test, by the way.

@kshitij12345 (Collaborator, Author):

> get rid of that JIT-specific code

Tried removing it in 9dce44a, but with that we get:

```
Jul 14 03:24:29 ======================================================================
Jul 14 03:24:29 ERROR: test_torch_functional_atleast_1d (__main__.TestTorchFunctionOverride)
Jul 14 03:24:29 ----------------------------------------------------------------------
Jul 14 03:24:29 Traceback (most recent call last):
Jul 14 03:24:29   File "test_overrides.py", line 475, in test
Jul 14 03:24:29     self.assertEqual(func(*func_args), -1)
Jul 14 03:24:29   File "/Users/distiller/workspace/miniconda3/lib/python3.7/site-packages/torch/functional.py", line 1013, in atleast_1d
Jul 14 03:24:29     return _VF.atleast_1d(tensors)
Jul 14 03:24:29 TypeError: expected Tensor as element 0 in argument 0, but got TensorLike
```

https://dr.pytorch.org/api/view-log-full?build_id=122698430

On digging a bit, it looks like the JIT branch is needed to handle `__torch_function__` support:

```python
def handle_torch_function(
```

So I will add that branch back.
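
For context, a hedged sketch of the branch in question (approximate, not the PR's exact code; in the 1.6-era source the helpers live in torch._overrides rather than torch.overrides):

```python
import torch
from torch import _VF  # assumption: internal module exposing the generated variants
from torch.overrides import has_torch_function, handle_torch_function

def atleast_1d(*tensors):
    # The "JIT-specific" guard: scripted code cannot take the override path,
    # so the __torch_function__ check only runs in eager mode.
    if not torch.jit.is_scripting():
        if has_torch_function(tensors):
            # Divert to __torch_function__ implementations on the inputs; this
            # is the path the failing TestTorchFunctionOverride case exercises.
            return handle_torch_function(atleast_1d, tensors, *tensors)
    if len(tensors) == 1:
        tensors = tensors[0]
    return _VF.atleast_1d(tensors)
```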

@mruberry (Collaborator):

> Tried removing it in 9dce44a, but with that we get [the TestTorchFunctionOverride failure above] ... So I will add that branch back.

How can that be, though? Many functions, like the recently added lcm and gcd #40651, don't have the same issue and they support overriding. What's the difference between functions like that and these?

@kshitij12345 (Collaborator, Author):

The difference is that those functions rely on bindings generated from native_functions.yaml.

For this one, we generate:

```yaml
- func: atleast_1d(Tensor self) -> Tensor
  use_c10_dispatcher: full
  variants: function

- func: atleast_1d.Sequence(Tensor[] tensors) -> Tensor[]
  use_c10_dispatcher: full
```

So this allows:

```python
x = torch.randn(3)
torch.atleast_1d(x)        # Tensor variant
torch.atleast_1d((x, x))   # Tensor[] variant

torch.atleast_1d(x, x, x)  # Will fail. However, this is supported by NumPy.
```

So to support torch.atleast_1d(x, x, x), we wrap it in functional.py to accept *args-style arguments and pack them into a list to pass to the correct variant:

```python
def atleast_1d(*tensors):
```

However, with this, as mentioned above, the type of the wrapped function is different from the type of the ones generated from native_functions.yaml, and thus _add_docstr does not support them.

I guess that is why all the functions which support *args are wrapped in functional.py, like meshgrid, block_diag, and broadcast_tensors, and all of them have their documentation in functional.py itself.
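
Putting the pieces together, a minimal sketch of this wrapping pattern (simplified; the real wrapper also carries the __torch_function__/JIT branch shown earlier):

```python
from torch import _VF  # assumption: internal module exposing the generated variants

def atleast_1d(*tensors):
    # torch.atleast_1d(x) or torch.atleast_1d((x, y)): unwrap the single
    # argument so _VF picks the Tensor or Tensor[] overload directly.
    if len(tensors) == 1:
        tensors = tensors[0]
    # torch.atleast_1d(x, y, z): the tuple of tensors goes to the
    # Tensor[] (Sequence) overload, giving the NumPy-style varargs API.
    return _VF.atleast_1d(tensors)
```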

@mruberry (Collaborator):

> The difference is that those functions rely on bindings generated from native_functions.yaml. [...] So to support torch.atleast_1d(x, x, x), we wrap it in functional.py to accept *args-style arguments [...] and all of them have their documentation in functional.py itself.

Interesting! So we don't support passing arbitrarily long lists in native_functions.yaml. That makes a lot of sense. Thanks for taking the time and explaining that to me, @kshitij12345! Sounds like you're doing the right thing.

Let me know when you're ready for review again.

fyi @bhosmer

@kshitij12345 (Collaborator, Author):

I believe the PR is ready for review. :)


Review thread on the docstring in torch/functional.py:

```python
def atleast_1d(*tensors):
    r"""
    atleast_1d(input) -> Tensor
```
Collaborator:

atleast_1d(input) -> Tensor or list of Tensors?

Collaborator Author:

Will remove this line, as we already have (pytorch/torch/functional.py, lines 993 to 994 in 2700938):

```python
Args:
    input (Tensor or list of Tensors)
```

Review thread on the atleast_3d example in the docstring:

```python
tensor(0.5000)
>>> torch.atleast_3d(x)
tensor([[[0.5000]]])
>>> x = torch.randn(1, 1)
```
Collaborator:

How about a 2x2 example?

Collaborator Author:

Sure.
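
For instance, the requested 2x2 case might look like this (a sketch; atleast_3d follows NumPy in appending a trailing dimension to 2-D inputs):

```python
>>> x = torch.randn(2, 2)
>>> torch.atleast_3d(x).shape
torch.Size([2, 2, 1])
```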

@mruberry (Collaborator):

Still looks very, very good. Made a couple notes and asked a new question about the return type (tuple vs list). Let me know your thoughts!

* update example
* fix typo
* add return type
* remove incorrect signature of function
@kshitij12345 (Collaborator, Author):

@mruberry

I had forgotten to add the autograd test; it has been added now.
Please review.
Thanks!
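
For reference, a hedged sketch of the kind of autograd check this refers to (not the PR's actual test; gradcheck requires double precision):

```python
import torch
from torch.autograd import gradcheck

# atleast_1d/2d/3d are reshape-style ops, so gradients should flow through
# them unchanged; gradcheck verifies the backward numerically.
x = torch.randn(2, 2, dtype=torch.double, requires_grad=True)
assert gradcheck(torch.atleast_3d, (x,))
```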

@mruberry self-requested a review July 16, 2020 17:20
@mruberry (Collaborator) left a comment:

Awesome! Great work as always, @kshitij12345!

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@mruberry merged this pull request in 71fdf74.

@mruberry (Collaborator):

Merged! Congrats, @kshitij12345!

@kshitij12345 deleted the numpy/develop/atleast branch July 18, 2020 04:45