Add torch.atleast_{1d/2d/3d}
#41317
Conversation
💊 CI failures summary and remediations

As of commit c6dd6de (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and has been revised 59 times.
Please review :)
    else:
        raise ValueError("{} is not a valid value for compute_mode".format(compute_mode))

    def atleast_1d(*tensors):
Do you really need to document this here? Seems like it could go in _torch_docs.py?
Tried moving them to _torch_docs.py, but on importing got

    TypeError: don't know how to add docstring to type 'function'

I guess keeping the documentation in functional.py is in the spirit of the functions already present in that file.
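For reference, a minimal sketch (hypothetical, simplified from the pattern functional.py uses) of keeping the docstring inline with the Python-level definition, so no `_add_docstr` call is needed:

```python
# Hypothetical, simplified sketch of the functional.py style: the docstring
# is written inline with the Python-level definition, so it shows up in
# help() without any torch._C._add_docstr call.
def atleast_1d(*tensors):
    r"""
    atleast_1d(input) -> Tensor or list of Tensors

    Returns a 1-dimensional view of each zero-dimensional input.
    """
    raise NotImplementedError  # implementation elided in this sketch
```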
Hm. You should definitely be able to add them to _torch_docs since these are new docstrings!
Docstrings should really be in _torch_docs.py. Having them elsewhere is a pain. I don't think you need to define these functions in that file at all.
I agree that having the docs here is not intuitive to find (I faced that while trying to locate meshgrid's docs), but I think this is where they will have to stay.
I don't think you need to define these functions in that file at all.
The reason the function is in functional.py is so that it can support the torch.atleast_1d(x, y, z) signature, similar to NumPy. The signatures in native_functions.yaml are wrapped to support this:
- func: atleast_1d(Tensor self) -> Tensor
  use_c10_dispatcher: full
  variants: function

- func: atleast_1d.Sequence(Tensor[] tensors) -> Tensor[]
  use_c10_dispatcher: full
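The wrapper pattern described here can be sketched in plain Python, using nested lists as stand-ins for tensors. `_native_single` and `_native_sequence` below are hypothetical stand-ins for the two generated overloads, not real PyTorch APIs:

```python
# Sketch of the functional.py wrapper pattern: one Python-level function
# dispatches between the fixed-arity overloads generated from
# native_functions.yaml, which is how the NumPy-style atleast_1d(x, y, z)
# call can be supported on top of them. Lists stand in for tensors here.

def _native_single(t):
    # stand-in for: func: atleast_1d(Tensor self) -> Tensor
    # (promote a "0-d" value to "1-d" by wrapping it in a list)
    return t if isinstance(t, list) else [t]

def _native_sequence(ts):
    # stand-in for: func: atleast_1d.Sequence(Tensor[] tensors) -> Tensor[]
    return [_native_single(t) for t in ts]

def atleast_1d(*tensors):
    if len(tensors) == 1 and not isinstance(tensors[0], tuple):
        return _native_single(tensors[0])    # atleast_1d(x)
    if len(tensors) == 1:
        return _native_sequence(tensors[0])  # atleast_1d((x, y))
    return _native_sequence(tensors)         # atleast_1d(x, y, z), NumPy-style
```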
I tried this:

    >>> torch._C._add_docstr(torch.atleast_1d, "Test Doc")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: don't know how to add docstring to type 'function'
    >>> torch._C._add_docstr(torch.gt, "Test Doc")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    RuntimeError: function 'gt' already has a docstring

Also, I looked at Module.cpp, which has the code for _add_docstr.
Line 210 in c20426f:

    PyObject *THPModule_addDocStr(PyObject *_unused, PyObject *args)

The type of the new atleast_1d function falls into the following error branch (Line 260 in c20426f):

    return PyErr_Format(PyExc_TypeError,
Also, checking the types of a few functions: the functions added in functional.py have a different type from the ones present in _torch_docs.py.
>>> torch.atleast_1d # functional.py
<function atleast_1d at 0x7f4c3336c510>
>>> torch.meshgrid # functional.py
<function meshgrid at 0x7f4c3329a950>
>>> torch.add # _torch_docs
<built-in method add of type object at 0x7f4c5c45f000>
>>> torch.combinations # _torch_docs
<built-in method combinations of type object at 0x7f4c5c45f000>
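The type difference can be checked without torch at all; here is a sketch using the standard `types` module (the analogy to the torch functions above is an assumption based on the REPL output in this thread):

```python
# Sketch: the two kinds of callables seen above. Python-level functions
# (like those defined in functional.py) are types.FunctionType and already
# support plain __doc__ assignment, while C-level bindings (like those
# generated from native_functions.yaml) are built-ins, which is the case
# _add_docstr is designed for.
import types

def python_level(x):
    return x

assert isinstance(python_level, types.FunctionType)   # like torch.atleast_1d
assert isinstance(len, types.BuiltinFunctionType)     # like torch.add
assert not isinstance(len, types.FunctionType)

# For a Python-level function, the docstring can simply be set directly:
python_level.__doc__ = "Set without any helper."
```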
mruberry
left a comment
Overall this looks great, thanks for the PR, @kshitij12345!
I made a few code cleanup comments and have some suggestions for the docs. I hope we can put them in the _torch_docs.py file, too, and get rid of that JIT-specific code.
I really like your test, by the way.
Tried removing it in 9dce44a (https://dr.pytorch.org/api/view-log-full?build_id=122698430). On a bit of digging, it looks like the JIT branch is needed to handle Line 734 in c20426f, so I will add that branch back.
How can that be, though? Many functions, like the recently added lcm and gcd #40651, don't have the same issue, and they support overriding. What's the difference between functions like that and these?
The difference is that those functions rely on bindings generated directly from native_functions.yaml. For this one, we generate the two overloads (pytorch/aten/src/ATen/native/native_functions.yaml, Lines 495 to 500 in 2700938). So this allows:

    x = torch.randn(3)
    torch.atleast_1d(x)
    torch.atleast_1d((x, x))
    torch.atleast_1d(x, x, x)  # Will fail. However, this is supported by numpy.

So to support that signature, the function is wrapped in functional.py (Line 987 in 2700938). However, with this, as mentioned above, the type of the wrapped function is different from the type of the ones generated from native_functions.yaml. I guess that is why all the functions with signatures like this live in functional.py.
Interesting! So we don't support passing arbitrarily long lists in native_functions.yaml. That makes a lot of sense. Thanks for taking the time and explaining that to me, @kshitij12345! Sounds like you're doing the right thing. Let me know when you're ready for review again. fyi @bhosmer |
I believe the PR is ready for review. :)
torch/functional.py
Outdated
    def atleast_1d(*tensors):
        r"""
        atleast_1d(input) -> Tensor
atleast_1d(input) -> Tensor or list of Tensors?
Will remove this line, as we already have (Lines 993 to 994 in 2700938):

    Args:
        input (Tensor or list of Tensors)
torch/functional.py
Outdated
    tensor(0.5000)
    >>> torch.atleast_3d(x)
    tensor([[[0.5000]]])
    >>> x = torch.randn(1,1)
How about a 2x2 example?
Sure.
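For context on the 2x2 suggestion, here is a shape-level sketch of where atleast_3d inserts the new axes. This assumes torch follows NumPy's axis-placement convention, and `atleast_3d_shape` is a hypothetical helper, not a PyTorch API:

```python
# Shape-level sketch of the atleast_3d promotion rule, assuming the NumPy
# convention: () -> (1, 1, 1); (N,) -> (1, N, 1); (N, M) -> (N, M, 1);
# 3-d and higher inputs are returned unchanged.
def atleast_3d_shape(shape):
    if len(shape) == 0:
        return (1, 1, 1)
    if len(shape) == 1:
        return (1, shape[0], 1)  # new axes surround the existing one
    if len(shape) == 2:
        return shape + (1,)      # new axis appended at the end
    return shape

# A 0-d input like tensor(0.5) becomes shape (1, 1, 1), matching the
# tensor([[[0.5000]]]) example above; a 2x2 input gains only a trailing axis.
```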
Still looks very, very good. I made a couple of notes and asked a new question about the return type (tuple vs list). Let me know your thoughts!
* update example
* fix typo
* add return type
* remove incorrect signature of function
Had forgotten to add the autograd test. Have added it now.
mruberry
left a comment
Awesome! Great work as always, @kshitij12345!
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Merged! Congrats, @kshitij12345!
#38349