
Conversation

@muthuArivoli (Contributor)

Related to #38349

@muthuArivoli muthuArivoli marked this pull request as draft September 7, 2020 05:30

dr-ci bot commented Sep 7, 2020

💊 CI failures summary and remediations

As of commit a5455f8 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@zhangguanheng66 zhangguanheng66 added module: numpy Related to numpy support, and also numpy compatibility of our operators module: operators triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Sep 7, 2020
@muthuArivoli muthuArivoli marked this pull request as ready for review September 7, 2020 21:48
@muthuArivoli (Contributor, Author)

@mruberry PTAL. The XLA failure is real, but I'm not sure why it works for the other window functions but fails on kaiser_window (maybe because I'm using a kernel for kaiser).

In response to the comments from the i0 PR:

  1. I believe I have fused the operations (not completely sure if I did it correctly, though). I placed the kernel in the UnaryOpsKernel files, since there was no file in the cpu directory for TensorFactories, but I can create one if that is needed.

  2. This function currently uses the periodic arg, which acts opposite to the sym arg in scipy.signal. I did this because all of the other PyTorch window functions use this periodic arg and do not use the sym version.

res = torch_method(size, *args, periodic=periodic, device=device)
# NB: scipy always returns a float32 result
ref = torch.from_numpy(signal.get_window(name, size, fftbins=periodic))
ref = torch.from_numpy(signal.get_window((name, *args), size, fftbins=periodic))
@mruberry (Collaborator) commented Sep 8, 2020

I don't think the signature for get_window() will do what you like. Instead you can pass the torch function and the reference as a pair:

(torch.kaiser_window, scipy.signal.kaiser)

You could also pass the args to each in the tuples so you can unpack them as:

torch_fn, torch_args, ref_fn, ref_args = ...
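The tuple-unpacking pattern suggested above might be sketched like this; the window functions here are illustrative stand-ins (in the PR they would be e.g. torch.kaiser_window paired with scipy.signal.kaiser):

```python
# Hypothetical stand-ins for the torch function and its reference implementation.
def fake_torch_window(size, beta):
    return [beta] * size

def fake_ref_window(size, beta):
    return [beta] * size

# Each case bundles a function with the args it should be called with,
# so torch and reference signatures don't have to match.
cases = [
    (fake_torch_window, (12.0,), fake_ref_window, (12.0,)),
]

for torch_fn, torch_args, ref_fn, ref_args in cases:
    size = 8
    res = torch_fn(size, *torch_args)
    ref = ref_fn(size, *ref_args)
    assert res == ref
```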

test(window)

for num_test in range(50):
test('kaiser', (random.random() * 30,))
Collaborator

It'd be nice to see a bigger cross of window sizes and beta params, that can be crossed with the periodic argument, too.
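The suggested cross of parameters could be generated with itertools.product; the specific sizes and beta values below are arbitrary illustrative picks, not the PR's actual test values:

```python
import itertools

# Arbitrary illustrative picks: several window sizes, several beta values,
# crossed with both settings of the periodic flag.
sizes = (0, 1, 2, 8, 51)
betas = (0.0, 5.0, 12.0, 30.0)
periodic_flags = (True, False)

# Every (size, beta, periodic) combination; each case would then be fed
# to the torch/reference comparison.
cases = list(itertools.product(sizes, betas, periodic_flags))
```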

kaiser_window(window_length, beta, periodic=True, dtype=None, \
layout=torch.strided, device=None, requires_grad=False) -> Tensor
""" + r"""
Kaiser window function.
Collaborator

Would a more accurate description be something like, "Computes symmetric samples from the Kaiser windowing function"?

Contributor Author

I don't think I agree with this, since you can compute non symmetric samples with this function (in fact the default is to do that).

Collaborator

That makes sense, I realize now what I suggested is a problem. What I'm trying to get at is something more like, "samples symmetrically from..." as in this Wikipedia article. But all the other documentation, like NumPy and MATLAB, seems to just refer to this as computing the Kaiser window function, which doesn't seem correct but is probably good enough for now.

What about just, "Computes the Kaiser window with window length :attr:`window_length` and shape parameter :attr:`beta`."? It's nice to include how the positional arguments relate to the function in the opening sentence. I don't love the use of the term "shape factor" for beta, but it seems reasonable and consistent with other descriptions.
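For reference, the function being documented can be sketched in pure Python. This is an illustrative stand-in, not the PR's actual kernel: the helper names are hypothetical, I0 is approximated by a truncated power series, and the periodic/symmetric handling follows the convention described earlier in the thread (a periodic window is the first N samples of a symmetric window of length N+1):

```python
import math

def _i0(x, terms=50):
    # Modified Bessel function I0 via its power series: sum_k (x/2)^(2k) / (k!)^2.
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / (2.0 * k)) ** 2
        total += term
    return total

def kaiser_window(window_length, beta=12.0, periodic=True):
    # Degenerate lengths, matching the usual window-function convention.
    if window_length == 0:
        return []
    if window_length == 1:
        return [1.0]
    # A periodic window is the first N samples of a symmetric window of length N+1.
    n = window_length + 1 if periodic else window_length
    denom = _i0(beta)
    # w[i] = I0(beta * sqrt(1 - (2i/(n-1) - 1)^2)) / I0(beta)
    w = [_i0(beta * math.sqrt(max(0.0, 1.0 - (2.0 * i / (n - 1) - 1.0) ** 2))) / denom
         for i in range(n)]
    return w[:window_length]
```

A symmetric window of odd length peaks at 1.0 in the middle and is mirror-symmetric, which makes for easy sanity checks.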

@muthuArivoli (Contributor, Author)

@mruberry, Unfortunately, the doc build keeps failing if I list out the keyword args, giving me the same error as in #43669. I also tried the fix recommended in that issue, and it still failed to build. I have removed the keyword args description, but kept the star in the function description. What is the right way to proceed?

@mruberry (Collaborator)

@mruberry, Unfortunately, the doc build keeps failing if I list out the keyword args, giving me the same error as in #43669. I also tried the fix recommended in that issue, and it still failed to build. I have removed the keyword args description, but kept the star in the function description. What is the right way to proceed?

That's the best we can do for now. Sorry about the docs build. We should really root cause and fix this issue.

@muthuArivoli (Contributor, Author)

@mruberry, Unfortunately, the doc build keeps failing if I list out the keyword args, giving me the same error as in #43669. I also tried the fix recommended in that issue, and it still failed to build. I have removed the keyword args description, but kept the star in the function description. What is the right way to proceed?

That's the best we can do for now. Sorry about the docs build. We should really root cause and fix this issue.

Alright, in that case I think this can use another review, hopefully the content in the docs is better now.

- func: kaiser_window(int window_length, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
use_c10_dispatcher: full

- func: kaiser_window.periodic(int window_length, bool periodic, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
Collaborator

Can these three native_functions.yaml entries be combined if defaults for periodic and beta are specified?

- func: kaiser_window.beta(int window_length, bool periodic=True, float beta=12, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  use_c10_dispatcher: full
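For concreteness, a single combined entry might look like the sketch below. This assumes the code generator accepts defaults in this position; as the thread discovers later, the bool default actually trips a codegen error, so treat this purely as an illustration of the question being asked:

```yaml
- func: kaiser_window(int window_length, bool periodic=True, float beta=12, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  use_c10_dispatcher: full
```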

@mruberry (Collaborator) left a comment

Great work, @muthuArivoli! i0 and this are really interesting functions and it's great to have them available in PyTorch.

I have a couple of nits about the docs. I also think you could get away with a single dispatch for this function, but it's OK if you'd prefer to leave the three dispatch entries you have (that's consistent with the other window functions, after all). Let me know when you're satisfied and we'll get this merged!

@muthuArivoli (Contributor, Author)

@mruberry, I updated the docs. I tried implementing the single dispatch, but I ran into this compile error:

../torch/csrc/autograd/generated/variable_factories.h:371:72: error: ‘True’ was not declared in this scope
 inline at::Tensor kaiser_window(int64_t window_length, bool periodic = True, double beta = 12.0, const at::TensorOptions & options = {}) {
                                                                        ^~~~

I tried to make the True lowercase in the yaml file, but this still popped up. If this is due to something not related to the implementation of the kaiser window function, then I think I'll stick with the multiple dispatches, and this should be good to merge.

@mruberry (Collaborator)

@mruberry, I updated the docs. I tried implementing the single dispatch, but I ran into this compile error:

../torch/csrc/autograd/generated/variable_factories.h:371:72: error: ‘True’ was not declared in this scope
 inline at::Tensor kaiser_window(int64_t window_length, bool periodic = True, double beta = 12.0, const at::TensorOptions & options = {}) {
                                                                        ^~~~

I tried to make the True lowercase in the yaml file, but this still popped up. If this is due to something not related to the implementation of the kaiser window function, then I think I'll stick with the multiple dispatches, and this should be good to merge.

That's weird. See, for example:

- func: requires_grad_(Tensor(a!) self, bool requires_grad=True) -> Tensor(a!)

which sets a bool to a default value. We don't need to worry about it in this PR, however. Let's wait for the tests and get it merged.


codecov bot commented Sep 16, 2020

Codecov Report

Merging #44271 into master will decrease coverage by 0.00%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master   #44271      +/-   ##
==========================================
- Coverage   67.95%   67.95%   -0.01%     
==========================================
  Files         384      384              
  Lines       49768    49769       +1     
==========================================
- Hits        33822    33821       -1     
- Misses      15946    15948       +2     
Impacted Files Coverage Δ
torch/overrides.py 98.05% <ø> (ø)
torch/_torch_docs.py 100.00% <100.00%> (ø)
torch/utils/_benchmark/utils/common.py 77.68% <0.00%> (-1.66%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@facebook-github-bot (Contributor) left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@mruberry merged this pull request in b61d3d8.

@mruberry (Collaborator)

This is merged! Congrats @muthuArivoli. Let me know if you're looking for something else and, if you are, what might be fun.

@muthuArivoli (Contributor, Author)

Yes, I'm looking for something else. Any recommendations?

@mruberry (Collaborator)

It depends on what you're looking for. Let me offer a few options:

  • torch.divmod would be nice. I think contributors have avoided it because it returns two tensors instead of one.
  • We're starting to invest more in linear algebra operations, like improving our Cholesky decomposition. These are especially challenging tasks but one of them might be interesting.
  • kron would be a tricky tensor factory to implement.
  • pad would be useful. PyTorch already has a pad function but there are discrepancies between our Python and C++ APIs, and NumPy supports many more modes (making pad consistent and/or support even one new mode would be nice).
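The first bullet's point about returning two tensors can be illustrated with a plain-Python stand-in; `elementwise_divmod` is a hypothetical name, and lists stand in for tensors:

```python
def elementwise_divmod(xs, ys):
    # Python's divmod returns (quotient, remainder) per element pair;
    # a tensor version would likewise need to return two tensors.
    pairs = [divmod(x, y) for x, y in zip(xs, ys)]
    quotients = [q for q, _ in pairs]
    remainders = [r for _, r in pairs]
    return quotients, remainders
```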

Would any of those be interesting? Or would you like something simpler as a break? I can break down linear algebra-related ideas further if that direction is interesting.

@muthuArivoli (Contributor, Author)

I'd like to hear more about the linear algebra ideas.

One other question. When implementing the specialized splits (hsplit, dsplit, vsplit), I ran into a problem where the arg name for the input didn't match up with the arg name in the torch.split function. For some reason the torch.split uses the arg name tensor as its input, but I could only get the arg name input to work. Any ideas on how to fix that/is that important?

@mruberry (Collaborator)

I'd like to hear more about the linear algebra ideas.

I'll pull a few and update the linalg rollup issue soon. It's a really interesting space.

One other question. When implementing the specialized splits (hsplit, dsplit, vsplit), I ran into a problem where the arg name for the input didn't match up with the arg name in the torch.split function. For some reason the torch.split uses the arg name tensor as its input, but I could only get the arg name input to work. Any ideas on how to fix that/is that important?

That is strange. I thought we required native_functions to always start with self, but I might be mistaken. Do you have a draft PR I could look at? Also, it seems like the first arg in torch.split should be called input in the documentation, to be consistent with other functions.

xuzhao9 pushed a commit that referenced this pull request Sep 18, 2020
Summary:
Related to #38349

Pull Request resolved: #44271

Reviewed By: ngimel

Differential Revision: D23727972

Pulled By: mruberry

fbshipit-source-id: b4c931b2eb3a536231ad6d6c3cb66e52a13286ac

Labels

Merged module: numpy Related to numpy support, and also numpy compatibility of our operators open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
