
Conversation

@peterbell10 (Collaborator) commented Sep 11, 2020

Part of the `torch.fft` work (gh-42175).
This adds n-dimensional transforms: `fftn`, `ifftn`, `rfftn` and `irfftn`.
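For illustration, a minimal usage sketch of the four new functions (a forward/inverse round trip; the shapes and tolerances here are just examples, assuming a build that includes this PR):

```python
import torch

x = torch.randn(4, 5, 6, dtype=torch.complex64)
X = torch.fft.fftn(x)                    # n-dimensional transform over all dims
x_back = torch.fft.ifftn(X)              # inverse round-trips to the input
assert (x_back - x).abs().max() < 1e-4

# Real-input variants: rfftn keeps only the non-redundant half of the last
# transformed dim, so irfftn needs the original signal shape to invert it.
r = torch.randn(4, 5, 6)
R = torch.fft.rfftn(r)                   # shape (4, 5, 4) == (4, 5, 6 // 2 + 1)
r_back = torch.fft.irfftn(R, s=r.shape)
assert torch.allclose(r_back, r, atol=1e-4)
```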

This aims for correctness first, with the implementation built on top of the existing `_fft_with_size` restrictions. I plan to follow up later with a more efficient rewrite that makes `_fft_with_size` work with arbitrary numbers of dimensions.
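The reason this layering works: the DFT is separable, so an n-dimensional transform can be composed from repeated lower-dimensional transforms. A rough NumPy illustration of the idea (not the code in this PR):

```python
import numpy as np

def fftn_via_1d(x, dims):
    # The n-d DFT is separable: applying a 1-d FFT along each dim in turn
    # gives the same result as a single n-d transform over those dims.
    out = np.asarray(x, dtype=complex)
    for d in dims:
        out = np.fft.fft(out, axis=d)
    return out

x = np.random.randn(4, 5, 6)
assert np.allclose(fftn_via_1d(x, (0, 1, 2)), np.fft.fftn(x))
```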


Differential Revision: D23846032

dr-ci bot commented Sep 11, 2020

💊 CI failures summary and remediations

As of commit 9bccee0 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

peterbell10 added a commit that referenced this pull request Sep 11, 2020
ghstack-source-id: bc92ef4
Pull Request resolved: #44550
peterbell10 added a commit that referenced this pull request Sep 12, 2020
ghstack-source-id: 16ea948
Pull Request resolved: #44550
peterbell10 added a commit that referenced this pull request Sep 12, 2020
ghstack-source-id: 805300c
Pull Request resolved: #44550
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Sep 17, 2020
ghstack-source-id: 805300c
Pull Request resolved: pytorch#44550
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Sep 19, 2020
ghstack-source-id: 4fee26e
Pull Request resolved: pytorch#44550
codecov bot commented Sep 20, 2020

Codecov Report

❗ No coverage uploaded for pull request base (gh/peterbell10/10/base@c70e6fe). The diff coverage is n/a.

@@                    Coverage Diff                    @@
##             gh/peterbell10/10/base   #44550   +/-   ##
=========================================================
  Coverage                          ?   68.08%
=========================================================
  Files                             ?      393
  Lines                             ?    50765
  Branches                          ?        0
=========================================================
  Hits                              ?    34564
  Misses                            ?    16201
  Partials                          ?        0

? = missing data. Last update c70e6fe...9bccee0.

@mruberry added this to the 1.7.0 milestone Sep 22, 2020
@mruberry (Collaborator) commented:

Hey @peterbell10, this is looking great. Let me know when I should review it again and if there's anything I can help with. As mentioned on the next PR: 1.7's branch cut is 8 days away, so I'd like to get this in within the next week.

@peterbell10 (Collaborator, Author) commented:

@mruberry this is ready for another look.

@mruberry (Collaborator) left a comment

Awesome work as usual, @peterbell10!

@peterbell10 (Collaborator, Author) commented:

Fixed a few small doc errors which weren't updated properly after copy-pasting.


```python
@skipCPUIfNoMkl
@skipCUDAIfRocm
@unittest.skipIf(not TEST_NUMPY, 'NumPy not found')
```
@mruberry (Collaborator) commented:

You may need @onlyOnCPUAndCUDA here and on another test below. Would you rebase to test the XLA build?

@peterbell10 (Collaborator, Author) replied:

Noticed some of the fft tests were missing this as well. Added and rebased on viable/strict.
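
For reference, a sketch of the kind of decorator stack this ends up as. The decorator names are as they existed in `torch.testing._internal.common_device_type` around the 1.7 branch, and the test name and body here are illustrative rather than the exact code in the test file:

```python
import unittest

from torch.testing._internal.common_utils import TEST_NUMPY
from torch.testing._internal.common_device_type import (
    onlyOnCPUAndCUDA, skipCPUIfNoMkl, skipCUDAIfRocm)


class TestFFTExample:
    # onlyOnCPUAndCUDA restricts the device-generic test to CPU/CUDA, so
    # backends such as XLA skip it rather than running an unsupported path.
    @onlyOnCPUAndCUDA
    @skipCPUIfNoMkl
    @skipCUDAIfRocm
    @unittest.skipIf(not TEST_NUMPY, 'NumPy not found')
    def test_fftn(self, device):
        ...
```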

@mruberry (Collaborator) replied:

Thanks. XLA build looks good.

@facebook-github-bot (Contributor) commented:

@mruberry merged this pull request in 6a2e9eb.

facebook-github-bot deleted the gh/peterbell10/10/head branch September 27, 2020 14:15

facebook-github-bot pushed a commit that referenced this pull request Oct 6, 2020
Summary:
Between September 25 and September 27, approximately half an hour was added to the running time of `pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test`. Judging from the CircleCI data, it looks like the majority of the new time was added by the following PRs:

- #44550
- #45298

I'm not sure what to do about #44550, but it looks like #45298 increased the `N` for `TestForeach` from just 20 to include both 30 and 300. This PR would remove the 300, decreasing the test time by a couple orders of magnitude (at least when running it on my devserver), from over ten minutes to just a few seconds.

Pull Request resolved: #45785

Reviewed By: malfet

Differential Revision: D24094782

Pulled By: samestep

fbshipit-source-id: 2476cee9d513b2b07bc384de751e08d0e5d8b5e7