16 changes: 16 additions & 0 deletions aten/src/ATen/native/SpectralOps.cpp
@@ -561,13 +561,21 @@ void _cufft_clear_plan_cache(int64_t device_index) {
}

Tensor fft(const Tensor& self, const int64_t signal_ndim, const bool normalized) {
TORCH_WARN_ONCE(
"The function torch.fft is deprecated and will be removed in PyTorch 1.8. "
"Use the new torch.fft module functions, instead, by importing torch.fft "
"and calling torch.fft.fft or torch.fft.fftn.");
return _fft(self, signal_ndim, /* complex_input */ true,
/* complex_output */ true, /* inverse */ false, {},
normalized ? fft_norm_mode::by_root_n : fft_norm_mode::none,
/* onesided */ false);
}
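`TORCH_WARN_ONCE` emits the deprecation message at most once per process, so repeated calls stay quiet after the first warning. A minimal Python sketch of the same warn-once pattern (illustrative only; `warn_once` and `fft_deprecated` are hypothetical names, not PyTorch's implementation):

```python
import warnings

_warned = set()

def warn_once(msg):
    # Emit each distinct deprecation message at most once per process,
    # mirroring the intent of TORCH_WARN_ONCE in the C++ code above.
    if msg not in _warned:
        _warned.add(msg)
        warnings.warn(msg, DeprecationWarning, stacklevel=2)

def fft_deprecated(*args, **kwargs):
    warn_once("torch.fft is deprecated; use the torch.fft module instead.")
    # ... dispatch to the real implementation here ...
```

Subsequent calls to `fft_deprecated` find the message already in `_warned` and skip the warning.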

Tensor ifft(const Tensor& self, const int64_t signal_ndim, const bool normalized) {
TORCH_WARN_ONCE(
"The function torch.ifft is deprecated and will be removed in a future "
"PyTorch release. Use the new torch.fft module functions, instead, by "
"importing torch.fft and calling torch.fft.ifft or torch.fft.ifftn.");
return _fft(self, signal_ndim, /* complex_input */ true,
/* complex_output */ true, /* inverse */ true, {},
normalized ? fft_norm_mode::by_root_n : fft_norm_mode::by_n,
@@ -576,6 +584,10 @@ Tensor ifft(const Tensor& self, const int64_t signal_ndim, const bool normalized

Tensor rfft(const Tensor& self, const int64_t signal_ndim, const bool normalized,
const bool onesided) {
TORCH_WARN_ONCE(
Collaborator comment:
Deprecating torch.fft() is the priority (since its name conflicts with the module), but deprecating these is OK if the same pattern is followed. I.e.:

"The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.rfft or torch.fft.rfftn."

And also updating the torch.rfft and torch.irfft docs.

"The function torch.rfft is deprecated and will be removed in a future "
"PyTorch release. Use the new torch.fft module functions, instead, by "
"importing torch.fft and calling torch.fft.fft or torch.fft.rfft.");
return _fft(self, signal_ndim, /* complex_input */ false,
/* complex_output */ true, /* inverse */ false, {},
normalized ? fft_norm_mode::by_root_n : fft_norm_mode::none,
@@ -584,6 +596,10 @@ Tensor rfft(const Tensor& self, const int64_t signal_ndim, const bool normalized

Tensor irfft(const Tensor& self, const int64_t signal_ndim, const bool normalized,
const bool onesided, IntArrayRef signal_sizes) {
TORCH_WARN_ONCE(
"The function torch.irfft is deprecated and will be removed in a future "
"PyTorch release. Use the new torch.fft module functions, instead, by "
"importing torch.fft and calling torch.fft.ifft or torch.fft.irfft.");
return _fft(self, signal_ndim, /* complex_input */ true,
/* complex_output */ false, /* inverse */ true, signal_sizes,
normalized ? fft_norm_mode::by_root_n : fft_norm_mode::by_n,
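The `fft_norm_mode` flags in the hunks above encode the usual DFT scaling conventions: `none` leaves the forward transform unscaled, `by_root_n` divides by √N (the unitary scaling used when `normalized=True`), and `by_n` divides by N (the plain inverse). A short NumPy sketch of the same conventions (an illustration of the math, not PyTorch code):

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(8)
n = len(x)

# fft_norm_mode::none -- unscaled forward transform.
X = np.fft.fft(x)

# fft_norm_mode::by_root_n -- normalized=True divides by sqrt(N),
# which NumPy calls the "ortho" convention; the transform is unitary.
assert np.allclose(np.fft.fft(x, norm="ortho"), X / np.sqrt(n))

# fft_norm_mode::by_n -- the plain inverse divides by N, so that
# ifft(fft(x)) recovers x exactly.
assert np.allclose(np.fft.ifft(X), x)
```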
2 changes: 2 additions & 0 deletions docs/source/fft.rst
@@ -1,6 +1,8 @@
.. role:: hidden
:class: hidden-section

.. _torch-fft-module:

torch.fft
=========

26 changes: 26 additions & 0 deletions torch/_torch_docs.py
@@ -8514,6 +8514,12 @@ def merge_dicts(*dicts):

The inverse of this function is :func:`~torch.ifft`.

.. deprecated:: 1.7.0
The function :func:`torch.fft` is deprecated and will be removed in
PyTorch 1.8. Use the new :ref:`torch.fft <torch-fft-module>` module
functions, instead, by importing :ref:`torch.fft <torch-fft-module>` and
calling :func:`torch.fft.fft` or :func:`torch.fft.fftn`.

.. note::
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
repeatedly running FFT methods on tensors of same geometry with same
@@ -8617,6 +8623,12 @@ def merge_dicts(*dicts):

The inverse of this function is :func:`~torch.fft`.

.. deprecated:: 1.7.0
The function :func:`torch.ifft` is deprecated and will be removed in a
future PyTorch release. Use the new :ref:`torch.fft <torch-fft-module>`
module functions, instead, by importing :ref:`torch.fft <torch-fft-module>`
and calling :func:`torch.fft.ifft` or :func:`torch.fft.ifftn`.

.. note::
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
repeatedly running FFT methods on tensors of same geometry with same
@@ -8705,6 +8717,13 @@ def merge_dicts(*dicts):

The inverse of this function is :func:`~torch.irfft`.

.. deprecated:: 1.7.0
The function :func:`torch.rfft` is deprecated and will be removed in a
future PyTorch release. Use the new :ref:`torch.fft <torch-fft-module>`
module functions, instead, by importing :ref:`torch.fft <torch-fft-module>`
and calling :func:`torch.fft.rfft` for one-sided output, or
:func:`torch.fft.fft` for two-sided output.

.. note::
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
repeatedly running FFT methods on tensors of same geometry with same
@@ -8777,6 +8796,13 @@ def merge_dicts(*dicts):

The inverse of this function is :func:`~torch.rfft`.

.. deprecated:: 1.7.0
The function :func:`torch.irfft` is deprecated and will be removed in a
future PyTorch release. Use the new :ref:`torch.fft <torch-fft-module>`
module functions, instead, by importing :ref:`torch.fft <torch-fft-module>`
and calling :func:`torch.fft.irfft` for one-sided input, or
:func:`torch.fft.ifft` for two-sided input.

.. warning::
Generally speaking, input to this function should contain values
following conjugate symmetry. Note that even if :attr:`onesided` is
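The one-sided/two-sided distinction in the `rfft`/`irfft` deprecation notes comes from the conjugate symmetry of a real signal's spectrum: the first `n//2 + 1` frequency bins determine all the rest. A short NumPy illustration of that relationship (NumPy's `rfft`/`irfft` follow the same one-sided convention):

```python
import numpy as np

x = np.arange(8.0)
n = len(x)

# One-sided output: a length-n real signal has only n//2 + 1 unique bins.
X_half = np.fft.rfft(x)
assert X_half.shape[0] == n // 2 + 1

# The two-sided spectrum is conjugate-symmetric, so its first half
# matches the one-sided output bin for bin.
X_full = np.fft.fft(x)
assert np.allclose(X_full[:n // 2 + 1], X_half)

# irfft rebuilds the real signal from the one-sided half; the output
# length is ambiguous for odd n, which is why a target length is needed
# (the role of signal_sizes in the deprecated torch.irfft).
assert np.allclose(np.fft.irfft(X_half, n=n), x)
```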