
Commit 940ceac

lint

1 parent e4bf4a9

File tree

1 file changed, +20 -20 lines changed

torch/_torch_docs.py (20 additions, 20 deletions)
@@ -5297,12 +5297,12 @@ def parse_kwargs(desc):
 .. note::
     For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
     repeatedly running FFT methods on tensors of same geometry with same
-    same configuration.
-
+    same configuration.
+
     Changing ``torch.backends.cuda.cufft_plan_cache.max_size`` (default 1023)
-    controls the capacity of this cache. Some cuFFT plans may allocate GPU
-    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
-    the number of plans currently in cache, and
+    controls the capacity of this cache. Some cuFFT plans may allocate GPU
+    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
+    the number of plans currently in cache, and
     ``torch.backends.cuda.cufft_plan_cache.clear()`` to clear the cache.

 .. warning::
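The note being edited here documents a small runtime API. As a minimal sketch of how those knobs are used (assuming a CUDA build of PyTorch; the guard keeps it a no-op on CPU-only installs):

```python
# Sketch of the cuFFT plan-cache controls described in the note above.
# torch.backends.cuda.cufft_plan_cache, .max_size, .size, and .clear()
# are the attributes named in the docstring; the sizes chosen are arbitrary.
import torch

if torch.cuda.is_available():
    cache = torch.backends.cuda.cufft_plan_cache
    cache.max_size = 32  # cap the LRU cache capacity (default 1023)

    x = torch.randn(64, 64, device="cuda")
    torch.fft.fft(x)     # first call on this geometry creates and caches a plan

    print(cache.size)    # number of plans currently in the cache
    cache.clear()        # evict all cached plans, freeing any GPU memory they hold
```

Since some cuFFT plans allocate GPU memory, lowering `max_size` or calling `clear()` is a way to trade plan-reuse speed for memory headroom.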
@@ -5397,12 +5397,12 @@ def parse_kwargs(desc):
 .. note::
     For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
     repeatedly running FFT methods on tensors of same geometry with same
-    same configuration.
-
+    same configuration.
+
     Changing ``torch.backends.cuda.cufft_plan_cache.max_size`` (default 1023)
-    controls the capacity of this cache. Some cuFFT plans may allocate GPU
-    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
-    the number of plans currently in cache, and
+    controls the capacity of this cache. Some cuFFT plans may allocate GPU
+    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
+    the number of plans currently in cache, and
     ``torch.backends.cuda.cufft_plan_cache.clear()`` to clear the cache.

 .. warning::
@@ -5486,12 +5486,12 @@ def parse_kwargs(desc):
 .. note::
     For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
     repeatedly running FFT methods on tensors of same geometry with same
-    same configuration.
-
+    same configuration.
+
     Changing ``torch.backends.cuda.cufft_plan_cache.max_size`` (default 1023)
-    controls the capacity of this cache. Some cuFFT plans may allocate GPU
-    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
-    the number of plans currently in cache, and
+    controls the capacity of this cache. Some cuFFT plans may allocate GPU
+    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
+    the number of plans currently in cache, and
     ``torch.backends.cuda.cufft_plan_cache.clear()`` to clear the cache.

 .. warning::
@@ -5567,12 +5567,12 @@ def parse_kwargs(desc):
 .. note::
     For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
     repeatedly running FFT methods on tensors of same geometry with same
-    same configuration.
-
+    same configuration.
+
     Changing ``torch.backends.cuda.cufft_plan_cache.max_size`` (default 1023)
-    controls the capacity of this cache. Some cuFFT plans may allocate GPU
-    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
-    the number of plans currently in cache, and
+    controls the capacity of this cache. Some cuFFT plans may allocate GPU
+    memory. You may use ``torch.backends.cuda.cufft_plan_cache.size`` to query
+    the number of plans currently in cache, and
     ``torch.backends.cuda.cufft_plan_cache.clear()`` to clear the cache.

 .. warning::
