
Commit d8ffc60

Remove mention of dynamo.optimize() in docs (#95802) (#96007)
This should be self-contained to merge, but other stuff that's been bugging me is:

* Instructions on debugging IMA issues
* Dynamic shape instructions
* Explaining config options better

Will look at adding a config options doc.

Pull Request resolved: #95802
Approved by: https://github.com/svekars
1 parent 1483723 commit d8ffc60

File tree

4 files changed: +17 −30 lines changed


docs/source/dynamo/get-started.rst

Lines changed: 14 additions & 17 deletions
@@ -6,13 +6,12 @@ significant speedups the newer your GPU is.
 
 .. code:: python
 
-    from torch._dynamo import optimize
     import torch
     def fn(x, y):
         a = torch.cos(x).cuda()
         b = torch.sin(y).cuda()
         return a + b
-    new_fn = optimize("inductor")(fn)
+    new_fn = torch.compile(fn, backend="inductor")
     input_tensor = torch.randn(10000).to(device="cuda:0")
     a = new_fn(input_tensor, input_tensor)
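For reference, the updated snippet runs as a self-contained script (a sketch; the guard on CUDA availability is our addition, not part of the doc):

.. code-block:: python

    import torch

    def fn(x, y):
        a = torch.cos(x).cuda()
        b = torch.sin(y).cuda()
        return a + b

    if torch.cuda.is_available():
        # Compile with the TorchInductor backend, then call as usual.
        new_fn = torch.compile(fn, backend="inductor")
        input_tensor = torch.randn(10000).to(device="cuda:0")
        print(new_fn(input_tensor, input_tensor).shape)  # torch.Size([10000])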
@@ -54,7 +53,7 @@ with the actual generated kernel being
     tmp2 = tl.sin(tmp1)
     tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)
 
-And you can verify that fusing the two ``sins`` did actually occur
+And you can verify that fusing the two ``sin`` did actually occur
 because the two ``sin`` operations occur within a single Triton kernel
 and the temporary variables are held in registers with very fast access.
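To actually look at the generated Triton kernel this paragraph refers to, one standard PyTorch 2.x switch (not something this commit touches) is the ``TORCH_COMPILE_DEBUG`` environment variable; ``trig.py`` below is a hypothetical name for a script containing the example above:

.. code-block:: shell

    # Dumps readable TorchInductor output, including the generated Triton
    # kernels, into a torch_compile_debug/ directory next to the script.
    TORCH_COMPILE_DEBUG=1 python trig.py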

@@ -69,13 +68,12 @@ hub.
 .. code-block:: python
 
     import torch
-    import torch._dynamo as dynamo
     model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
-    opt_model = dynamo.optimize("inductor")(model)
+    opt_model = torch.compile(model, backend="inductor")
     model(torch.randn(1,3,64,64))
 
 And that is not the only available backend, you can run in a REPL
-``dynamo.list_backends()`` to see all the available backends. Try out the
+``torch._dynamo.list_backends()`` to see all the available backends. Try out the
 ``cudagraphs`` or ``nvfuser`` next as inspiration.
 
 Let’s do something a bit more interesting now, our community frequently
@@ -92,11 +90,10 @@ HuggingFace hub and optimize it:
 
     import torch
     from transformers import BertTokenizer, BertModel
-    import torch._dynamo as dynamo
     # Copy pasted from here https://huggingface.co/bert-base-uncased
     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
     model = BertModel.from_pretrained("bert-base-uncased").to(device="cuda:0")
-    model = dynamo.optimize("inductor")(model) # This is the only line of code that we changed
+    model = torch.compile(model, backend="inductor") # This is the only line of code that we changed
     text = "Replace me by any text you'd like."
     encoded_input = tokenizer(text, return_tensors='pt').to(device="cuda:0")
     output = model(**encoded_input)
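As an aside for readers of this diff, ``torch.compile`` also accepts a ``mode`` argument in the PyTorch 2.x API; a sketch of how it would combine with a model like the one above:

.. code-block:: python

    import torch

    model = torch.nn.Linear(16, 16)
    # "reduce-overhead" targets small-batch inference (via CUDA graphs);
    # "max-autotune" spends longer compiling in search of faster kernels.
    opt_model = torch.compile(model, mode="reduce-overhead")
    opt_model(torch.randn(4, 16))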
@@ -116,7 +113,7 @@ Similarly let’s try out a TIMM example
     import torch._dynamo as dynamo
     import torch
     model = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)
-    opt_model = dynamo.optimize("inductor")(model)
+    opt_model = torch.compile(model, backend="inductor")
     opt_model(torch.randn(64,3,7,7))
 
 Our goal with Dynamo and inductor is to build the highest coverage ML compiler
@@ -132,16 +129,16 @@ or ``torch._dynamo.list_backends()`` each of which with its optional dependencies
 Some of the most commonly used backends include:
 
 **Training & inference backends**:
-* ``dynamo.optimize("inductor")`` - Uses ``TorchInductor`` backend. `Read more <https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747>`__
-* ``dynamo.optimize("aot_ts_nvfuser")`` - nvFuser with AotAutograd/TorchScript. `Read more <https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593>`__
-* ``dynamo.optimize("nvprims_nvfuser")`` - nvFuser with PrimTorch. `Read more <https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593>`__
-* ``dynamo.optimize("cudagraphs")`` - cudagraphs with AotAutograd. `Read more <https://github.com/pytorch/torchdynamo/pull/757>`__
+* ``torch.compile(m, backend="inductor")`` - Uses ``TorchInductor`` backend. `Read more <https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747>`__
+* ``torch.compile(m, backend="aot_ts_nvfuser")`` - nvFuser with AotAutograd/TorchScript. `Read more <https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593>`__
+* ``torch.compile(m, backend="nvprims_nvfuser")`` - nvFuser with PrimTorch. `Read more <https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593>`__
+* ``torch.compile(m, backend="cudagraphs")`` - cudagraphs with AotAutograd. `Read more <https://github.com/pytorch/torchdynamo/pull/757>`__
 
 **Inference-only backends**:
-* ``dynamo.optimize("onnxrt")`` - Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
-* ``dynamo.optimize("tensorrt")`` - Uses ONNXRT to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
-* ``dynamo.optimize("ipex")`` - Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
-* ``dynamo.optimize("tvm")`` - Uses Apach TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
+* ``torch.compile(m, backend="onnxrt")`` - Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
+* ``torch.compile(m, backend="tensorrt")`` - Uses ONNXRT to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
+* ``torch.compile(m, backend="ipex")`` - Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
+* ``torch.compile(m, backend="tvm")`` - Uses Apache TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
 
 Why do you need another way of optimizing PyTorch code?
 -------------------------------------------------------
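As a quick way to explore these backends, they can be enumerated at a REPL and the returned names passed straight to ``torch.compile`` (a sketch; the list varies by build and installed dependencies):

.. code-block:: python

    import torch
    import torch._dynamo as dynamo

    # Names returned here are valid values for torch.compile's backend argument.
    print(dynamo.list_backends())

    def fn(x):
        return torch.sin(x) + torch.cos(x)

    opt_fn = torch.compile(fn, backend="inductor")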

docs/source/dynamo/guards-overview.rst

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Where a complete example looks like this:
 
     from typing import List
     import torch
-    import torchdynamo
+    from torch import _dynamo as torchdynamo
     def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
         print("my_compiler() called with FX graph:")
         gm.graph.print_tabular()
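The hunk ends mid-function; for context, the standard custom-backend example in the TorchDynamo docs continues roughly as follows (a sketch, not part of this diff):

.. code-block:: python

        return gm.forward  # return a Python callable that runs the graph

    @torchdynamo.optimize(my_compiler)
    def toy_example(a, b):
        return a + b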

docs/source/dynamo/index.rst

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ worlds — usability and performance.
 
 TorchDynamo makes it easy to experiment with different compiler
 backends to make PyTorch code faster with a single line decorator
-``torch._dynamo.optimize()``
+``torch._dynamo.optimize()`` which is wrapped for convenience by ``torch.compile()``
 
 .. image:: ../_static/img/dynamo/TorchDynamo.png
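In code, the wrapping relationship the new sentence describes reads as follows (our illustration, not part of the diff):

.. code-block:: python

    import torch
    import torch._dynamo

    def fn(x):
        return torch.sin(x) + torch.cos(x)

    # Equivalent ways to get an optimized callable:
    opt_fn = torch._dynamo.optimize("inductor")(fn)
    opt_fn = torch.compile(fn, backend="inductor")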

docs/source/dynamo/installation.rst

Lines changed: 1 addition & 11 deletions
@@ -27,7 +27,7 @@ TorchDynamo dependencies (for CUDA 11.7):
 
 .. code-block:: shell
 
-    pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
+    pip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
 
 CPU requirements
 ~~~~~~~~~~~~~~~~
@@ -41,16 +41,6 @@ To install, run the following command:
     pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
 
 
-Install from Local Source
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Alternatively, you can build PyTorch from `source
-<https://github.com/pytorch/pytorch#from-source>`__, which has TorchDynamo
-included.
-
-To install GPU TorchDynamo dependencies, run ``make triton`` in the
-PyTorch repo root directory.
-
 Verify Installation
 ~~~~~~~~~~~~~~~~~~~
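The diff stops at the section header; a minimal smoke test in the spirit of that section (our sketch, not necessarily the doc's exact commands) would be:

.. code-block:: python

    import torch

    def f(x):
        return torch.sin(x) + torch.cos(x)

    # Compile, then check numerical agreement with eager mode.
    compiled_f = torch.compile(f)
    x = torch.randn(16)
    assert torch.allclose(compiled_f(x), f(x))
    print("torch.compile OK on torch", torch.__version__)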
