Commit 66d23db

fix spurious aot autograd warning (#95521) (#95614)
The _make_boxed logic probably needs a cleanup, but this fixes a spurious warning that we should get in before the release. Confirmed that this used to emit a warning and no longer does:

```
import torch

lin = torch.nn.Linear(100, 10)

def f(x):
    return lin(x)

opt_f = torch.compile(f)
opt_f(torch.randn(10, 100, requires_grad=False))
```

Pull Request resolved: #95521
Approved by: https://github.com/ngimel
1 parent e2fff58 commit 66d23db

File tree

1 file changed: +3 −0 lines changed

torch/_functorch/aot_autograd.py

Lines changed: 3 additions & 0 deletions
@@ -1870,6 +1870,9 @@ def create_runtime_wrapper(
     trace_joint: bool,
     keep_input_mutations: bool,
 ):
+    if not hasattr(compiled_fn, "_boxed_call"):
+        compiled_fn = make_boxed_func(compiled_fn)
+
     def runtime_wrapper(*args):
         # Step 2: remove aliased inputs that are mutated, replace with synthetic bases
         # Only happens if our graph mutates an input that aliases another input.
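For context, the guard added above relies on AOTAutograd's boxed calling convention: a compiled function tagged with a `_boxed_call` attribute is expected to take its inputs as a single list rather than as positional arguments, and `make_boxed_func` wraps plain callables into that form. The sketch below illustrates the idea with a hypothetical `make_boxed_func_sketch` helper; it is a simplified stand-in, not the library's exact implementation.

```
# Minimal sketch of the boxed calling convention (assumption: simplified
# stand-in for torch._functorch's make_boxed_func, not the real internals).

def make_boxed_func_sketch(f):
    """Wrap a plain positional-args function into the boxed convention."""
    def boxed(args):
        # A boxed function receives one list of inputs and unpacks it.
        return f(*args)
    boxed._boxed_call = True  # marker that downstream wrappers check for
    return boxed


def plain_compiled_fn(x, y):
    return x + y


# The fix mirrors this guard: only wrap callables that are not already
# boxed, so the runtime wrapper never sees an untagged function.
if not hasattr(plain_compiled_fn, "_boxed_call"):
    compiled_fn = make_boxed_func_sketch(plain_compiled_fn)
else:
    compiled_fn = plain_compiled_fn

assert compiled_fn._boxed_call
print(compiled_fn([1, 2]))  # -> 3
```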

Comments (0)