Very uninformative stack-trace when exception thrown in custom autograd backward #41659

Description

@vadimkantorov

I had a message-less assert in my custom autograd function's backward. Note the absence of the function name or class name in the traceback below: the innermost frame reported is just the engine call in torch/autograd/__init__.py ("allow_unreachable=True)  # allow_unreachable flag"). This happens with conda Python. I think this is a standard Python problem, but maybe it is related to the Python/C++ interaction?

```
Traceback (most recent call last):
  File "benchmark.py", line 103, in <module>
    y.sum().backward()
  File "/miniconda/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/miniconda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError
```

In fact, the underlying exception is an AssertionError, yet the traceback reports only a bare RuntimeError.
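For reference, a minimal sketch of the setup being described, assuming a custom torch.autograd.Function whose backward fails on a bare assert; the class name MyFn and the tensor shapes are illustrative, not taken from the original benchmark.py:

```python
import torch

class MyFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_output):
        # Message-less assert, as in the report; per the traceback above, the
        # resulting AssertionError surfaces from the autograd engine as a bare
        # RuntimeError with no mention of MyFn or of backward.
        assert False
        return grad_output * 2

x = torch.randn(3, requires_grad=True)
y = MyFn.apply(x)
y.sum().backward()  # traceback ends in torch/autograd/__init__.py, not in MyFn.backward
```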

cc @ezyang @ssnl @albanD @zou3519 @gqchen @yf225 @glaringlee

Labels

module: autograd (Related to torch.autograd, and the autograd engine in general)
module: cpp-extensions (Related to torch.utils.cpp_extension)
module: error checking (Bugs related to incorrect/lacking error checking)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
