Commit 18a288a

Author: Taylor Robie
Update on "[Profiler] Account for caching when assigning IDs"
The python tracer caches information about module and optimizer state. That means that on subsequent calls, the presence of a Tensor in these fields does not imply that the Tensor is still live; only that it was live during the first call. (I should perhaps rename the fields to something like `stale_parameters` to convey this.) Unless we discard subsequent calls, ID assignment gets tripped up when it sees a Tensor that was already released.

Differential Revision: [D41226827](https://our.internmc.facebook.com/intern/diff/D41226827/)

[ghstack-poisoned]
1 parent e9e3ad6 commit 18a288a

File tree: 1 file changed (+1, -1)


torch/csrc/profiler/data_flow.cpp

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ void calculateUniqueTensorIDs(
       },
       [&](ExtraFields<EventType::PyCall>& py_call) {
         // torch.nn.Module
-        if (py_call.module_.has_value()&&
+        if (py_call.module_.has_value() &&
             seen_modules.insert(py_call.module_->self_).second) {
           for (auto& p : py_call.module_->parameters_) {
             raw_tensors(p.metadata_);
