Commit 4bd5c1e

Fix warning if backend registers timer (#91702) (#95363)
Currently, the logger timer is registered by default for the CPU and CUDA backends; other backends may or may not register it. The existing check warns and returns early for any non-CPU/CUDA backend, which is incorrect when such a backend has in fact registered the timer. For example, the HPU (Habana) backend registers this timer, yet the check still warns and returns. The check was originally added because the lazy backend never registers a timer, so warning there was appropriate, but it misfires for backends that do. Add a generic check: if the timer is registered, do not report the warning. Signed-off-by: Jeeja <jeejakp@habana.ai> Fixes #ISSUE_NUMBER Pull Request resolved: #91702 Approved by: https://github.com/kit1980
1 parent f3c97a4 commit 4bd5c1e

File tree

1 file changed

+3
-1
lines changed


torch/csrc/distributed/c10d/logger.cpp

Lines changed: 3 additions & 1 deletion
@@ -320,7 +320,9 @@ void Logger::set_runtime_stats_and_log() {
         "Cuda time stats are not collected for multi-device modules.");
     return;
   }
-  if (!reducer_->params_[0].is_cuda() && !reducer_->params_[0].is_cpu()) {
+  if (!reducer_->timer_ &&
+      (!reducer_->params_[0].is_cuda() && !reducer_->params_[0].is_cpu())) {
     TORCH_WARN_ONCE(
         "Time stats are currently only collected for CPU and CUDA devices. "
         "Please refer to CpuTimer or CudaTimer for how to register timer "
