torch.fx: add debug-level logging to Interpreter.run_node (#117351) #166622
mdbarnesUCSD wants to merge 3 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166622
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (4 Unrelated Failures) As of commit 2ce2726 with merge base a533526:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
can you do the python printing instead?
Thanks @ezyang, I've updated the PR to use python printing instead. Each node is now logged in a format that mirrors the original Python code, including module prefixes on the target. Let me know if any further changes would be helpful.
torch/fx/interpreter.py (outdated)

```python
f"{n.name} = "
f"{getattr(n.target, '__module__', '') + '.' if hasattr(n.target, '__module__') else ''}"
f"{getattr(n.target, '__name__', n.target)}"
f"({', '.join(map(str, n.args))})"
```
Should we print kwargs too?
torch/fx/interpreter.py (outdated)

```python
    """
    log.debug(
        "run_node %s",
        LazyString(lambda:
```
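The hunk above ends mid-expression, but the intent is visible: `LazyString` wraps a callable and only invokes it when the logging machinery stringifies the record. A torch-free re-implementation of the idea, for illustration only (the real `LazyString` ships with PyTorch and may differ in detail):

```python
import logging

class LazyString:
    # Minimal re-implementation of the LazyString idea: building the
    # message is deferred until a handler actually formats the record.
    def __init__(self, fn):
        self.fn = fn

    def __str__(self):
        return self.fn()

log = logging.getLogger("lazystring_demo")
log.propagate = False          # keep the demo self-contained
calls = []

def expensive_format():
    calls.append(1)            # track whether formatting ever ran
    return "add_1 = _operator.add(1, 2)"

log.setLevel(logging.INFO)     # DEBUG disabled: debug() is a no-op
log.debug("run_node %s", LazyString(expensive_format))
assert calls == []             # formatting never happened

log.setLevel(logging.DEBUG)    # DEBUG enabled: the handler stringifies args
log.addHandler(logging.StreamHandler())
log.debug("run_node %s", LazyString(expensive_format))
assert calls == [1]            # formatted exactly once
```

This is what "deferring string construction" means in practice: when the level check fails, `logging` never builds the record, so the lambda is never called.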
nit: This might be easier to read if the LazyString called a top-level helper function - especially if we want to eventually do more complex formatting (such as printing "call_method" as obj.method() instead of class.method(obj))
(but I don't feel strongly - so only do it if you agree)
Thanks for the review.
I added kwargs to the debug output and moved the string formatting into a top-level _format_fx_node helper for readability and easier future extensions.
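The helper itself isn't shown in this thread; a minimal torch-free sketch of what a `_format_fx_node`-style formatter might look like (the function name, the `SimpleNamespace` stand-in for `torch.fx.Node`, and the exact output format are assumptions here, not the PR's actual code):

```python
import operator
from types import SimpleNamespace

def format_fx_node(n):
    # Qualify the target with its defining module so the log line reads
    # like the original Python call site (e.g. "_operator.add(...)").
    prefix = f"{n.target.__module__}." if hasattr(n.target, "__module__") else ""
    name = getattr(n.target, "__name__", n.target)
    parts = [str(a) for a in n.args]
    # kwargs were added to the output after review feedback.
    parts += [f"{k}={v}" for k, v in n.kwargs.items()]
    return f"{n.name} = {prefix}{name}({', '.join(parts)})"

# Stand-ins for torch.fx.Node attributes, to keep the sketch torch-free.
add_node = SimpleNamespace(name="add_1", target=operator.add, args=(1, 2), kwargs={})
call_node = SimpleNamespace(name="out", target="clamp", args=("add_1",), kwargs={"min": 0})
print(format_fx_node(add_node))   # e.g. "add_1 = _operator.add(1, 2)"
print(format_fx_node(call_node))  # "out = clamp(add_1, min=0)"
```

A string target (as `call_method` nodes have) falls through both `getattr` branches and prints bare, which is why the reviewer's `obj.method()` suggestion would need extra handling for that opcode.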
Need to update the example output in the PR summary.
This LGTM but the lintrunner error needs to be addressed (I can't imagine it will cause anything else to fail).
Force-pushed 9611467 to 2ce2726.
I noticed that several of the remaining CI jobs are failing under the Linux Jammy configurations.
You should be good. If you look above (in the first "pytorch-bot" comment) it gives a status message. It currently says "You can merge normally", so once I've approved you can use torchbot to merge.
Thanks @aorenste, I'll go ahead and comment.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary
Adds a debug-level logging statement to torch.fx.Interpreter.run_node, as proposed in #117351, to make FX graph execution traceable when debugging or instrumenting model transformations.
When debug logging is enabled, each executed node emits a single structured log line formatted via `LazyString(lambda: n.format_node())`, deferring string construction until the record is actually formatted.

Example Output
With `logging.DEBUG` enabled, each executed node is printed as a debug log line. With `logging.DEBUG` disabled, no additional output is produced (unchanged default behavior).

Test Plan
Verified locally with Python 3.11 on macOS using a PyTorch build from source.
- `logging.DEBUG` enabled: each node emits a debug log via `LazyString`.
- `logging.DEBUG` disabled: no additional output.
- `Interpreter` tests pass locally: `pytest test/test_fx.py -k "Interpreter"`

Updated the example output to reflect the new `_format_fx_node` helper and inclusion of `kwargs`.

cc @ezyang @EikanWang @jgong5 @wenzhe-nrv
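For anyone reproducing the test plan, debug output is switched on with the stdlib `logging` module. Assuming `torch/fx/interpreter.py` follows the usual `logging.getLogger(__name__)` convention, the logger name would be `torch.fx.interpreter` (an assumption; check the module if nothing fires):

```python
import logging

# Assumption: torch/fx/interpreter.py creates its logger via
# logging.getLogger(__name__), i.e. under the name "torch.fx.interpreter".
logging.basicConfig(format="%(name)s: %(message)s")
fx_log = logging.getLogger("torch.fx.interpreter")
fx_log.setLevel(logging.DEBUG)

# With a traced model, running the interpreter would now emit one debug
# line per node (sketch only; not exercised here):
#   gm = torch.fx.symbolic_trace(model)
#   torch.fx.Interpreter(gm).run(example_input)
```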