[Inductor] Decouple flags for optimization and debug symbols #167385

Closed
bbeckca wants to merge 1 commit into pytorch:main from bbeckca:export-D86363355

Conversation

@bbeckca
Contributor

@bbeckca bbeckca commented Nov 7, 2025

Summary:
What: Decouple flags for optimization and debug symbols

Why: With the current flag, enabling debug symbols forces the .so binary to be compiled unoptimized.

Differential Revision: D86363355

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
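The decoupling can be illustrated with a minimal sketch. The function name and parameters below are hypothetical, not the actual Inductor API (the real logic lives in Inductor's C++ build configuration); the point is that requesting debug symbols (-g) no longer forces an unoptimized (-O0) build.

```python
# Minimal sketch of the decoupling described above. `build_cxx_flags` is a
# hypothetical helper, not the actual Inductor API.

def build_cxx_flags(debug_symbols: bool = False, optimize: bool = True) -> list[str]:
    """Return C++ compile flags with -g independent of the -O level."""
    flags = ["-O3", "-DNDEBUG"] if optimize else ["-O0"]
    if debug_symbols:
        flags.append("-g")  # debug info is orthogonal to the optimization level
    return flags

# Previously, debug symbols implied the unoptimized path (roughly "-O0 -g").
# After decoupling, an optimized .so can still carry symbols:
print(build_cxx_flags(debug_symbols=True))                  # ['-O3', '-DNDEBUG', '-g']
print(build_cxx_flags(debug_symbols=True, optimize=False))  # ['-O0', '-g']
```

This matches the before/after check in the test plan below: the optimized compile line should contain "O3", "DNDEBUG", and "g" at the same time.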

@meta-codesync

meta-codesync bot commented Nov 7, 2025

@bbeckca has exported this pull request. If you are a Meta employee, you can view the originating Diff in D86363355.

@pytorch-bot

pytorch-bot bot commented Nov 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/167385

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4eb121b with merge base 69ecb56:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@bbeckca
Contributor Author

bbeckca commented Nov 7, 2025

@pytorchbot label "topic: not user facing"

@pytorch-bot pytorch-bot bot added the topic: not user facing topic category label Nov 7, 2025
…#167385)

Summary:

What: Decouple flags for optimization and debug symbols

Why: The current flag for debug symbols only compiles the .so binary in unoptimized mode

Test Plan:
```
mkdir -p ~/testing

manifold get ads_storage_fblearner/tree/user/facebook/fblearner/predictor/812888234/215/lowering/.predictor.local/input_model ~/testing/812888234_215.input.predictor.local

rm -rf /var/tmp/torchinductor_* && buck2 run mode/opt deeplearning/aot_inductor/cpu:cli -- --local-model-path ~/testing/812888234_215.input.predictor.local --submodule mix --preset ads_pre_match_ranking 2>&1 | tee benchmark_test.txt
```

Before: P2028790409 contains "O0", "g"
After: P2028736491 contains "O3", "DNDEBUG", "g"


```
mkdir -p ~/testing

manifold get ads_storage_fblearner/tree/user/facebook/fblearner/predictor/726554047/202/lowering/.predictor.local/input_model ~/testing/726554047_202.input.predictor.local

rm -rf /var/tmp/torchinductor_* && buck2 run mode/opt deeplearning/aot_inductor/cpu:cli -- --local-model-path ~/testing/726554047_202.input.predictor.local --submodule merge --preset ads_second_stage_ranking_type_2 2>&1 | tee benchmark_test.txt
```

Before: P2028790088 contains "O3", "DNDEBUG"
After: P2028744377 contains "O3", "DNDEBUG"

Differential Revision: D86363355
Contributor

@hl475 hl475 left a comment


thanks for decoupling this!

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 8, 2025
@facebook-github-bot
Contributor

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

Silv3S pushed a commit to Silv3S/pytorch that referenced this pull request Nov 18, 2025
…#167385)

Summary:
What: Decouple flags for optimization and debug symbols

Why: The current flag for debug symbols only compiles the .so binary in unoptimized mode

Differential Revision: D86363355

Pull Request resolved: pytorch#167385
Approved by: https://github.com/hl475, https://github.com/jansel


5 participants