Expose torch.compiler.config.force_disable_caches as a public API #166699
gmagogsfm wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166699
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit a0c5b25 with merge base e69aaaf. FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge -r master
❌ 🤖 pytorchbot command failed: Try …
@pytorchbot merge -r main
@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here.
Successfully rebased 7f48705 to bed2aa4.
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: could not find the commit that was pushed before comment 3492444133. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -r main
@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here.
Successfully rebased bed2aa4 to a0c5b25.
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: commit bed2aa4 was HEAD when comment 3493698982 was posted, but the latest commit on the PR is now a0c5b25. Please re-issue the merge command to merge the latest commit. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge |
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Exposing this flag because some upstream frameworks (like vLLM) could benefit from knowing whether torch.compile caches are enabled, so they can adjust their own caching behavior accordingly (see the sketch below).
cc @mlazos
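For context, a minimal sketch of how a downstream framework might consume the flag once public, assuming it reads as a plain boolean on torch.compiler.config (which is what this PR exposes); the `_should_use_own_cache` helper and its policy are hypothetical illustrations, not vLLM's actual code:

```python
import torch

# Hypothetical helper (not vLLM's actual code): decide whether a downstream
# framework should maintain its own compilation-artifact cache.
def _should_use_own_cache() -> bool:
    # Assumption per this PR: torch.compiler.config.force_disable_caches is a
    # public boolean, and True means all torch.compile caches are disabled.
    if torch.compiler.config.force_disable_caches:
        # torch.compile caching is globally off, so skip the framework-level
        # cache too, keeping behavior consistent and avoiding stale artifacts.
        return False
    return True

if __name__ == "__main__":
    print("use framework cache:", _should_use_own_cache())
```

The point of the public API is exactly this kind of read-only probe: a framework can align its caching policy with the user's global torch.compile setting instead of guessing from private config internals.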