Add attention benchmarking numbers to pytorch operator microbenchmarks#164155
jainapurva wants to merge 19 commits into main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164155
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 2882e90 with merge base 8110ce0.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -0,0 +1,29 @@
+ # Comprehensive benchmark configuration for PyTorch transformer benchmarks
+ # Usage: python score_mod.py --config config_comprehensive.yaml
Let's just keep the basic yaml config; we can add there
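For illustration, a basic config along the lines the reviewer suggests might look like the sketch below. The actual keys accepted by score_mod.py aren't shown in this thread, so every field name here is a hypothetical assumption:

```yaml
# Hypothetical minimal benchmark config; key names are assumptions,
# not the actual score_mod.py schema
dtype: bfloat16
batch_sizes: [1, 8]
seq_lens: [512, 1024]
attn_types: [causal, noop]
```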
Force-pushed from 8fbefae to 9d8778b
jbschlosser left a comment:
Nice work! I've got a few stylistic comments but nothing major.
experiment_count += 1
# Periodic memory cleanup every 10 experiments
if experiment_count % 10 == 0:
    cleanup_memory()
What's the reason behind manually doing memory cleanup? I'd expect this to be handled automatically; is that not the case?
I'm just doing it to be more thorough.
Alright, I don't have a strong opinion against it, so I won't hold up the review, but I do generally prefer not introducing logic that isn't strictly necessary, to keep complexity lower. Up to you :)
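The periodic-cleanup pattern discussed above can be sketched as below. The body of `cleanup_memory` isn't shown in this diff, so this version is a hedged guess at what it likely does (Python garbage collection plus, when PyTorch with CUDA is available, releasing the cached allocator memory); `run_fn` and `cleanup_every` are illustrative names, not from the PR:

```python
import gc


def cleanup_memory():
    """Free Python garbage and, if available, cached GPU memory."""
    gc.collect()
    try:
        # Optional: only applies when PyTorch with CUDA is present
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass


def run_experiments(experiments, run_fn, cleanup_every=10):
    """Run experiments, cleaning up after every `cleanup_every` runs."""
    results = []
    experiment_count = 0
    for exp in experiments:
        results.append(run_fn(exp))
        experiment_count += 1
        # Periodic memory cleanup, matching the diff's modulo check
        if experiment_count % cleanup_every == 0:
            cleanup_memory()
    return results
```

As the review thread notes, this is defensive rather than strictly necessary: Python's reference counting reclaims most objects on its own, and the explicit call mainly bounds peak cached GPU memory across many experiments.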
@pytorchmergebot merge

Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This pull request introduces a standardized YAML-based configuration system for transformer attention benchmarks, making it easier to run and manage comprehensive performance tests. It adds example configs and a wrapper script that converts YAML configs into CLI arguments for the benchmark runner.
Next Steps:
- CI enablement: this change paves the way for running the attention ops in CI for regression tracking (enabling CI run: #165915).
- Developer flow (run locally): python score_mod.py --config configs/config_test.yaml
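The YAML-to-CLI conversion the wrapper script performs could be sketched as below. The real flag names accepted by score_mod.py aren't visible in this conversation, so the key-to-flag convention here (underscores become dashes, booleans become bare flags, lists become comma-joined values) is purely an assumption; in practice the input dict would come from `yaml.safe_load`:

```python
def config_to_cli_args(config):
    """Flatten a config mapping into CLI arguments for a benchmark runner.

    Hypothetical convention (not score_mod.py's actual interface):
    - keys become --key flags, with underscores turned into dashes
    - True booleans become bare flags; False booleans are dropped
    - lists become a single comma-separated value
    """
    args = []
    for key, value in config.items():
        flag = f"--{key.replace('_', '-')}"
        if isinstance(value, bool):
            if value:
                args.append(flag)
        elif isinstance(value, (list, tuple)):
            args.extend([flag, ",".join(str(v) for v in value)])
        else:
            args.extend([flag, str(value)])
    return args


# Example: a dict as yaml.safe_load might produce it
argv = config_to_cli_args(
    {"batch_size": 8, "dynamic": True, "dtypes": ["bfloat16", "float16"]}
)
# → ["--batch-size", "8", "--dynamic", "--dtypes", "bfloat16,float16"]
```

The flattened list could then be passed straight to `subprocess.run(["python", "score_mod.py", *argv])`, which is the usual shape for this kind of wrapper.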