Autocast support for scaled_dot_product_attention #91066
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91066
Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 failure as of commit ac9af55: NEW FAILURES. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D42085525
Force-pushed from 1a2f9f6 to 1f20034 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
1 similar comment
Force-pushed from 1f20034 to 9e7d077 (compare)
Force-pushed from 9e7d077 to e00d209 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
@pytorchbot merge -f
❌ 🤖 pytorchbot command failed: Try …
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 2 additional jobs have failed; the first few of them are: trunk, trunk / cuda11.6-py3.10-gcc7-sm86 / test (default, 3, 4, linux.g5.4xlarge.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
This pull request was exported from Phabricator. Differential Revision: D42085525
Force-pushed from e00d209 to 4e47c39 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
Summary: Pull Request resolved: pytorch#91066. Autocast support for scaled_dot_product_attention.
Test Plan: Sandcastle and GitHub CI/CD
Differential Revision: D42085525
fbshipit-source-id: afc6746775c747a7e0829821bc5ef4a10945445f
Force-pushed from 4e47c39 to f044877 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
Summary: Pull Request resolved: pytorch#91066. Autocast support for scaled_dot_product_attention.
Test Plan: Sandcastle and GitHub CI/CD
Differential Revision: D42085525
fbshipit-source-id: 95aa9268f83b714f13e9c09ebb207aab14fc01f0
Force-pushed from f044877 to f726889 (compare)
test/test_transformers.py (outdated)
def test_flash_fail_fp32t(self):
    device = 'cuda'
    dtype = torch.float
    size = (2, 2, 8, 5)  # last dim is the head dim; 5 is not a size the flash kernel supports
Flash doesn't accept a head dim size of 5, so I don't think this is failing because of dtype float32, which we already test with. If you decorate with autocast, does this enable flash?
Thanks, good catch. I had cut and pasted another test. (Lolz, the one right above it was showing that this size does not work. :eyeroll:)
Hah, I was wondering about weird sizes, but I don't know by heart what flash does and doesn't support.
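(For readers following along, a minimal sketch of the question in this thread; this is not code from the PR, and the flash-friendly head dim of 64 and the tensor shapes are illustrative assumptions. The idea is that inside an autocast region the fp32 inputs are downcast to fp16 before dispatch, which can make half-precision-only backends such as flash eligible where a plain fp32 call would not be.)

import torch
import torch.nn.functional as F

# Illustrative shapes (batch, heads, seq_len, head_dim); 64 is a
# flash-friendly head dim, unlike the 5 discussed above.
q = torch.randn(2, 2, 8, 64, device='cuda', dtype=torch.float32)
k = torch.randn(2, 2, 8, 64, device='cuda', dtype=torch.float32)
v = torch.randn(2, 2, 8, 64, device='cuda', dtype=torch.float32)

# Under autocast, the fp32 inputs are cast to fp16 before the kernel runs.
with torch.autocast(device_type='cuda', dtype=torch.float16):
    out = F.scaled_dot_product_attention(q, k, v)

assert out.dtype == torch.float16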
@pytorchbot merge
This PR needs to be approved by an authorized maintainer before merge.
Force-pushed from f726889 to 4fd8443 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
drisspg left a comment:
👍
Summary: Pull Request resolved: pytorch#91066. Autocast support for scaled_dot_product_attention.
Test Plan: Sandcastle and GitHub CI/CD
Reviewed By: drisspg
Differential Revision: D42085525
fbshipit-source-id: 58adc0daf573bdb628f040babb123e8100c7fbd3
Force-pushed from 4fd8443 to ac9af55 (compare)
This pull request was exported from Phabricator. Differential Revision: D42085525
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: the following mandatory check(s) failed (Rule …). Dig deeper by viewing the failures on hud. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "ignore cancelled check"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: Autocast support for scaled_dot_product_attention
Test Plan: Sandcastle and GitHub CI/CD
Differential Revision: D42085525
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
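(As a closing illustration, a minimal sketch of the user-visible effect of this change; the shapes are illustrative, and it assumes scaled_dot_product_attention follows autocast's usual lower-precision cast policy, under which all floating-point inputs are cast to the autocast dtype before dispatch.)

import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 16, 64, device='cuda')         # fp32
k = torch.randn(2, 4, 16, 64, device='cuda').half()  # fp16
v = torch.randn(2, 4, 16, 64, device='cuda')         # fp32

# Outside autocast, scaled_dot_product_attention requires q, k, and v to
# share a dtype, so this mixed fp32/fp16 call would error. Under autocast,
# every input is cast to the autocast dtype first, so the call succeeds.
with torch.autocast(device_type='cuda', dtype=torch.float16):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.dtype)  # torch.float16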