Create a quantized version ReLU function for CUDA #85502
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85502. Note: links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit ed6bd36. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Is this PR ready for review?
Yes.
Can you use the standard format for the summary, e.g. #84984? i.e. include at least a Summary and a Test Plan.
Sure, I changed it and will use the standard format for future PRs.
|
@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here.

Hey @fufeisi.

@pytorchbot revert -m "Sorry, reverting as 10.2 builds on trunk broke due to this change, see https://hud.pytorch.org/pytorch/pytorch/commit/93a53ff4d92c883d87cc7aee35af719039b481a8" -c nosignal

Running with land checks would have prevented a revert like this; for more details see https://github.com/pytorch/pytorch/wiki/Bot-commands.

@pytorchbot successfully started a revert job. Check the current status here.
|
@fufeisi your PR has been successfully reverted. |
This reverts commit 93a53ff. Reverted #85502 on behalf of https://github.com/janeyx99 due to Sorry, reverting as 10.2 builds on trunk broke due to this change, see https://hud.pytorch.org/pytorch/pytorch/commit/93a53ff4d92c883d87cc7aee35af719039b481a8
Summary: This PR and #85670 allow the relu function to run on a quantized tensor on CUDA, i.e. torch.relu(qa) for a quantized tensor qa on CUDA. Test Plan: python test/test_quantization.py. Previous PR that has been reverted: #85502. Pull Request resolved: #85669. Approved by: https://github.com/dzdang
…d_cuda_. (#85670) Summary: This PR and #85669 allow the relu function to run on a quantized tensor on CUDA, i.e. torch.relu(qa) for a quantized tensor qa on CUDA. Test Plan: python test/test_quantization.py. Previous PR that has been reverted: #85502. Pull Request resolved: #85670. Approved by: https://github.com/dzdang, https://github.com/z-a-f
Summary: This PR allows the relu function to run on a quantized tensor on CUDA, i.e. torch.relu(qa) for a quantized tensor qa on CUDA. Test Plan: python test/test_quantization.py. Pull Request resolved: #85502. Approved by: https://github.com/dzdang
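The change summarized above makes torch.relu work on quantized CUDA tensors. A minimal usage sketch (assuming a CUDA-enabled PyTorch build that includes this change; on a CPU-only machine the tensor simply stays on CPU, where quantized relu was already supported):

```python
import torch

# Quantize a float tensor with per-tensor affine quantization.
x = torch.tensor([-1.0, 0.5, 2.0])
qa = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

# With this PR, the quantized tensor can live on CUDA and still support relu.
if torch.cuda.is_available():
    qa = qa.cuda()

out = torch.relu(qa)      # negative values are clamped to zero
print(out.dequantize())   # back to float for inspection
```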
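For an affine-quantized tensor, dequantize(q) = scale * (q - zero_point) is non-negative exactly when q >= zero_point, so relu reduces to clamping the integer representation at the zero point. The sketch below illustrates that semantics; it is a reference model, not the CUDA kernel from this PR, and it relies on torch._make_per_tensor_quantized_tensor, a private PyTorch helper:

```python
import torch

def quantized_relu_reference(qa: torch.Tensor) -> torch.Tensor:
    """Reference semantics for relu on a per-tensor affine quantized tensor."""
    # relu(dequantize(q)) corresponds to q -> max(q, zero_point) in the
    # integer domain, since dequantize(q) = scale * (q - zero_point).
    zp = qa.q_zero_point()
    clamped = torch.clamp(qa.int_repr(), min=zp)
    # Rewrap the clamped integers with the original quantization parameters.
    # Note: _make_per_tensor_quantized_tensor is a private helper.
    return torch._make_per_tensor_quantized_tensor(clamped, qa.q_scale(), zp)
```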