[CUDA] Drop CUDA 10 support #89582
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/89582
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Failure
As of commit 39a85f4: FLAKY - The following jobs failed but were likely due to flakiness present on master.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
There's some legacy code wrapped with the version check at pytorch/caffe2/utils/math_gpu.cu line 725 (commit 41c3b41).
aten/src/ATen/Dispatch.h
Outdated
})
#endif

// Workaround for C10_UNUSED because CUDA 10.2 and below fails to handle unused
remove this comment also
aten/src/ATen/Dispatch.h
Outdated
#if defined(__CUDACC__) && CUDA_VERSION < 11000
#define C10_UNUSED_DISPATCH_CUDA_WORKAROUND
#else
#define C10_UNUSED_DISPATCH_CUDA_WORKAROUND C10_UNUSED
probably all C10_UNUSED_DISPATCH... can be replaced by C10_UNUSED now?
Nit: can we just replace C10_UNUSED_DISPATCH_CUDA_WORKAROUND with C10_UNUSED everywhere?
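For context, a minimal standalone sketch of the cleanup the reviewers are suggesting. The macro names mirror the diff above, but the attribute expansion and the example use site are assumptions for illustration, not the exact definitions from c10:

// Sketch only: C10_UNUSED is assumed to expand to the standard
// unused-entity attribute, roughly as the real c10 macros do.
#define C10_UNUSED __attribute__((__unused__))

// Old shape: an indirection macro that expanded to nothing on CUDA < 11
// (where nvcc mishandled the attribute) and to C10_UNUSED otherwise.
// With CUDA 10 support dropped, the #if/#else collapses to a plain alias...
#define C10_UNUSED_DISPATCH_CUDA_WORKAROUND C10_UNUSED

// ...at which point the suggestion is to delete the alias entirely and use
// C10_UNUSED directly at every former call site, e.g.:
C10_UNUSED static int example_dispatch_case = 0;  // hypothetical use site

int main() { return 0; }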
template <typename T>
C10_HOST_DEVICE constexpr thrust::complex<T>
cuda101bug_cast_c10_complex_to_thrust_complex(const c10::complex<T>& x) {
#if defined(CUDA_VERSION) && (CUDA_VERSION < 10020)
similarly here, cuda101bug... uses should just be replaced with static_cast, as the comment suggests
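A hedged sketch of that replacement, assuming (as the helper's non-buggy branch implies) that c10::complex converts to thrust::complex via static_cast once CUDA < 10.2 is out of scope. It needs nvcc plus the PyTorch and Thrust headers; to_thrust is a hypothetical name for illustration, since the real change would edit call sites in place:

#include <thrust/complex.h>
#include <c10/util/complex.h>

// Hypothetical wrapper showing the post-cleanup form of each call site.
template <typename T>
C10_HOST_DEVICE thrust::complex<T> to_thrust(const c10::complex<T>& x) {
  // Before: cuda101bug_cast_c10_complex_to_thrust_complex(x), which worked
  // around a CUDA < 10.2 compiler bug by rebuilding the complex from its
  // real/imag parts. With CUDA 10 dropped, a plain cast is assumed to
  // suffice, as the reviewer suggests.
  return static_cast<thrust::complex<T>>(x);
}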
Hmm, what macro should be used instead of …?

CC @jithunnair-amd, who might know more about the last issue.

I've left the version guards in the …
@pytorchmergebot merge -f "ROCM failure appears unrelated"

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Follow-up of #89582 to drop flags like `CUDA11OrLater` in tests. Note that in some places `TEST_WITH_ROCM` appears to be _implicitly_ guarded against via the `CUDA11OrLater` version check, based on my best guess of how `torch.version.cuda` would behave in ROCm builds, so I've added `not TEST_WITH_ROCM` in cases where ROCm wasn't previously explicitly allowed. CC @ptrblck @malfet @ngimel

Pull Request resolved: #92605
Approved by: https://github.com/ngimel
CC @ptrblck @ngimel @malfet