Conversation

@xuhdev (Collaborator) commented Jun 20, 2019

Previously, randperm supported the Half type only on CUDA. This commit adds Half support to the CPU version. A precision check for floating-point types is also added, to ensure that every integer in the permutation can be represented exactly.
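To make the precision concern concrete, here is a minimal sketch of what such a check guards against, assuming a PyTorch build where torch.randperm accepts a floating dtype (as this PR and its follow-ups propose). float16 has an 11-bit significand, so consecutive integers are exactly representable only up to 2**11 = 2048; past that point, distinct indices collapse onto the same value.

```python
import torch

# float16 represents consecutive integers exactly only up to 2**11 = 2048;
# 2049 has no exact float16 encoding and rounds back down to 2048.
assert float(torch.tensor(2049, dtype=torch.float16)) == 2048.0

# Within the exact range, every index of the permutation survives the
# round-trip through half precision.
perm = torch.randperm(2048, dtype=torch.float16)
idx = perm.to(torch.int64)
assert torch.equal(idx.sort().values, torch.arange(2048))
```

With a precision check in place, a request like torch.randperm(4096, dtype=torch.float16) would be rejected up front rather than silently producing duplicate entries.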

@pytorchbot added the module: cuda (Related to torch.cuda, and CUDA support in general) and module: operators labels Jun 20, 2019
@xuhdev (Collaborator, Author) commented Jun 20, 2019

Inspired by my closed PR #22033.

@xuhdev force-pushed the randperm-type branch 2 times, most recently from ff7d180 to 0120438 on June 20, 2019 22:29
@ngimel (Collaborator) commented Jun 20, 2019

Build errors are to be expected; half support in thrust functions is spotty (having c10::Half define math operations helps, but not always). Frankly, I don't see big problems with the way half was supported before.

@xuhdev (Collaborator, Author) commented Jun 20, 2019

@ngimel Thanks for the clarification. You are probably referring to the CUDA version, but this PR also adds Half support to the CPU version, and checks for floating-point precision are added as well. I can remove the CUDA code changes as you suggested.

@xuhdev (Collaborator, Author) commented Jun 20, 2019

@ngimel Updated (some cleanup stays).

@xuhdev changed the title from "Support Half type in randperm." to "Support Half type for randperm." Jun 20, 2019
@xuhdev force-pushed the randperm-type branch 2 times, most recently from 984fbfb to cdf39cf on June 21, 2019 03:35
@xuhdev (Collaborator, Author) commented Jun 22, 2019

Closing this now in favor of the split PRs #22102 and #22103.

@xuhdev closed this Jun 22, 2019
@xuhdev deleted the randperm-type branch June 22, 2019 06:25