Conversation

@xuhdev (Collaborator) commented Jul 24, 2019

This resolves two issues in one shot:

  • sub shouldn't be available for the bool type.
  • When sub is applied to an unsupported type, the current error message reads
    "add_cpu/add_cuda is not implemented for [type]". It should say
    "sub_cpu/sub_cuda" instead (see the sketch below).

@pytorchbot pytorchbot added the module: cpu (CPU specific problem, e.g., perf, algorithm), module: cuda (Related to torch.cuda, and CUDA support in general), and module: operators labels Jul 24, 2019
@xuhdev xuhdev requested review from colesbury, gchanan and izdeby July 24, 2019 20:58
@gchanan (Contributor) left a comment

We should have a real error message that indicates we don't plan to support bool; as it currently stands, it can seem like a mistake.

@xuhdev xuhdev force-pushed the op/sub-dispatch branch 2 times, most recently from 3162916 to 7daa610 on July 25, 2019 02:11
@xuhdev xuhdev requested a review from gchanan July 25, 2019 02:11
@izdeby (Contributor) left a comment

Can you please add tests?

@colesbury (Member) left a comment

This seems unnecessarily complicated to me:

The check for subtraction on bool dtypes should just be in BinaryOps.cpp (not in the kernels). A

TORCH_CHECK(iter.dtype() != kBool, "sub is not supported on bool tensors");

should suffice.

Please do not template these kernels on "bool add".
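
A hedged sketch of where such a check could live (the wrapper name, signature, and stub call below follow the usual ATen binary-op pattern but are illustrative, not a verbatim copy of BinaryOps.cpp):

    // Illustrative only; assumed to live in BinaryOps.cpp inside the at::native
    // namespace, so the unqualified names resolve. The point is that the dtype
    // check runs once at the dispatch level, before any kernel is selected.
    Tensor& sub_out(Tensor& result, const Tensor& self, const Tensor& other, Scalar alpha) {
      auto iter = TensorIterator::binary_op(result, self, other);
      TORCH_CHECK(iter.dtype() != ScalarType::Bool,
                  "sub is not supported on bool tensors");
      sub_stub(iter.device_type(), iter, alpha);
      return result;
    }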

@xuhdev xuhdev force-pushed the op/sub-dispatch branch from 7daa610 to ed33508 on July 26, 2019 18:05
@xuhdev xuhdev changed the title from "Explicitly run dispatch in sub_kernel" to "Add TORCH_CHECK to disable sub for bool tensors" Jul 26, 2019
@xuhdev xuhdev force-pushed the op/sub-dispatch branch 2 times, most recently from 7013b6a to 32f9ae4 on July 26, 2019 18:09
@xuhdev xuhdev requested review from colesbury and izdeby July 26, 2019 18:09
@xuhdev xuhdev force-pushed the op/sub-dispatch branch from 32f9ae4 to 417da4f on July 26, 2019 18:36
@xuhdev xuhdev requested a review from izdeby July 26, 2019 18:37
@xuhdev xuhdev force-pushed the op/sub-dispatch branch 4 times, most recently from d0570da to 6b5e9c4 on July 26, 2019 22:39
@xuhdev (Collaborator, Author) commented Jul 27, 2019

@pytorchbot retest this please

@facebook-github-bot (Contributor) left a comment

@izdeby has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@colesbury colesbury added the merge-this-please label (Was marked for merge with "@pytorchbot merge this please") Jul 29, 2019
A Contributor left a comment

This doesn't look correct: `~` is not the right suggestion for boolTensor - boolTensor (there are two boolean operands!), though it is, for now, the right one for "1 - boolTensor", which is intended to invert the values.
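
To spell out the distinction (illustrative truth values, not output from this PR):

    // 1 - bool_tensor inverts its values, so suggesting `~` is reasonable there:
    //   1 - true == 0 (false),   1 - false == 1 (true)
    // bool_tensor - bool_tensor has two boolean operands and is not an inversion
    // of either one (e.g. true - true == 0, while false - true has no bool result),
    // which is why the follow-up suggestion below switches to `^` for that case.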

A Contributor left a comment

How about this:

if both operands are bool:
   your error message, but with `^` instead as the suggestion.
if one operand is bool:
   "Subtraction, the `-` operator, with a bool tensor is not supported.  If you are trying to invert a mask, use the `~` or `bitwise_not()` operator instead."

@colesbury colesbury added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and removed the merge-this-please label Jul 29, 2019
@xuhdev xuhdev closed this Jul 29, 2019
@xuhdev xuhdev force-pushed the op/sub-dispatch branch from 6b5e9c4 to 505fa83 on July 29, 2019 19:14
@colesbury (Member)

@xuhdev are you going to adjust the error message for 1 - bool_tensor?

@xuhdev (Collaborator, Author) commented Jul 29, 2019

@colesbury I think I messed up git on my side. I'll open a new PR.
