[ONNX] Update ONNX export of torch.where to support ByteTensor as input. #42264
Conversation
💊 CI failures summary and remediations

As of commit 9530e39 (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚

🚧 1 fixed upstream failure: these were probably caused by upstream breakages that were already fixed. Please rebase on the
neginraoof left a comment:
LGTM. Thanks!
I will look at the test failure.
Force-pushed from bf5bb2e to e15638f
@bzinodev - All the legitimate test failures are fixed and passing. I have also rebased the PR. The remaining failures (doc build and test) are unrelated to this PR.
@bzinodev - I feel that this is ready for your review and merge.
facebook-github-bot left a comment:
@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
`torch.where` supports `ByteTensor` and `BoolTensor` types for the first input argument (the `condition` predicate). Currently, the ONNX exporter assumes that the first argument is a `BoolTensor`. This PR updates the export of `torch.where` to correctly support export when the first argument is a `ByteTensor`.
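A minimal sketch of the behavior this change covers (illustrative only, not the exporter's actual symbolic code): the ONNX `Where` operator requires a boolean condition, so exporting a model whose `condition` is a `ByteTensor` effectively needs a cast to bool first. In eager mode that is equivalent to:

```python
import torch

# ByteTensor condition: nonzero entries mean "take from x".
cond_byte = torch.tensor([1, 0, 1], dtype=torch.uint8)
x = torch.tensor([10.0, 20.0, 30.0])
y = torch.tensor([-1.0, -1.0, -1.0])

# Casting the ByteTensor condition to bool mirrors the Cast the
# exporter inserts before the ONNX Where op.
out = torch.where(cond_byte.to(torch.bool), x, y)
print(out.tolist())  # [10.0, -1.0, 30.0]
```

With this PR, a model containing such a `torch.where` call can be exported without the user having to insert the bool cast manually.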