dispatch max_pools with no indices, expose max_pools to torch namespace #19449
Conversation
torch/onnx/symbolic.py
Outdated
    padding = padding + tuple(numpy.add(padding_ceil, padding))
else:
    padding = padding * 2
r, indices = g.op("MaxPool", input, outputs=2,
The second output of ONNX MaxPool is optional, so when return_indices is false we can generate the ONNX MaxPool op with only one output.
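A minimal sketch of what the suggested change could look like. This is illustrative only, not the exact code that landed: the symbolic's signature is simplified, and the `_i`-suffixed attribute names just follow the usual `g.op` convention for integer attributes.

```python
# Sketch only: emit ONNX MaxPool with a single output when the caller
# does not ask for indices. Signature and attribute handling are
# simplified relative to the real torch/onnx/symbolic.py.
def max_pool2d(g, input, kernel_size, stride, padding, dilation,
               ceil_mode, return_indices):
    if not return_indices:
        # The second (indices) output of ONNX MaxPool is optional,
        # so request just one output here.
        return g.op("MaxPool", input,
                    kernel_shape_i=kernel_size,
                    pads_i=padding * 2,
                    strides_i=stride)
    # Otherwise request both outputs, as the old code always did.
    r, indices = g.op("MaxPool", input, outputs=2,
                      kernel_shape_i=kernel_size,
                      pads_i=padding * 2,
                      strides_i=stride)
    return r, indices
```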
torch/onnx/symbolic.py
Outdated
    return _unimplemented(name, "dilation")
if stride is None:
    stride = kernel_size
padding = tuple(tule_fn(padding))
typo? `tule_fn` should presumably be `tuple_fn`.
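For context, the `tuple_fn` here is presumably a broadcast helper along the lines of torch.nn.modules.utils._pair, which expands a scalar pooling argument to an n-tuple. A self-contained sketch of that behavior, with hypothetical names:

```python
# Hypothetical stand-in for the tuple_fn used above: broadcast a
# scalar to an n-element tuple and pass sequences through unchanged,
# the way torch.nn.modules.utils._pair does for 2-D pooling args.
def make_tuple_fn(n):
    def tuple_fn(x):
        if isinstance(x, (list, tuple)):
            return tuple(x)
        return (x,) * n
    return tuple_fn

_pair = make_tuple_fn(2)
assert _pair(3) == (3, 3)
assert _pair((1, 2)) == (1, 2)
```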
facebook-github-bot
left a comment
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
bddppq
left a comment
LGTM
dispatch max_pools with no indices, expose max_pools to torch namespace (#19449)

Summary: In the functional interfaces we do boolean dispatch, but always to max_pool*d_with_indices. This changes it to emit the max_pool*d op instead when indices are not needed, so the with_indices ops don't have to be exposed to different backends (for the JIT). It also binds max_pool*d to the torch namespace, which matches the behavior of avg_pool*d.

Pull Request resolved: pytorch/pytorch#19449
Differential Revision: D15016839
Pulled By: wanchaol
fbshipit-source-id: f77cd5f0bcd6d8534c1296d89b061023a8288a2c
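To make the dispatch concrete, here is a rough sketch of routing on return_indices; this is illustrative only, not the actual torch.nn.functional source, and the trailing usage lines are just an example.

```python
# Rough sketch of the boolean dispatch described above; not the
# actual torch.nn.functional implementation.
import torch

def max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1,
               ceil_mode=False, return_indices=False):
    stride = stride if stride is not None else kernel_size
    if return_indices:
        # (output, indices) variant, needed e.g. for max_unpool2d.
        return torch.max_pool2d_with_indices(
            input, kernel_size, stride, padding, dilation, ceil_mode)
    # Plain variant; with this PR it is also bound as torch.max_pool2d,
    # mirroring torch.avg_pool2d.
    return torch.max_pool2d(
        input, kernel_size, stride, padding, dilation, ceil_mode)

x = torch.randn(1, 3, 8, 8)
out = max_pool2d(x, 2)                             # output only
out, idx = max_pool2d(x, 2, return_indices=True)   # output + indices
```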