Add include_self flag to scatter_reduce #74607
`Tensor.scatter_reduce_(int64 dim, Tensor index, Tensor src, str reduce, *, bool include_self=True)`
- Add an argument `include_self` which indicates whether the value in the `self` Tensor at a given position is included in the reduction with the elements from `src` scattered to that position. For
`I_self = {all indices of self}`
`I_src = {all indices of src}`
`S = {indices of self modified by scatter}`
`self_indices_to_src_indices : I_self --> I_src` maps an index in `self` to the tuple of indices in `src` scattered to that index of `self`
Then for `s ∈ S` and `t ∈ I_self \ S`, when `include_self=False`
`self[s] = reduction_op(src[self_indices_to_src_indices[s]])`
`self[t] = self[t]`
and when `include_self=True` (the regular `scatter(reduce=op)` behavior)
`self[s] = reduction_op(self[s], src[self_indices_to_src_indices[s]])`
`self[t] = self[t]`
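For concreteness, here is a small sketch of these semantics using the proposed API (names follow the signature above; printed values assume `reduce="sum"`):

```python
import torch

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])
base = torch.tensor([10., 10., 10.])  # position 2 is never scattered to, i.e. t ∈ I_self \ S

# include_self=True: the base values participate in the reduction
# self[0] = 10 + 1 + 2, self[1] = 10 + 3 + 4, self[2] unchanged
print(base.clone().scatter_reduce_(0, index, src, "sum", include_self=True))
# tensor([13., 17., 10.])

# include_self=False: only the scattered src values are reduced
# self[0] = 1 + 2, self[1] = 3 + 4, self[2] unchanged
print(base.clone().scatter_reduce_(0, index, src, "sum", include_self=False))
# tensor([ 3.,  7., 10.])
```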
The [`optional_out` case of pytorch_scatter.scatter](https://github.com/rusty1s/pytorch_scatter/blob/master/csrc/scatter.cpp#L32) can then be handled by
`torch.zeros(shape).scatter_reduce_(dim, index, src, reduce, include_self=False)`
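A sketch of that equivalence (assuming `torch_scatter` is installed; `dim_size` here is just shorthand for the output length along `dim`):

```python
import torch
# from torch_scatter import scatter  # the reference implementation being replaced

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])
dim_size = 2  # output length along dim 0

# torch_scatter with out=None reduces only the src elements:
# expected = scatter(src, index, dim=0, dim_size=dim_size, reduce="sum")

# Native equivalent: start from zeros and exclude them from the reduction
out = torch.zeros(dim_size).scatter_reduce_(0, index, src, "sum", include_self=False)
print(out)  # tensor([3., 7.])
```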
The next step is to move this logic into the kernel.
@pytorchbot merge this please

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: simonhollis, davidxili, xinyang0, Jack-Khuu, sidneyfletcher, ...

@pytorchbot merge this please

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: shz117, chaekit, frankseide, anirbanraywork, kavoor, ...

@pytorchbot merge this

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: lessw2020, bilalsal, sluks, brianjo, wenleix, ...

@pytorchbot merge this

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: z-a-f, kit1980, jnkwok1, tktrungna, janeyx99, ...

@pytorchbot merge this

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: andrewconnors, aazzolini, larryliu0820, ziky90, bradleyhd, ...

@pytorchbot merge this

Hey @mikaylagawarecki.
Summary: Pull Request resolved: #74607
Approved by: https://github.com/cpuhrsch
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/e9a8e6f74ac037ed3a16b99d0bd48bdaafc73825
Reviewed By: b0noI
Differential Revision: D35404309
Pulled By: mikaylagawarecki
fbshipit-source-id: 8cf9158b04344fc583dae80b6829019713f5366d