Conversation

@mikaylagawarecki
Contributor

@mikaylagawarecki commented Mar 23, 2022

Stack from ghstack:

`Tensor.scatter_reduce_(int64 dim, Tensor index, Tensor src, str reduce, *, bool include_self=True)`

  • Add an argument `include_self` that indicates whether the value already in the `self` Tensor at a given position is included in the reduction with the elements from `src` scattered to that position. For
    `I_self = {all indices of self}`
    `I_src = {all indices of src}`
    `S = {indices of self modified by the scatter}`
    `self_indices_to_src_indices : I_self --> I_src`, which maps each index of `self` to the tuple of indices in `src` scattered to that index of `self`,
    then for `s ∈ S` and `t ∈ I_self \ S`, when `include_self=False`
    `self[s] = reduction_op(src[self_indices_to_src_indices[s]])`
    `self[t] = self[t]`
    and when `include_self=True` (regular `scatter(reduce=op)` behavior)
    `self[s] = reduction_op(self[s], src[self_indices_to_src_indices[s]])`
    `self[t] = self[t]`
    A concrete example is sketched below.
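
For concreteness, here is a minimal sketch of the two behaviors using made-up values (the shapes, values, and `reduce="sum"` choice are illustrative only, not taken from this PR):

```python
import torch

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])
self_t = torch.tensor([10., 20.])

# include_self=True (regular scatter(reduce=op) behavior): the original value
# of self at each scattered position participates in the reduction.
out_incl = self_t.clone().scatter_reduce_(0, index, src, reduce="sum", include_self=True)
# out_incl == tensor([13., 27.])   i.e. (10 + 1 + 2, 20 + 3 + 4)

# include_self=False: only the scattered src elements are reduced;
# positions of self that are never indexed keep their original values.
out_excl = self_t.clone().scatter_reduce_(0, index, src, reduce="sum", include_self=False)
# out_excl == tensor([3., 7.])     i.e. (1 + 2, 3 + 4)
```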

The [`optional_out` case of pytorch_scatter.scatter](https://github.com/rusty1s/pytorch_scatter/blob/master/csrc/scatter.cpp#L32) can then be handled by
`torch.zeros(shape).scatter_reduce_(dim, index, src, reduce, include_self=False)` (see the sketch below).
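
A hedged sketch of why this covers the optional-out case: scattering into zeros with `include_self=False` means the initial zeros never enter the reduction, which matters for reductions like `prod` where an included zero would poison the result (the values below are illustrative only):

```python
import torch

index = torch.tensor([0, 0, 1, 1])
src = torch.tensor([2., 3., 4., 5.])

# Including the initial zeros in a product zeroes out every reduced position.
with_self = torch.zeros(2).scatter_reduce_(0, index, src, reduce="prod", include_self=True)
# with_self == tensor([0., 0.])      i.e. (0 * 2 * 3, 0 * 4 * 5)

# Excluding them reduces only the scattered src values, matching the
# "no out tensor provided" behavior of pytorch_scatter.
without_self = torch.zeros(2).scatter_reduce_(0, index, src, reduce="prod", include_self=False)
# without_self == tensor([6., 20.])  i.e. (2 * 3, 4 * 5)
```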

The next step is to move this logic into the kernel.

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 23, 2022

💊 CI failures summary and remediations

As of commit dd1cbe1 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@mikaylagawarecki marked this pull request as ready for review March 24, 2022 17:40
@mikaylagawarecki
Contributor Author

@pytorchbot merge this please

@pytorchmergebot
Collaborator

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: simonhollis, davidxili, xinyang0, Jack-Khuu, sidneyfletcher, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2092961119

@mikaylagawarecki
Contributor Author

@pytorchbot merge this please

@pytorchmergebot
Collaborator

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: shz117, chaekit, frankseide, anirbanraywork, kavoor, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2092988702

@mikaylagawarecki
Contributor Author

@pytorchbot merge this

@pytorchmergebot
Collaborator

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: lessw2020, bilalsal, sluks, brianjo, wenleix, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2093029394

@mikaylagawarecki
Contributor Author

@pytorchbot merge this

@pytorchmergebot
Collaborator

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: z-a-f, kit1980, jnkwok1, tktrungna, janeyx99, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2096968164

@cpuhrsch
Contributor

cpuhrsch commented Apr 5, 2022

@pytorchbot merge this

@pytorchmergebot
Collaborator

Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: andrewconnors, aazzolini, larryliu0820, ziky90, bradleyhd, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2097220915

@malfet
Contributor

malfet commented Apr 5, 2022

@pytorchbot merge this

@github-actions
Contributor

github-actions bot commented Apr 5, 2022

Hey @mikaylagawarecki.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@mikaylagawarecki added the release notes: sparse label Apr 5, 2022
@cpuhrsch added the topic: new features label Apr 5, 2022
facebook-github-bot pushed a commit that referenced this pull request Apr 7, 2022
Summary:
Pull Request resolved: #74607

Approved by: https://github.com/cpuhrsch

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/e9a8e6f74ac037ed3a16b99d0bd48bdaafc73825

Reviewed By: b0noI

Differential Revision: D35404309

Pulled By: mikaylagawarecki

fbshipit-source-id: 8cf9158b04344fc583dae80b6829019713f5366d
@facebook-github-bot facebook-github-bot deleted the gh/mikaylagawarecki/48/head branch April 8, 2022 23:01