
Conversation

@ssnl (Collaborator) commented Oct 23, 2018

Reopen of #11253 after fixing a bug in index_select

@facebook-github-bot (Contributor) left a comment:


SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

ssnl added 23 commits October 23, 2018 18:55
Add `output_differentiability` support in derivatives.yaml. Also relax the
check that gradient formulas need to use all grad outputs: to compute a
particular grad_input[i], only some of the grad_outputs may be needed.

add sparse get_values and make it back-prop-able

Make get_values back-prop-able

make indices and values view functions

Make all sparse_coo ctors dispatch to a native function,
_sparse_new_with_dims_and_tensor. Remove the dispatch mechanism
on the native_* ctors, e.g., native_sparse_coo_tensor. Now
all the code lives in functions like sparse_coo_tensor.

Make sparse coo ctor a view function

Make _newFlattenedIndices a native function

Implement sparse_constructor_backward

Get rid of NNZ optimization

Move native/sparse/SparseUtils.h to SparseTensorUtils.h

add getter docs

make _set_coalesced a native fn and call it _coalesced_

sparseDims -> sparse_dim; denseDims -> dense_dim

update test_print expect file because _indices output no longer has grad_fn

infer type first

get_indices -> indices; get_values -> values

purge options from sparse_coo_tensor with indices and values tensors

Fix coalesced tests; update prints; use type dispatch for size only ctor

Update note; support nondiff views; update prints

workaround for sparse views and inplace ops

Add has_* for TensorOptions

Fix Python sparse_coo_tensor entry

Fix a CUDA coalesce error; add tests
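
The commits above make the sparse accessors back-prop-able views and rename the dim getters. A minimal sketch of the resulting behavior, using the current torch.sparse API (exact signatures at the time of this PR may have differed):

```python
import torch

i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3., 4., 5.], requires_grad=True)
s = torch.sparse_coo_tensor(i, v, (2, 3))

# Renamed getters: sparseDims -> sparse_dim(), denseDims -> dense_dim().
print(s.sparse_dim(), s.dense_dim())  # 2 0

# values() is a back-prop-able view, so gradients flow back to v
# (values() requires a coalesced tensor; coalesce() is itself differentiable).
s.coalesce().values().sum().backward()
print(v.grad)  # tensor([1., 1., 1.])
```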
@facebook-github-bot (Contributor) left a comment:


SsnL is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ssnl ssnl deleted the sp_val branch October 24, 2018 17:02
zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 24, 2018
Summary:
Reopen of #11253 after fixing bug in index_select
Pull Request resolved: pytorch/pytorch#13001

Differential Revision: D10514987

Pulled By: SsnL

fbshipit-source-id: 399a83a1d3246877a3523baf99aaf1ce8066f33f

From derivatives.yaml:

```yaml
- name: clone(Tensor self)
  self: grad

- name: coalesce(Tensor self)
```
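
The first commit in the list above also adds `output_differentiability`, which lets an entry declare that only some outputs are differentiable. A hypothetical entry showing the option (illustrative only; `my_op` and `my_op_backward` are made-up names, not from this PR's diff):

```yaml
# Hypothetical two-output operator: only the first output is
# differentiable, so no gradient formula is needed for the second.
- name: my_op(Tensor self)
  output_differentiability: [True, False]
  self: my_op_backward(grads[0], self)
```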


And on the autograd codegen:

```python
if output_var in differentiable_output_vars:
    # If `GradMode::is_enabled()` is False, this is a
    # non-differentiable view. Gradients should not flow through.
    is_differentiable = 'true'
```
@ssnl (Collaborator, author) commented:

@albanD Here `is_differentiable` is always `'true'` regardless of GradMode (sorry about the comments, I forgot to update them), while in #12502 this is `'as_view(self, {}, GradMode::is_enabled())'.format(call)`.
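
To spell out the contrast (an illustrative sketch of the strings the codegen would emit; the inner call and argument layout are assumptions, not the actual generated code):

```python
call = "at::_values(self_)"  # hypothetical inner view call

# This PR: the view is unconditionally registered as differentiable.
this_pr = "as_view(self, {}, /*is_differentiable=*/true)".format(call)

# PR #12502: differentiability is decided by GradMode at runtime.
pr_12502 = "as_view(self, {}, GradMode::is_enabled())".format(call)
```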

A collaborator replied:

Alright. Thanks for the info!

