
Conversation

@nairbv (Collaborator) commented May 2, 2019

Adds minimal support for sparse half embedding on CPU (backward() and sum()). The previous PR only enabled sum().backward() of sparse half embeddings on CUDA. We can add more operators if this proves useful, but this PR currently skips most CPU operator implementations that depend on BLAS or Vec256, since those paths don't have float16 support.

After #19695:

>>> import torch
>>> a = torch.nn.Embedding(3, 4, sparse=True).half()
>>> a(torch.LongTensor([1, 0])).backward(torch.ones(2, 4).half())

This gave: RuntimeError: _th_index_select not supported on CPUType for Half,
and a(torch.LongTensor([1, 0])).sum() gave an error that sum_cpu was not available.

Builds on: #19695
No need to re-review commits before 29a72ea.
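
With this change, the intent is that both calls above run without error. A minimal sketch of the intended usage (expectations shown as comments; not verified output from this branch, which was ultimately closed, see below):

>>> import torch
>>> emb = torch.nn.Embedding(3, 4, sparse=True).half()
>>> out = emb(torch.LongTensor([1, 0]))
>>> out.backward(torch.ones(2, 4).half())  # sparse half gradient on CPU
>>> emb.weight.grad.is_sparse              # expected: True
>>> emb.weight.grad.dtype                  # expected: torch.float16
>>> emb(torch.LongTensor([1, 0])).sum()    # half sum() on CPU, expected to succeed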

@pytorchbot added labels May 2, 2019:
- module: cpu (CPU specific problem (e.g., perf, algorithm))
- module: internals (Related to internal abstractions in c10 and ATen)
- module: nn (Related to torch.nn)
- module: operators
- module: pybind (Related to our Python bindings / interactions with other Python libraries)
- module: sparse (Related to torch.sparse)
@nairbv (Collaborator, Author) commented May 8, 2019

While working on #19695, we decided against CPU Half sum support due to the slowness of half-add operations.
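
The cost is easy to illustrate with a rough timing sketch (a hypothetical benchmark, not from this PR; it assumes a build where CPU half element-wise ops exist, and numbers will vary by machine):

import timeit
import torch

# Compare element-wise add throughput for float32 vs. float16 on CPU.
# CPU float16 math is typically emulated by round-tripping through
# float32, so it runs well below float32 speed.
x32 = torch.randn(1 << 20)
x16 = x32.half()
t32 = timeit.timeit(lambda: x32 + x32, number=100)
t16 = timeit.timeit(lambda: x16 + x16, number=100)
print(f"float32 add: {t32:.3f}s   float16 add: {t16:.3f}s")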

@nairbv closed this May 8, 2019
