
Conversation

@borguz (Contributor) commented May 11, 2018

Edited clip_grad so that it works with a mix of sparse and dense tensors.
Added a sparse argument to pre-trained embedding loading.


 @classmethod
-def from_pretrained(cls, embeddings, freeze=True):
+def from_pretrained(cls, embeddings, freeze=True, sparse=False):
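
A minimal usage sketch of the new argument (the weight matrix below is made up for illustration; from_pretrained, freeze, and sparse are the API shown in the diff above):

import torch
import torch.nn as nn

# Hypothetical pretrained weights: 10 embeddings of dimension 3.
pretrained = torch.randn(10, 3)

# sparse=True makes the weight accumulate sparse gradients that only
# touch the rows actually looked up in the forward pass.
emb = nn.Embedding.from_pretrained(pretrained, freeze=False, sparse=True)

emb(torch.tensor([1, 4, 4])).sum().backward()
print(emb.weight.grad.is_sparse)  # True

Note that sparse gradients are only consumed by a subset of optimizers (optim.SGD and optim.SparseAdam among them).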


 if clip_coef < 1:
     for p in parameters:
-        p.grad.data.mul_(clip_coef.item())
+        p.grad.data.mul_(float(clip_coef))
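
For context, this hunk is the tail of the gradient clipping routine: a single global norm is computed over all gradients, and clip_coef rescales each one in place. Multiplying by a plain Python float rather than a 0-dim tensor is presumably what lets mul_ work uniformly across sparse and dense gradients. A minimal sketch of the same technique under that assumption (the two-layer model is illustrative, not from the PR):

import torch
import torch.nn as nn

# An Embedding with sparse gradients feeding a Linear with dense ones.
emb = nn.Embedding(10, 3, sparse=True)
lin = nn.Linear(3, 2)

lin(emb(torch.tensor([1, 2, 7]))).sum().backward()

def grad_sq_norm(g):
    # For sparse gradients, the 2-norm over the stored nonzero values
    # equals the 2-norm of the full tensor.
    g = g.coalesce().values() if g.is_sparse else g
    return float(g.norm(2)) ** 2

params = [p for p in list(emb.parameters()) + list(lin.parameters())
          if p.grad is not None]
total_norm = sum(grad_sq_norm(p.grad) for p in params) ** 0.5

max_norm = 1.0
clip_coef = max_norm / (total_norm + 1e-6)
if clip_coef < 1:
    for p in params:
        p.grad.mul_(clip_coef)  # float scalar: valid for both layouts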


@soumith merged commit 5f96a2d into pytorch:master May 11, 2018
onnxbot added a commit to onnxbot/onnx-fb-universe that referenced this pull request May 11, 2018
@borguz deleted the sparse_embedding branch May 11, 2018 14:54
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018
* Add sparse gradient option to pretrained embedding

* Add sparse gradient option to pretrained embedding

* Trailing white space