
Conversation

@li-roy (Contributor) commented on Jun 19, 2018

Parameter was inheriting Tensor's `__reduce_ex__()`, which broke pickling of Parameter objects; overriding it lets unpickling rebuild a proper Parameter.

Fixes #8077.

cc: @zou3519

def __reduce_ex__(self, proto):
    tensor = torch._utils._rebuild_tensor_v2(self.storage(), self.storage_offset(), tuple(self.size()),
                                             self.stride(), self.requires_grad, self._backward_hooks)
    return (Parameter, (tensor, self.requires_grad))
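
For context, pickle reconstructs an object by calling the first element of the tuple returned by `__reduce_ex__` with the second element as its arguments, which is why returning `(Parameter, ...)` here makes unpickling produce a Parameter again. A minimal sketch of that contract (illustrative, not part of the PR):

import torch
from torch.nn import Parameter

p = Parameter(torch.randn(3))
# pickle stores (func, args) and rebuilds the object as func(*args):
func, args = p.__reduce_ex__(2)   # with this fix: (Parameter, (tensor, True))
rebuilt = func(*args)             # i.e. Parameter(tensor, True) -> a Parameter again
assert isinstance(rebuilt, Parameter) and rebuilt.requires_grad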

def __repr__(self):
    return 'Parameter containing:\n' + super(Parameter, self).__repr__()

def __reduce_ex__(self, proto):


import sys

if sys.version_info[0] == 2:
    import cPickle as pickle  # Python 2
else:
    import pickle             # Python 3
a = torch.nn.Parameter(torch.randn(5, 5))
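
The remainder of the test is not shown here; a plausible continuation (an assumption, not the PR's actual test code) would round-trip `a` through pickle and check that the type and flags survive:

import io

# continues from the snippet above (`pickle`, `torch`, and `a` already defined)
buf = io.BytesIO()
pickle.dump(a, buf)
buf.seek(0)
b = pickle.load(buf)

assert isinstance(b, torch.nn.Parameter)   # a plain Tensor before this fix
assert b.requires_grad == a.requires_grad
assert torch.equal(b, a)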

@ssnl (Collaborator) left a comment

I think this would be good to merge if a new test is added.

@ssnl (Collaborator) left a comment

LGTM! Thanks!

@soumith merged commit 8e4fe5d into pytorch:master on Jun 20, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 20, 2018
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
@elanmart mentioned this pull request on Aug 1, 2018
nehz added a commit to nehz/pytorch that referenced this pull request Sep 10, 2018
Extending pytorch#8633, this allows `requires_grad` to be restored when a model is serialized with `keep_vars=True`.
Potentially rename `keep_vars`; I am guessing the name is a reference to the old `Variable` class?
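
For reference, `keep_vars=True` makes `state_dict()` return the live autograd variables (today's Parameters) rather than detached tensor data, which is why `requires_grad` is meaningful on the saved values. A minimal illustration (assumes a current PyTorch, not code from the referenced commit):

import torch

m = torch.nn.Linear(3, 3)
sd = m.state_dict(keep_vars=True)  # values are the live Parameter objects
print(type(sd['weight']))          # <class 'torch.nn.parameter.Parameter'>
print(sd['weight'].requires_grad)  # True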