
Conversation

@zou3519 (Contributor) commented Jun 20, 2018

Before this change, the note made it sound like sharing a tensor implies that `tensor.grad` is also shared. That is not the case when `tensor.grad` is `None`.

When a :class:`~torch.Tensor` is sent to another process, the
:class:`~torch.Tensor` data is shared. If :attr:`torch.Tensor.grad` is
not ``None``, it is also shared. If :attr:`torch.Tensor.grad` is ``None``,
it is not shared, and each shared copy of the :class:`~torch.Tensor`
will end up with its own process-specific ``.grad``.


not ``None``, it is also shared. After a :class:`~torch.Tensor` without
a :attr:`torch.Tensor.grad` field is sent to the other process, it
creates a standard process-specific ``.grad`` :class:`~torch.Tensor` that
is not automatically shared across all processes, unlike how the
:class:`~torch.Tensor`'s data has been shared.
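
To make the distinction concrete, here is a minimal sketch of the documented behavior (not part of this PR; the `worker` function and variable names are hypothetical). It assumes a CPU tensor placed in shared memory via `share_memory_()` and the `spawn` start method:

```python
# Minimal sketch: tensor data is shared across processes, but a .grad
# that was None when the tensor was sent stays process-specific.
import torch
import torch.multiprocessing as mp


def worker(t):
    # The tensor's data lives in shared memory, so this write is
    # visible in the parent process.
    t.data.fill_(1.0)
    # t.grad was None in the parent when t was sent, so this .grad is
    # a standard process-local tensor; the parent will not see it.
    t.grad = torch.zeros_like(t)


if __name__ == "__main__":
    mp.set_start_method("spawn")
    t = torch.zeros(3, requires_grad=True)
    t.share_memory_()  # shares the data; t.grad is still None

    p = mp.Process(target=worker, args=(t,))
    p.start()
    p.join()

    print(t)       # tensor([1., 1., 1.], ...) -- child's write is visible
    print(t.grad)  # None -- the child's .grad was process-specific
```

Because `t.grad` was `None` when the tensor crossed the process boundary, the child's assignment to `.grad` stays local; only the write to the shared data is visible to the parent.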


@zou3519 merged commit b4cd9f2 into pytorch:master on Jun 20, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 20, 2018
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 21, 2018
* Clarify mp note about sharing a tensor's grad field.

* Address comments

* Address comments
