Clarify mp note about sharing a tensor's grad field. #8688
Merged
Conversation
soumith reviewed Jun 20, 2018
When a :class:`~torch.Tensor` is sent to another process, the
:attr:`~torch.Tensor` data is shared. If :attr:`torch.Tensor.grad` is
not ``None``, it is also shared. If :attr:`torch.Tensor.grad` is ``None``,
it is not shared and each shared copy of the :class:`~torch.Tensor`
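A minimal sketch of the behavior this note describes, assuming PyTorch is installed (the `child` and `demo` helpers are illustrative names, not part of the library): the tensor's data is placed in shared memory before sending, so the child's in-place write is visible to the parent, while a `.grad` created in the child stays process-local because `grad` was ``None`` when the tensor was sent.

```python
import torch
import torch.multiprocessing as mp

def child(t):
    # The tensor's storage is shared, so this in-place write is
    # visible to the parent process.
    t.data.add_(1)
    # t.grad was None when t was sent, so this .grad is created
    # fresh in the child and stays process-local.
    t.grad = torch.zeros_like(t.data)

def demo():
    t = torch.zeros(2, requires_grad=True)
    t.share_memory_()  # move the data to shared memory before sending
    p = mp.Process(target=child, args=(t,))
    p.start()
    p.join()
    return t

if __name__ == "__main__":
    t = demo()
    print(t.data)  # reflects the child's in-place add (data is shared)
    print(t.grad)  # None in the parent: the child's .grad was not shared
```

Whether the data write actually propagates depends on the tensor being shared before the send; the `.grad` created in the child is never propagated back either way.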
ezyang reviewed Jun 20, 2018
not ``None``, it is also shared. After a :class:`~torch.Tensor` without
a :attr:`torch.Tensor.grad` field is sent to the other process, it
creates a standard process-specific ``.grad`` :class:`~torch.Tensor` that
is not automatically shared across all processes like how the
soumith approved these changes Jun 20, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request on Jun 20, 2018:
* upstream/master: (92 commits)
  * more formatting (pytorch#8701)
  * Fix pytorch#8692 (pytorch#8699)
  * Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  * Shard test_nn to reduce runtime for each test target (pytorch#8678)
  * Create at::tensor (pytorch#8475)
  * Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  * Add owner rule for cpp_extension.py (pytorch#8700)
  * fix formatting in :math: in fold docstring (pytorch#8696)
  * Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  * Implement flatten function (pytorch#8578)
  * Created Tensor::to functions (pytorch#8643)
  * Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  * Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  * Export ProcessGroupGloo options to Python (pytorch#8664)
  * Fix build error in pybind_state_ideep (pytorch#8684)
  * Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  * Improve win-build.sh for local build (pytorch#8674)
  * don't do unnecessary copies for bernoulli_ (pytorch#8682)
  * Use parallel if get_num_threads 0 (pytorch#8677)
  * Fix serialization for Parameters (pytorch#8633)
  * ...
petrex pushed a commit to petrex/pytorch that referenced this pull request on Jun 21, 2018:
* Clarify mp note about sharing a tensor's grad field.
* Address comments
* Address comments
Before, the note makes it sound like if `tensor` is shared, then `tensor.grad` will also be shared. This is not the case if `tensor.grad` is `None`.
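If a program does need every process to see the same gradient buffer, one workaround implied by the clarified note is to populate `.grad` before sending, so that the "grad is not ``None``" sharing path applies. The `share_with_grad` helper below is a hypothetical sketch, not a PyTorch API, and assumes PyTorch is installed:

```python
import torch

def share_with_grad(t):
    """Hypothetical helper: ensure both t's data and its .grad live in
    shared memory before t is sent to another process."""
    if t.grad is None:
        # Allocate the grad buffer up front so it exists at send time.
        t.grad = torch.zeros_like(t.data)
    t.grad = t.grad.share_memory_()  # share the grad buffer
    return t.share_memory_()         # share the data; returns t
```

With this, a tensor sent through `torch.multiprocessing` carries a non-``None`` grad, so per the note the grad is shared rather than recreated process-locally in each receiver.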