
Conversation

@colesbury (Member)

Previously, running DataParallel in no_grad mode would change the
requires_grad property of the network's parameters to False. The issue
is that Broadcast returns aliases of the inputs for the source device.
In no_grad mode, it would detach these inputs in-place.

Fixes #5851
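
A minimal repro sketch of the bug described above (my own illustration, not code from this PR), assuming a machine with at least two CUDA devices so that DataParallel actually goes through Broadcast:

```python
import torch
import torch.nn as nn

# Hypothetical example: a single Linear layer wrapped in DataParallel.
model = nn.DataParallel(nn.Linear(10, 10).cuda())
assert all(p.requires_grad for p in model.parameters())

# An inference pass in no_grad mode. Broadcast returned aliases of the
# parameters for the source device and, in no_grad mode, detached them
# in-place, flipping their requires_grad flags to False.
with torch.no_grad():
    model(torch.randn(8, 10).cuda())

# Before this fix this assertion could fail, silently breaking any
# subsequent training; after the fix it passes.
assert all(p.requires_grad for p in model.parameters())
```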

@ezyang ezyang merged commit d11b7fb into pytorch:master Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
…simple") moving average" (#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)"

This reverts commit 99b1f6c.
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)

jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
…simple") moving average" (pytorch#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (pytorch#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (pytorch#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (pytorch#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (pytorch#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (pytorch#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (pytorch#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (pytorch#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (pytorch#5766)"

This reverts commit 99b1f6c.
@colesbury colesbury deleted the broadcast branch March 23, 2018 15:48
@cksajil cksajil mentioned this pull request May 22, 2021