Conversation

@ssnl (Collaborator) commented Jun 20, 2018

Partially fixes #8626
fixes #8649
fixes the immediate issue in #8577, but we should make these things a hard error. @ezyang commented that this is doable in setUp and tearDown.

@colesbury

@ssnl ssnl force-pushed the reshape_bwd branch 5 times, most recently from 2d1bd9c to c25be47 Compare June 21, 2018 22:05
@ssnl (Collaborator, Author) commented Jun 23, 2018

@fehiepsi this will fix the gradcheck :)
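The gradcheck fix mentioned here covers functions whose backward goes through `as_strided_backward`, such as `expand` followed by `reshape`. A minimal sketch of such a check (the shapes and function here are illustrative, not taken from the PR; note that gradcheck wants double-precision inputs):

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares analytical and numerical gradients, so it needs
# double precision to avoid spurious failures.
x = torch.randn(2, 3, dtype=torch.float64, requires_grad=True)

# expand() produces an overlapping (non-contiguous) view; reshape() of it
# exercises the as_strided-style backward path this PR fixes.
ok = gradcheck(lambda t: t.expand(4, 2, 3).reshape(-1), (x,))
print(ok)  # True when the gradients match
```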

@ssnl ssnl changed the title [wip] Fix as_strided_backward Fix as_strided_backward Jun 23, 2018
check for input overlapping too
[doc] clarify gradcheck behavior when input is overlapping
longer note

# The actual implementations live in Declarations.cwrap. These are just to
# provide default values for storage_offset=self.storage_offset()
- func: as_strided(Tensor self, IntList size, IntList stride) -> Tensor


self: grad
src: grad.gather(dim, index)

- name: select(Tensor self, int64_t dim, int64_t index)


src.as_strided(sizes, strides, storage_offset - base.storage_offset()).copy_(grad);
return src;
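The `as_strided(...).copy_(grad)` line above re-carves the view's window out of the base's gradient buffer and writes the incoming gradient into it. A hedged Python analogue (the view geometry below is illustrative, not from the PR):

```python
import torch

base_grad = torch.zeros(10)                 # gradient buffer for the view's base
sizes, strides, offset = (2, 3), (3, 1), 2  # illustrative view geometry
grad = torch.ones(2, 3)                     # incoming gradient for the view

# as_strided recreates the same window inside base_grad, so copy_ writes
# the gradient into exactly the elements the original view covered.
base_grad.as_strided(sizes, strides, offset).copy_(grad)
print(base_grad)  # zeros everywhere except elements 2..7, which are 1
```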

// NOTE [ as_strided Backward ]


This check will likely fail if :attr:`input` is of less precision, e.g.,
``FloatTensor``.
.. warning::


@fehiepsi (Contributor)

Thanks for notifying me @ssnl!


// NOTE [ as_strided Backward ]
//
// `storage_offset` is ignored for simplicity in this note. If you just want the


@ssnl (Collaborator, Author) commented Jun 25, 2018

Since all references to these notes are within this file, I'll just keep them here. When we actually pull in mem_overlap.c, the second note can hopefully go away. It's also unclear which folder this note should live in, since this file is a template (but the note is not). Functions.cpp is quite long anyway.
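The overlapping-input case the note (and this PR) covers can be seen with a strided view that maps the same storage element to multiple output positions; the backward must accumulate, not overwrite. A minimal sketch with an illustrative geometry:

```python
import torch

x = torch.randn(3, dtype=torch.float64, requires_grad=True)

# (2, 2) view with strides (1, 1): output positions map to storage
# indices [[0, 1], [1, 2]], so element 1 is covered twice (overlap).
y = x.as_strided((2, 2), (1, 1))
y.sum().backward()

# The overlapped middle element accumulates both contributions: [1., 2., 1.]
print(x.grad)
```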

@ssnl ssnl merged commit 838fb87 into pytorch:master Jun 25, 2018
@ssnl ssnl deleted the reshape_bwd branch June 25, 2018 22:17
petrex pushed a commit to ROCm/pytorch that referenced this pull request Jun 26, 2018
* upstream/master: (42 commits)
  [c10d] No default device for ProcessGroupGloo (pytorch#8888)
  Fix default values for affine= in the docstrings of InstanceNormXd (pytorch#8895)
  Stop making dynamic allocations of PinnedMemoryAllocator. (pytorch#8896)
  [C++ API]  Rework optimization package (pytorch#8815)
  Mention MPICH_MAX_THREAD_SAFETY=multiple. (pytorch#8580)
  Unify isViewable, handle n-dimensional empty tensors. (pytorch#8883)
  Add pos_weight argument to nn.BCEWithLogitsLoss (pytorch#5660) (pytorch#6856)
  [build] Enable clang-specific warnings only when using clang (pytorch#8869)
  Fix cmake cudnn autodetection (pytorch#8891)
  [c10d] Fix link order for building C++ tests (pytorch#8889)
  directly add_subdirectory(nanopb) from torch CMakeLists (pytorch#8870)
  [C++ API] Bag of fixes (pytorch#8843)
  [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (pytorch#8863)
  Create avg_pool1d in ATen (pytorch#8880)
  throw error when grid_sample is passed unsupported mode (pytorch#8884)
  Allow autograd to work even when the shape of values cannot be determined (pytorch#8641)
  Make at::Tensor::to() const (pytorch#8839)
  [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) onnx/onnx@458c521
  [Caffe2] Fix gradient_check on in-place ops (pytorch#8828)
  Fix as_strided_backward (pytorch#8721)
  ...
Tensor slice_backward(Tensor grad, IntList input_sizes, int64_t dim, int64_t start, int64_t end, int64_t step) {
  auto grad_input = at::zeros(input_sizes, grad.type());
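The C++ fragment above zero-fills a gradient for the full input and then scatters the incoming gradient back into the sliced region. A hedged Python sketch of that shape (the helper name and example values are illustrative, not from the PR):

```python
import torch

def slice_backward_sketch(grad, input_sizes, dim, start, end, step):
    # Zero gradient for the whole input, then write the incoming gradient
    # into the elements the forward slice selected.
    grad_input = torch.zeros(input_sizes, dtype=grad.dtype)
    idx = [slice(None)] * len(input_sizes)
    idx[dim] = slice(start, end, step)
    grad_input[tuple(idx)] = grad
    return grad_input

# Slicing a length-5 vector as [1:5:2] selects indices 1 and 3, so only
# those positions receive gradient.
g = slice_backward_sketch(torch.ones(2), (5,), 0, 1, 5, 2)
print(g)  # [0., 1., 0., 1., 0.]
```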


@fehiepsi fehiepsi mentioned this pull request Jul 6, 2018
eellison pushed a commit to eellison/pytorch that referenced this pull request Jul 10, 2018
* make as_strided safer

* patching as_strided; and stop using it in backward

* Test a simple case in as_strided_backward

* a long note

* remove boundary checks of as_strided; implement slow path

* wip

* fix as_strided backward when input is overlapping

check for input overlapping too
[doc] clarify gradcheck behavior when input is overlapping
longer note

* fix a deprecation warning in test_autograd

* nits


Successfully merging this pull request may close these issues.

[pytorch] torch.autograd.gradcheck failed with torch.stack, torch.cat
as_strided_backward in expanded case & dynamically created grad_fn for views

5 participants