Conversation

gchanan (Contributor) commented Jun 26, 2018

  1. Unifies the two isViewable functions in ATen and TH.
  2. Handles n-dimensional empty tensors in the implementation.
  3. Clarifies some comments.

This requires an extra copy in the TH case, but that will go away.
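
As a quick illustration of item 2 (a hypothetical usage sketch, not code from this PR): an n-dimensional empty tensor has numel() == 0 but a nontrivial shape, and viewing one zero-element shape as another zero-element shape is expected to succeed once such tensors are handled.

#include <ATen/ATen.h>

int main() {
  // A tensor with a zero-sized dimension has no elements but still has
  // three dimensions.
  at::Tensor t = at::empty({2, 0, 3}, at::kFloat);
  // The target shape also has zero elements, so the view should be allowed.
  at::Tensor v = t.view({0, 6});
  return static_cast<int>(v.numel());  // 0
}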

ssnl (Collaborator) commented Jun 26, 2018

Can we unify the THC version somehow too? https://github.com/pytorch/pytorch/blob/master/aten/src/THC/generic/THCTensor.cpp#L242

gchanan (Contributor) commented Jun 26, 2018

Why is there a THC version? :(

ssnl (Collaborator) commented Jun 26, 2018

Yeah, sorry, I forgot about that one earlier. :( We basically duplicated everything for TH & THC, so...

gchanan (Contributor) commented Jun 26, 2018

@pytorchbot retest this please.

ezyang (Contributor) commented Jun 26, 2018

We should probably unify THTensor/THCTensor doubletime, but there seem to be some yaks in the way...

// On a high level,
// 1. separate tensor->size into chunks of dimensions, where the dimensions are
// ``contiguous'' in each chunk, i.e., stride[i] = size[i+1] * stride[i+1]
// 2. view_size must be able to be separated into same number of chunks as tensor->size was separated into,
// where each chunk of view_size has matching ``numel'', i.e., number of subspaces,
// as the corresponding chunk of tensor->size.
at::optional<std::vector<int64_t>>
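
The quoted comment describes a chunk-matching rule: split tensor->size into maximal contiguous chunks, then require view_size to split into the same number of chunks with equal element counts, computing the new strides along the way. A minimal sketch of that idea follows (illustrative names, std::optional in place of at::optional; zero-element and zero-dimensional tensors, which need separate handling and are part of what this PR addresses, are omitted):

#include <cstdint>
#include <optional>
#include <vector>

std::optional<std::vector<int64_t>> compute_view_stride(
    const std::vector<int64_t>& old_size,
    const std::vector<int64_t>& old_stride,
    const std::vector<int64_t>& new_size) {
  std::vector<int64_t> new_stride(new_size.size());
  int64_t view_d = static_cast<int64_t>(new_size.size()) - 1;
  // Stride of the innermost dimension of the current contiguous chunk.
  int64_t chunk_base_stride = old_size.empty() ? 1 : old_stride.back();
  int64_t tensor_numel = 1;  // elements accumulated in the current old chunk
  int64_t view_numel = 1;    // elements matched so far against new_size
  for (int64_t d = static_cast<int64_t>(old_size.size()) - 1; d >= 0; d--) {
    tensor_numel *= old_size[d];
    // A chunk ends where contiguity breaks, i.e. stride[d-1] != size[d] * stride[d].
    if (d == 0 || (old_size[d - 1] != 1 &&
                   old_stride[d - 1] != tensor_numel * chunk_base_stride)) {
      // Consume view dimensions until their numel matches this chunk's numel.
      while (view_d >= 0 &&
             (view_numel < tensor_numel || new_size[view_d] == 1)) {
        new_stride[view_d] = view_numel * chunk_base_stride;
        view_numel *= new_size[view_d];
        view_d--;
      }
      if (view_numel != tensor_numel) return std::nullopt;  // chunk mismatch
      if (d > 0) {  // start the next chunk
        chunk_base_stride = old_stride[d - 1];
        tensor_numel = 1;
        view_numel = 1;
      }
    }
  }
  // Every view dimension must have been consumed.
  if (view_d != -1) return std::nullopt;
  return new_stride;
}

For example, size (4, 6) with stride (6, 1) forms a single contiguous chunk of 24 elements, so it can be viewed as (2, 12); a transposed tensor of size (6, 4) with stride (1, 6) splits into two chunks and cannot be viewed as (24,).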

ezyang (Contributor) left a comment

Always hard to tell what has changed when you move code around and make changes at the same time :)

gchanan (Contributor) commented Jun 26, 2018

@pytorchbot retest this please.

gchanan merged commit 31327dd into pytorch:master Jun 26, 2018
petrex pushed a commit to ROCm/pytorch that referenced this pull request Jun 26, 2018
* upstream/master: (42 commits)
  [c10d] No default device for ProcessGroupGloo (pytorch#8888)
  Fix default values for affine= in the docstrings of InstanceNormXd (pytorch#8895)
  Stop making dynamic allocations of PinnedMemoryAllocator. (pytorch#8896)
  [C++ API]  Rework optimization package (pytorch#8815)
  Mention MPICH_MAX_THREAD_SAFETY=multiple. (pytorch#8580)
  Unify isViewable, handle n-dimensional empty tensors. (pytorch#8883)
  Add pos_weight argument to nn.BCEWithLogitsLoss (pytorch#5660) (pytorch#6856)
  [build] Enable clang-specific warnings only when using clang (pytorch#8869)
  Fix cmake cudnn autodetection (pytorch#8891)
  [c10d] Fix link order for building C++ tests (pytorch#8889)
  directly add_subdirectory(nanopb) from torch CMakeLists (pytorch#8870)
  [C++ API] Bag of fixes (pytorch#8843)
  [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (pytorch#8863)
  Create avg_pool1d in ATen (pytorch#8880)
  throw error when grid_sample is passed unsupported mode (pytorch#8884)
  Allow autograd to work even when the shape of values cannot be determined (pytorch#8641)
  Make at::Tensor::to() const (pytorch#8839)
  [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) onnx/onnx@458c521
  [Caffe2] Fix gradient_check on in-place ops (pytorch#8828)
  Fix as_strided_backward (pytorch#8721)
  ...
ezyang added a commit to ezyang/pytorch that referenced this pull request Jun 28, 2018
ssnl mentioned this pull request Jun 29, 2018