Conversation

@velikodniy (Contributor)

affine=False in the sources, but the docs say that the default value is True.
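
For context, a minimal sketch (not part of the PR; assumes any PyTorch build with these modules) confirming the discrepancy the PR fixes: InstanceNorm modules are constructed with affine=False unless told otherwise, unlike BatchNorm:

    import torch.nn as nn

    # Source default is affine=False, even though the old docstring claimed True.
    m = nn.InstanceNorm2d(100)
    print(m.affine)          # False
    print(m.weight is None)  # True: no learnable affine parameters are created

    # BatchNorm2d, by contrast, really does default to affine=True.
    bn = nn.BatchNorm2d(100)
    print(bn.affine)         # True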

@ssnl ssnl (Collaborator) left a comment


Thanks!

@soumith soumith merged commit 74d2d56 into pytorch:master Jun 26, 2018
petrex pushed a commit to ROCm/pytorch that referenced this pull request Jun 26, 2018
* upstream/master: (42 commits)
  [c10d] No default device for ProcessGroupGloo (pytorch#8888)
  Fix default values for affine= in the docstrings of InstanceNormXd (pytorch#8895)
  Stop making dynamic allocations of PinnedMemoryAllocator. (pytorch#8896)
  [C++ API]  Rework optimization package (pytorch#8815)
  Mention MPICH_MAX_THREAD_SAFETY=multiple. (pytorch#8580)
  Unify isViewable, handle n-dimensional empty tensors. (pytorch#8883)
  Add pos_weight argument to nn.BCEWithLogitsLoss (pytorch#5660) (pytorch#6856)
  [build] Enable clang-specific warnings only when using clang (pytorch#8869)
  Fix cmake cudnn autodetection (pytorch#8891)
  [c10d] Fix link order for building C++ tests (pytorch#8889)
  directly add_subdirectory(nanopb) from torch CMakeLists (pytorch#8870)
  [C++ API] Bag of fixes (pytorch#8843)
  [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (pytorch#8863)
  Create avg_pool1d in ATen (pytorch#8880)
  throw error when grid_sample is passed unsupported mode (pytorch#8884)
  Allow autograd to work even when the shape of values cannot be determined (pytorch#8641)
  Make at::Tensor::to() const (pytorch#8839)
  [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) onnx/onnx@458c521
  [Caffe2] Fix gradient_check on in-place ops (pytorch#8828)
  Fix as_strided_backward (pytorch#8721)
  ...
@velikodniy velikodniy deleted the fix_instance_norm_docs branch June 20, 2019 14:22
