
Conversation

@ssnl (Collaborator) commented Jan 8, 2018

  1. Switched to conv1d in stft.
  2. Added code to do parameter expansion in _convolution in ATen, fixing #4464 ([ppc64le] test_cuda.py fails with newly added test test_stft); a sketch of what this enables follows this list.
  3. @myleott noticed a bug in #4444 (Fix setting using running stats in InstanceNorm*d) where the name of an attribute collides with the name of a method in instancenorm.py. This fixes it.
  4. Made the error message in check_input_shape_forward clearer, to avoid confusing messages like
Expected 4-dimensional input for 4-dimensional weight [10], 
but got input of size [11, 1, 32, 32] instead

from https://discuss.pytorch.org/t/autograd-grad-dimension-error/12083.
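A minimal sketch of what the parameter expansion in item 2 enables at the ATen level. This is illustrative only and written against the current ATen C++ API (the factory functions and the at::conv2d overload used here are assumptions, not code quoted from this PR): a single-element stride/padding/dilation list is broadcast inside _convolution to the number of spatial dimensions implied by the input, so {2} behaves like {2, 2} for a 2-d convolution.

#include <ATen/ATen.h>

int main() {
  // (N, C, H, W) input and a matching 4-d weight for a 2-d convolution.
  at::Tensor input  = at::randn({8, 3, 32, 32});
  at::Tensor weight = at::randn({16, 3, 3, 3});
  at::Tensor bias   = at::zeros({16});

  // With the expansion in _convolution, the singleton lists below are
  // broadcast to the two spatial dimensions: {2} -> {2, 2}, {1} -> {1, 1}.
  at::Tensor out = at::conv2d(input, weight, bias,
                              /*stride=*/{2}, /*padding=*/{1},
                              /*dilation=*/{1}, /*groups=*/1);

  // 32x32 input, 3x3 kernel, padding 1, stride 2 -> 16x16 spatial output.
  return (out.size(2) == 16 && out.size(3) == 16) ? 0 : 1;
}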

@ssnl (Collaborator, Author) commented Jan 8, 2018

@pytorchbot test this please

@ezyang (Contributor) commented Jan 9, 2018

I'm a bit sad that we have to insert the tests manually even though our autogenerated Python binding code is clever enough to handle this automatically. Consistency would suggest the C++ binding code should handle this too :/

@ssnl force-pushed the instancenorm_stft branch from fcde227 to ac00a33 (January 9, 2018 03:59)
@ssnl force-pushed the instancenorm_stft branch from ac00a33 to 636b833 (January 9, 2018 03:59)
@ssnl (Collaborator, Author) commented Jan 9, 2018

@pytorchbot retest this please

@ssnl (Collaborator, Author) commented Jan 9, 2018

Switched to using a simple function!

auto weight = weight_r;
auto bias = bias_r;
auto k = input.ndimension();   // e.g. 4 for a 2-d convolution over an (N, C, H, W) input
size_t dim = k - 2;            // number of spatial dimensions to expand parameters to

}

static inline std::vector<int64_t> convolution_expand_param_if_needed(
    IntList &list_param, std::string param_name, size_t expected_dim) {
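  // The original body is collapsed in this review thread; what follows is a
  // hedged reconstruction of what such a helper plausibly does, not the exact
  // code merged in this PR. A single-element list is broadcast to expected_dim
  // entries, a full-length list is passed through, and anything else is
  // rejected with an explanatory error.
  if (list_param.size() == 1) {
    return std::vector<int64_t>(expected_dim, list_param[0]);
  } else if (list_param.size() != expected_dim) {
    throw std::runtime_error(
        "expected " + param_name + " to be a single integer value or a list of " +
        std::to_string(expected_dim) + " values to match the convolution dimensions, " +
        "but got a list of length " + std::to_string(list_param.size()));
  } else {
    return std::vector<int64_t>(list_param.begin(), list_param.end());
  }
}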

@ssnl (Collaborator, Author) commented Jan 10, 2018

@pytorchbot retest this please

@ssnl (Collaborator, Author) commented Jan 10, 2018

@ezyang The xenial-py3 CI runs have failed twice with "nvidia-container-cli: initialization error: cuda error: no cuda-capable device is detected". Should I retest again?

@ssnl (Collaborator, Author) commented Jan 10, 2018

@pytorchbot retest this please

@ssnl (Collaborator, Author) commented Jan 10, 2018

Made the error message in check_input_shape_forward clearer, to avoid confusing messages like

Expected 4-dimensional input for 4-dimensional weight [10], 
but got input of size [11, 1, 32, 32] instead

from https://discuss.pytorch.org/t/autograd-grad-dimension-error/12083.

@ssnl force-pushed the instancenorm_stft branch 3 times, most recently from 22662f3 to 6b2d1f8 (January 10, 2018 19:30)
@ssnl force-pushed the instancenorm_stft branch from 6b2d1f8 to 6493e0d (January 10, 2018 20:16)
@ezyang merged commit 0ac58d5 into pytorch:master on Jan 10, 2018
@ssnl deleted the instancenorm_stft branch on January 12, 2018 15:47