
Conversation

@gchanan (Contributor) commented Jun 19, 2018

Compatibility: write nDimension/_nDimension corresponding to dim()/_dim().

Currently, THTensor_(nDimension) goes to _dim(), which makes it difficult to move individual usages over to the new API.
Instead, let's create a THTensor_(_nDimension) going to _dim() and have THTensor_(nDimension) go to dim(). To do this, we will redirect all current
calls and move them over as we did for _dim() and dim().

@gchanan gchanan merged commit 695fd98 into pytorch:master Jun 20, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 20, 2018
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 21, 2018
Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
