Create at::tensor #8475
Conversation
colesbury left a comment:
I'm not a fan of the new ScalarList type, since it forces a copy of its inputs and its only use seems to be to construct tensors.
Can you use at::ArrayRef<int64_t> and at::ArrayRef<double> directly instead? These tensor(...) overloads seem like they should be added directly to the at:: namespace and not go through generated code. I don't think they'll be useful in Python, for tracing/JIT, or for autograd.
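For context, a minimal sketch (mine, not from the PR) of why plain ArrayRef overloads are tricky: a braced integer list converts equally well to ArrayRef<int64_t> and ArrayRef<double>, so the call is ambiguous.

```cpp
#include <ATen/ATen.h>
#include <iostream>

// Hypothetical overload pair in the spirit of the suggestion (not the PR's code):
void describe(at::ArrayRef<int64_t> v) { std::cout << "int64 x " << v.size() << "\n"; }
void describe(at::ArrayRef<double> v)  { std::cout << "double x " << v.size() << "\n"; }

int main() {
  // describe({1, 2, 3});  // error: ambiguous — {1, 2, 3} converts to both
  //                       // ArrayRef<int64_t> and ArrayRef<double>
  describe(at::ArrayRef<int64_t>({1, 2, 3}));  // explicit element type: OK
}
```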
Those are both good ideas.
@colesbury I ran into a lot of trouble around ambiguous constructors, so instead I've made the methods templated. This way you can at least write the calls unambiguously.
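A minimal sketch of what the templated shape presumably looks like (the function name and exact signature here are assumptions, not the PR's code); with a template, the element type is either deduced from the argument or supplied explicitly, sidestepping the overload ambiguity:

```cpp
#include <ATen/ATen.h>
#include <vector>

// Assumed shape of the templated factory (make_tensor is a hypothetical name).
template <typename T>
at::Tensor make_tensor(at::ArrayRef<T> values, const at::TensorOptions& options) {
  auto result = at::empty({static_cast<int64_t>(values.size())}, options);
  for (size_t i = 0; i < values.size(); ++i) {
    result[i] = values[i];  // element-wise fill, as in the PR's loop below
  }
  return result;
}

void demo() {
  std::vector<int64_t> v = {1, 2, 3};
  auto a = make_tensor(at::ArrayRef<int64_t>(v), at::kLong);  // T deduced from the argument
  auto b = make_tensor<double>({1.5, 2.5}, at::kDouble);      // T given explicitly
}
```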
@pytorchbot retest this please
ezyang left a comment:
ok
This deserves some tests. Currently, if you call an unsupported method you'll get a linker error, which isn't ideal; I ran into this in my local testing. There are some use cases that I think should work (sketched below). To avoid the ambiguous overloads, I would suggest that the initializer_list overloads default to kLong for integer values.
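The concrete list of use cases was lost in extraction; these are hedged guesses (mine, not ezyang's original list) at the kinds of calls meant:

```cpp
#include <ATen/ATen.h>

void use_cases() {
  auto a = at::tensor(123);                    // a single scalar value
  auto b = at::tensor({1, 2, 3});              // integer list; dtype follows the element type
  auto c = at::tensor({1.5, 2.5, 3.5});        // double list
  auto d = at::tensor({1, 2, 3}, at::kFloat);  // explicit dtype override
}
```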
Force-pushed from 22ef21e to 763eed8.
@colesbury @ezyang please take another look. I used the approach suggested above and added lots of tests. My local test case prints the expected output.
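The test-case snippet and its output were lost in extraction; a hedged reconstruction of what such a local check might look like:

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  auto t = at::tensor({1, 2, 3});
  std::cout << t << std::endl;  // ATen's operator<< prints the values and metadata
}
```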
@pytorchbot retest this please
Force-pushed from 07f1e85 to 687824e.
Hey @goldsborough, is there any particular reason you didn't go the "add overloads for initializer list" route?
Isn't that effectively what I did? There's one initializer_list overload per scalar type. This way there is zero ambiguity and everything works. Or did you mean something else?
So, I got deceived by my C++ heuristic that says if you can do it with overloads, don't use a template; I couldn't actually construct an example where it mattered whether tensor was templated or not. So... carry on!
```cpp
Tensor tensor(ArrayRef<T> values, const TensorOptions& options) {
  auto result = at::empty(values.size(), options);
  for (size_t i = 0; i < values.size(); ++i) {
    result[i] = values[i];
  }
  return result;  // (closing lines reconstructed; the quoted diff is truncated here)
}
```
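A usage sketch for the factory above (assuming the at::dtype helper from the TensorOptions work referenced in the description below):

```cpp
#include <ATen/ATen.h>
#include <vector>

void demo() {
  std::vector<double> data = {0.5, 1.5, 2.5};
  // The element type rides on the ArrayRef; the dtype comes from the options.
  auto t = at::tensor(at::ArrayRef<double>(data), at::dtype(at::kDouble));
  // t is a 1-D tensor of length 3 holding 0.5, 1.5, 2.5.
}
```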
```cpp
TEST_CASE("Tensor/ContainsCorrectValueForSingleValue") {
  auto tensor = at::tensor(123);
  REQUIRE(tensor.numel() == 1);
  REQUIRE(tensor.dtype() == at::kInt);
}
```
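The kInt result presumably falls out of overload resolution: the literal 123 is an int, so it matches the int overload exactly. A few more examples of the deduction that implies (my reading, not tests from the PR):

```cpp
#include <ATen/ATen.h>

void dtype_deduction() {
  auto a = at::tensor(123);                        // int literal      -> at::kInt
  auto b = at::tensor(123.456);                    // double literal   -> at::kDouble
  auto c = at::tensor(static_cast<int64_t>(123));  // explicit int64_t -> at::kLong
}
```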
```cpp
#define TENSOR(T, _1, _2)                                            \
  Tensor tensor(ArrayRef<T> values, const TensorOptions& options) {  \
```
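The macro body is truncated in the quote above. Presumably TENSOR is stamped out once per scalar type via an X-macro in the style of ATen's AT_FORALL_SCALAR_TYPES; a self-contained sketch of that pattern, with illustrative names and an abbreviated type list:

```cpp
#include <ATen/ATen.h>
#include <cstdint>

// Illustrative declaration-only variant of the TENSOR macro above:
#define TENSOR_DECL(T, _1, _2) \
  at::Tensor tensor(at::ArrayRef<T> values, const at::TensorOptions& options);

// Illustrative, abbreviated type list; ATen's real X-macro covers every scalar type.
#define FORALL_EXAMPLE_TYPES(_) \
  _(int32_t, Int, i)            \
  _(int64_t, Long, l)           \
  _(double, Double, d)

FORALL_EXAMPLE_TYPES(TENSOR_DECL)  // declares one tensor() overload per scalar type
#undef TENSOR_DECL
#undef FORALL_EXAMPLE_TYPES
```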
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
PyTorch has torch.tensor, which is a convenient way of creating a tensor from a list of values. This PR adds the equivalent to ATen/C++. It kind of depends on #7869, but I'm putting it here already. The most important file is aten/src/ATen/ScalarList.h, which can already be reviewed.
I've also named it values for now, because factory methods still dispatch over Type, and Type::tensor already exists. After the other PR lands I'll rename it tensor.
@colesbury @gchanan @ezyang
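For reference, the Python call and its intended ATen/C++ counterpart side by side (a sketch; per the paragraph above, the function is temporarily named values until the rename):

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Python equivalent:  t = torch.tensor([1, 2, 3])
  auto t = at::tensor({1, 2, 3});       // final name; this PR temporarily uses `values`
  std::cout << t.numel() << std::endl;  // 3
}
```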