Conversation

@ssnl ssnl commented Dec 6, 2017

Implements #3653 .

At a high level, a tensor with size `size` can be viewed with `view_size` if we can

  1. separate the dimensions in `size` into chunks such that each chunk is "contiguous" within itself, and
  2. separate `view_size` into the same number of chunks, where each chunk pair has matching "numel", i.e., the same number of elements in the subspace.

Copying from the doc change in this PR:

For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions :math:`d, d+1, \dots, d+k` that satisfy the following contiguity-like condition: :math:`\forall i = 0, \dots, k-1`, :math:`\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]`.

Otherwise, :func:`contiguous` needs to be called before the tensor can be viewed.

The diagram in the added test case may also help in understanding this.
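The two-step rule above can be sketched in plain Python. This is a rough, hypothetical helper, not the actual ATen implementation — it ignores the special handling of size-1 dimensions and `-1` inference, and `can_view` is an invented name:

```python
def can_view(size, stride, new_size):
    """Sketch of the chunk-matching rule: split `size` into maximal
    chunks that are "contiguous" within themselves, i.e. where
    stride[i] == stride[i+1] * size[i+1], then check that `new_size`
    splits into the same number of chunks with matching numels.
    (Hypothetical helper; the real check also skips size-1 dims.)"""
    # Step 1: split original dims into contiguous chunks, record numels.
    chunk_numels = []
    i = 0
    while i < len(size):
        numel = size[i]
        while i + 1 < len(size) and stride[i] == stride[i + 1] * size[i + 1]:
            i += 1
            numel *= size[i]
        chunk_numels.append(numel)
        i += 1
    # Step 2: greedily consume new_size dims; each chunk pair must
    # have exactly matching numel.
    j = 0
    for target in chunk_numels:
        acc = 1
        while acc < target and j < len(new_size):
            acc *= new_size[j]
            j += 1
        if acc != target:
            return False
    return j == len(new_size)


# A contiguous (2, 3, 4) tensor has strides (12, 4, 1): one big chunk,
# so any same-numel view works.
print(can_view((2, 3, 4), (12, 4, 1), (6, 4)))   # True
# A transposed (4, 4) tensor has strides (1, 4): two chunks of numel 4,
# so flattening to (16,) is rejected, but (2, 2, 4) splits chunk-wise.
print(can_view((4, 4), (1, 4), (16,)))           # False
print(can_view((4, 4), (1, 4), (2, 2, 4)))       # True
```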


ssnl commented Dec 6, 2017

cc @vadimkantorov . This should satisfy your request, right? Let me know if it is not the case.

@vadimkantorov

It seems so.

I'm a bit surprised this is satisfied by `view`, because @colesbury suggested that a new function `reshape` would solve this.

Will there be an easy way to check whether two shapes are compatible, so that I can call `contiguous()` if I can't get away with `view`?


vadimkantorov commented Dec 6, 2017

I can now see there's `isViewable`, and that `view` throws if it returns `False`. Should this function be exposed to Python as well, so that the user doesn't have to rely on exceptions for control flow?
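Until something like `isViewable` is exposed, the Python-side fallback is indeed exception-based. A sketch of that pattern, using a stand-in class so the snippet is self-contained (`FakeTensor` and `view_or_copy` are hypothetical names, not the torch API):

```python
class FakeTensor:
    """Hypothetical stand-in mimicking the view()/contiguous()
    behavior under discussion; not the real torch.Tensor API."""

    def __init__(self, viewable=True):
        self.viewable = viewable

    def view(self, *shape):
        if not self.viewable:
            # Mirrors the RuntimeError that torch's view raises on
            # an incompatible size/stride combination.
            raise RuntimeError("invalid size/stride for view")
        return ("viewed", shape)

    def contiguous(self):
        # A contiguous copy can always be viewed.
        return FakeTensor(viewable=True)


def view_or_copy(t, *shape):
    # Exception-based control flow: try the cheap view first, and
    # fall back to a contiguous copy only when view() refuses.
    try:
        return t.view(*shape)
    except RuntimeError:
        return t.contiguous().view(*shape)


print(view_or_copy(FakeTensor(viewable=False), -1))  # ('viewed', (-1,))
```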

@vadimkantorov

Will this PR also enable the example `torch.Tensor(1, 1).expand(1, 2).view(-1)`?


ssnl commented Dec 7, 2017

@vadimkantorov : Yes, it enables that example. Thanks for bringing that up! It helped me identify and fix a bug with zero-strided tensors in my previous code :). I just added tests for that case.

Hmm, about `reshape`: I didn't notice that comment before. It seems that it should do the same thing as the `view` in this PR. If we decide to use the `reshape` name, I can update this PR.

@ssnl ssnl closed this Dec 7, 2017
@ssnl ssnl reopened this Dec 7, 2017
@vadimkantorov

The original example in #3653 and `torch.Tensor(1, 1).expand(1, 2).view(-1)` are different in that the original example doesn't change `numel()`, while the second is supposed to?

Please check with @colesbury for this design and semantics :)

@ssnl
Copy link
Collaborator Author

ssnl commented Dec 7, 2017

Neither changes `numel`. In the second example, we have

view:       2
size:    1, 2
stride:  1, 0

The "contiguous" chunk here is all dimensions, i.e., `[1, 2]`. The "numel" of this chunk is exactly the same as the "numel" of the view chunk `[2]`.
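The numel bookkeeping for that case can be checked in a couple of lines of plain Python (variable names here are just for illustration):

```python
from math import prod

# Size (1, 2) with stride (1, 0) after expand, viewed as (2,).
# Size-1 dimensions contribute nothing to numel, so the single chunk
# covering all dimensions has numel 1 * 2 = 2, matching the view
# chunk [2].
size, view_size = (1, 2), (2,)
chunk_numel = prod(size)
assert chunk_numel == prod(view_size) == 2
```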

@colesbury

Nice!

We should still add `reshape()`, which is like `view()` but will make a copy if `isViewable()` is `False`.

If you decide to implement `reshape()` later, I think you can move both `reshape()` and `view()` to NativeFunctions in ATen (example). The advantage is that you only need one implementation of the stride/size calculations for all types.

@soumith soumith merged commit e0d5d1b into pytorch:master Dec 18, 2017
@ssnl ssnl deleted the view_noncontig branch December 18, 2017 16:22
@soumith soumith added 0.3.1 and removed 0.3.1 labels Feb 4, 2018