Conversation

@goelhardik
Contributor

Fixes #657
Tested locally.

@apaszke
Contributor

apaszke commented Feb 20, 2017

I think it'd be better to add annotations to arguments in cwrap, or use the before_call attribute, instead of reimplementing the whole function.

@goelhardik
Contributor Author

This is the cleanest way I could figure out. Can't think of any other way to modify the dims using just before_call. Does this look better?
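For context, the change under discussion amounts to normalizing negative dimension indices before they reach the backend. A minimal sketch of that semantics in Python (the helper name here is illustrative, not the PR's actual code):

```python
import torch

def normalize_dim(dim, ndim):
    # A negative dim counts from the end: -1 is the last dimension,
    # -ndim is the first. Map it into the [0, ndim) range.
    return dim + ndim if dim < 0 else dim

x = torch.randn(2, 3, 4)
# With ndim == 3, transpose(-2, -1) should mean transpose(1, 2).
assert normalize_dim(-2, x.dim()) == 1
assert normalize_dim(-1, x.dim()) == 2
```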

@apaszke
Contributor

apaszke commented Feb 23, 2017

This looks better. Can you please add a test for it?

@goelhardik
Contributor Author

Would a test asserting that transpose with positive and negative dimensions gives the same result be good enough?

@apaszke
Contributor

apaszke commented Feb 24, 2017

Yeah. Just do a for loop from 0 to nDim, and use positive and negative values to ensure that they yield the same result.
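Such a test might look roughly like this (a sketch of the idea, not the exact test added in the PR):

```python
import torch

def test_transpose_negative_dims():
    x = torch.randn(2, 3, 4, 5)
    ndim = x.dim()
    for i in range(ndim):
        for j in range(ndim):
            # Dimension d and d - ndim name the same axis, so both
            # calls must yield identical results.
            assert torch.equal(x.transpose(i, j),
                               x.transpose(i - ndim, j - ndim))

test_transpose_negative_dims()
```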

@goelhardik
Contributor Author

Great. Added a test.

@apaszke
Contributor

apaszke commented Feb 25, 2017

@pytorchbot test this please

@fmassa
Member

fmassa commented Feb 25, 2017

The builds are failing for CUDA. Maybe you should add LIBRARY_STATE at the beginning of the function calls so that it can be CUDA-compatible?

@goelhardik
Contributor Author

Adding LIBRARY_STATE doesn't fix this. For CUDA, it tries to pass LIBRARY_STATE as the first argument to my function THTensor_(transpose_neg), which doesn't accept it.

I think this either needs to go into the backend libs as a wrapper like newTranspose, or the fix needs to be reimplemented using before_call, etc.

@colesbury
Member

Are you sure you can't do this with before_call? That seems like it would be much simpler.

@soumith
Contributor

soumith commented Mar 3, 2017

@pytorchbot test this please

@soumith merged commit c93c884 into pytorch:master on Mar 3, 2017
@soumith
Contributor

soumith commented Mar 3, 2017

Thanks a lot, Hardik!

@goelhardik deleted the issue-630 branch on March 6, 2017
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request on Sep 20, 2021
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request on Nov 1, 2022
Successfully merging this pull request may close these issues: Transpose using negative dimension.