
Conversation

@ngimel (Collaborator) commented Jul 22, 2020

This fixes #41473 for discontiguous input, mask and out. Tests to follow. Reverting #33269 is not a great solution because I'm told masked_select was needed for printing complex tensors.
cc @gchanan, @zou3519, @ezyang
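For context, a minimal repro sketch of the kind of silent error this fixes (the shapes and values are illustrative, not taken from #41473): masked_select on a non-contiguous input and mask should match the result computed from contiguous copies.

import torch

# Non-contiguous input and mask obtained via transpose (illustrative shapes).
x = torch.arange(12.).reshape(3, 4).t()
mask = (torch.arange(12).reshape(3, 4).t() % 2).bool()

expected = x.contiguous().masked_select(mask.contiguous())  # contiguous reference
actual = torch.masked_select(x, mask)                       # discontiguous path under test

# Before this fix the serial CPU kernel could silently return wrong values here;
# with the fix both results agree.
print(torch.equal(actual, expected))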

Contributor (comment on lines 719 to 720):

This would be a no-op for contiguous tensors, but would create a copy for non-contiguous ones, right?
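(For reference, a quick illustration of that distinction, with made-up tensors: Tensor.contiguous() returns the tensor unchanged when it is already contiguous, and allocates a copy otherwise.)

import torch

a = torch.randn(4, 3)   # contiguous
b = a.t()                # non-contiguous view

a_c = a.contiguous()     # no-op: the same storage is returned
b_c = b.contiguous()     # a new, contiguous copy is allocated

print(a_c.data_ptr() == a.data_ptr())  # True  -> no copy
print(b_c.data_ptr() == b.data_ptr())  # False -> copy was made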

Contributor:

And why does the same fix not need to be applied to the TensorConfig() created on line 737 for the parallel kernel?

@ngimel (Collaborator, Author):

According to @zou3519, only the serial kernel is affected, and I didn't figure out yesterday how to fix the kernel itself. I'll take a look now.

@ngimel (Collaborator, Author):

I root-caused why it's the serial kernel only, so now I'm sending discontiguous inputs to the parallel kernel, which can handle them without making additional copies.

@gchanan (Contributor) commented Jul 22, 2020

Why don't we have tests for printing complex tensors? The tests for #41828 pass.

@malfet (Contributor) commented Jul 22, 2020

> Why don't we have tests for printing complex tensors? The tests for #41828 pass.

We do, see pytorch/test/test_torch.py, lines 3911 to 3921 in fced54a:
# test complex tensor
# complex tensor print uses two formatters, one for real values
# and the other for imag values. this is consistent with numpy
x = torch.tensor([2.3 + 4j, 7 + 6j])
self.assertEqual(x.__repr__(), str(x))
self.assertExpectedInline(str(x), '''tensor([2.3000+4.j, 7.0000+6.j])''')
# test scientific notation for complex tensors
x = torch.tensor([1e28 + 2j , -1e-28j])
self.assertEqual(x.__repr__(), str(x))
self.assertExpectedInline(str(x), '''tensor([1.0000e+28+2.0000e+00j, -0.0000e+00-1.0000e-28j])''')

And for strided ones, see pytorch/test/test_torch.py, lines 4052 to 4076 in fced54a:
x = torch.ones(100, 2, 2, 10) * (1 + 1j)
y = x.as_strided(size=(100, 2, 10), stride=(2 * 2 * 10, 2 * 10, 1))
self.assertEqual(str(y), y.__repr__())
expected_str = '''\
tensor([[[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]],

        [[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]],

        [[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]],

        ...,

        [[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]],

        [[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]],

        [[1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j],
         [1.+1.j, 1.+1.j, 1.+1.j,  ..., 1.+1.j, 1.+1.j, 1.+1.j]]])\
'''
self.assertExpectedInline(str(y), expected_str)

@ngimel (Collaborator, Author) commented Jul 22, 2020

Ok, maybe we don't need masked_select for complex printing after all; I just remember @anjali411 wanted the original PR for something complex-related.

@gchanan (Contributor) commented Jul 22, 2020

Yep, I checked with @anjali411 and she said we split the real and imaginary parts before printing, so we should be good.
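(As a rough sketch of that point, not the actual printing code in torch/_tensor_str.py: the real and imaginary components of a complex tensor are available as separate real-valued tensors, so each can be formatted on its own without masked_select.)

import torch

x = torch.tensor([2.3 + 4j, 7 + 6j])

# Complex tensors expose their components as real-valued views,
# so a formatter can handle the real and imaginary parts independently.
print(x.real)   # roughly: tensor([2.3000, 7.0000])
print(x.imag)   # roughly: tensor([4., 6.])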

@dr-ci bot commented Jul 22, 2020

💊 CI failures summary and remediations

As of commit 535949c (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@ezyang (Contributor) commented Jul 22, 2020

Thanks for the fix. Is malfet doing the full review? Add me if you want me to review too.

@ngimel (Collaborator, Author) commented Jul 23, 2020

@malfet it's ready for review now. Unfortunately it's harder to review, since after the revert I have to bring the original porting PR back, but it now has a test that explicitly exercises a discontiguous mask/value/out, so if that passes, it should be good to go.
Unlike before, a broadcast mask/value will now also go through the parallel kernel (because it is not contiguous), so performance will be somewhat worse for those cases. Previously, broadcast cases could use the serial kernel, because TensorIterator normally does not reorder dimensions in that case. We don't have an API to ask TensorIterator whether dimensions were reordered, and I didn't think it made sense to add one just for this case.
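A sketch of the kind of case the new test covers (this is not the test added in the PR; the shapes are made up): a broadcast, hence non-contiguous, mask together with a non-contiguous out tensor, compared against the result obtained with a contiguous mask.

import torch

src = torch.randn(4, 6)
mask = (torch.rand(6) > 0.5).expand(4, 6)        # broadcast mask: not contiguous

expected = src.masked_select(mask.contiguous())  # reference with a materialized mask

# Non-contiguous out tensor: every other element of a larger buffer.
out_buf = torch.empty(expected.numel() * 2)
out = out_buf[::2]
torch.masked_select(src, mask, out=out)

assert torch.equal(out, expected)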

@facebook-github-bot (Contributor) left a comment

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry (Collaborator) left a comment

Awesome fix to a couple silent errors!

@facebook-github-bot (Contributor) left a comment

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented:
@ngimel merged this pull request in 7a57088.

Successfully merging this pull request may close these issues: 'torch.masked_select' behaves differently depending on the device of inputs.
