
Conversation

@mruberry
Collaborator

@mruberry mruberry commented Aug 10, 2020

This PR:

  • updates test_op_normalization.py, which verifies that aliases are correctly translated in the JIT
  • adds torch.linalg.det as an alias for torch.det
  • moves the torch.linalg.outer alias to torch.outer (to be consistent with NumPy)

The torch.linalg.outer alias was erroneously placed in the linalg namespace as a placeholder: NumPy classifies outer as a "linear algebra op," but the function actually lives in the main NumPy namespace, so torch.outer is the consistent home for it.
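As a quick sketch of the intended end state (assuming a PyTorch build that includes this PR), the alias and the original op produce identical results, and torch.outer mirrors NumPy's main-namespace np.outer:

```python
import torch

a = torch.randn(3, 3)

# torch.linalg.det is an alias for torch.det: same kernel, same result.
assert torch.equal(torch.det(a), torch.linalg.det(a))

# torch.outer lives in the main namespace, matching np.outer.
u, v = torch.arange(1., 4.), torch.arange(1., 3.)
print(torch.outer(u, v))  # 3x2 outer product: [[1,2],[2,4],[3,6]]
```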

The updates to test_op_normalization are necessary. Previously it used method_tests to generate tests, and method_tests assumes that test suites using it also use the device-generic framework, which test_op_normalization did not. For example, some ops require decorators like skipCPUIfNoLapack, which only work in device-generic test classes. Moving test_op_normalization to the device-generic framework also lets these tests run on both CPU and CUDA.

Continued reliance on method_tests() would be excessive since this test suite only needs to test aliasing, so a simpler and more readable AliasInfo class now supplies the required information. One impedance mismatch between method_tests and the new tests, for example, was how to handle ops in namespaces like torch.linalg.det. In the future this information will likely be folded into a common 'OpInfo' registry in the test suite.
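A minimal sketch of the AliasInfo record, reconstructed from the snippets in this PR (info.alias_name, self.decorators, and the AliasInfo('absolute', torch.absolute, 'abs', ...) call); the exact signature in test_op_normalization.py may differ:

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class AliasInfo:
    """Describes one alias to test: the alias, its callable, and the
    original name the JIT graph is expected to contain after normalization."""
    alias_name: str                    # e.g. 'absolute'
    alias_op: Callable                 # e.g. torch.absolute
    original_name: str                 # e.g. 'abs'
    decorators: Tuple[Any, ...] = ()   # e.g. (skipCPUIfNoLapack,)

info = AliasInfo('absolute', abs, 'abs')
print(info.alias_name.endswith('_'))  # False: not an in-place alias
```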

The actual tests performed are similar to what they were previously: a scripted and a traced version of the op are run, and the test verifies that both graphs do not contain the alias name and do contain the aliased name.
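The core assertion can be sketched without the JIT plumbing. Given the string dump of a scripted or traced graph, the alias name must be gone and the original name present; this is a simplified substring check for illustration (the real test inspects graph nodes):

```python
def alias_is_normalized(graph_str: str, alias_name: str, aliased_name: str) -> bool:
    """True if the graph no longer mentions the alias but does call the original op."""
    return alias_name not in graph_str and aliased_name in graph_str

# A toy graph dump in which torch.absolute was rewritten to aten::abs:
graph = "graph(%x : Tensor):\n  %1 : Tensor = aten::abs(%x)\n  return (%1)"
print(alias_is_normalized(graph, "absolute", "abs"))  # True
```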

The guidance for adding an alias has been updated accordingly.

cc @mattip

Note:

@ngimel suggests:

  • deprecating and then removing the torch.ger name
  • reviewing the implementation of torch.outer

@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Aug 10, 2020
Mike Ruberry added 2 commits August 10, 2020 01:17
@dr-ci

dr-ci bot commented Aug 10, 2020

💊 CI failures summary and remediations

As of commit 38afbba (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@mruberry mruberry requested a review from ngimel August 10, 2020 16:18
@mruberry mruberry added module: tests Issues related to tests (not the torch.testing module) module: numpy Related to numpy support, and also numpy compatibility of our operators module: linear algebra Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul labels Aug 10, 2020
is_inplace = info.alias_name.endswith('_')

# NOTE: this workaround necessary to satisfy the JIT, which
# does allow Python aliasing or direct calling of torch.Tensor
Collaborator Author


doesn't

script = fn_template.format(alias_name=info.alias_name, args=arg_string)
else:
# Checks that scripting converts aliases
# NOTE: scripting doesn't support splatting args, so this
Collaborator Author


Move this note up since splatting happens in both cases

call = get_call(method_name, 'method', actuals, kwargs)
script = script_template.format(', '.join(formals), call)
CU = torch.jit.CompilationUnit(script)
torch._C._jit_check_alias_annotation(CU.the_method.graph, tuple(tensors), method_name)
Contributor


this is no longer being called - it's not checking op normalization of aliasing, it's checking alias annotations of schemas

Collaborator Author


Aha! I wanted to check on that. I'll revert this change.

@mruberry mruberry requested a review from eellison August 11, 2020 06:25
Contributor

@eellison eellison left a comment


op alias changes LGTM

self.decorators = decorators

alias_infos = (
AliasInfo('absolute', torch.absolute, 'abs',
Contributor


I think it's kind of nice to reuse the op registry instead of creating a new one...

Collaborator Author


It is! But check this out: #41662.

So now we can update this to use the fancy new op registry.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mruberry merged this pull request in bee174d.

