Conversation

@frank-wei
Contributor

Summary:
as titled

  1. support logical_and, logical_not
  2. replace eq, gt, and lt with their Python operators in acc_ops, since the torch ops require torch.Tensor inputs while the Python operators do not (see the sketch after this list)
  3. add more test cases
  4. add an individual ne op rather than composing it from existing ops, since the compositions have limitations. For example, lowering ne to equal + logical_not fails the last test case in test_ne.py, because logical_not requires its input to be a tensor.
    We also cannot use equal + operator.not_, since `not` is not traceable in FX and fails with "symbolically traced variables cannot be used as inputs to control flow".
    Nor can we use equal + operator.invert, since operator.invert(True) == -2 (see the sketch after this list).
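
For reference, a minimal standalone sketch (not the actual acc_ops/fx2trt converter code) of the behaviors behind items 2 and 4; it assumes only torch and the standard library, and every name in it is illustrative:

```python
import operator

import torch

# Item 2: torch.eq / torch.gt / torch.lt require a torch.Tensor input, so they
# break on plain Python scalars that can show up in a traced graph:
#   torch.eq(3, 3)  ->  TypeError
# The Python operators handle both scalars and tensors:
assert operator.eq(3, 3)
assert bool(operator.gt(torch.tensor(4), torch.tensor(3)))

# Item 4: composing ne from equal + logical_not fails for scalar inputs,
# because torch.logical_not also requires a tensor:
#   torch.logical_not(operator.eq(3, 3))  ->  TypeError
# Python's `not` is no alternative: applying it to a traced value is control
# flow, and FX tracing raises
# "symbolically traced variables cannot be used as inputs to control flow".
# operator.invert is not a logical negation on bools either:
assert operator.invert(True) == -2  # ~True == ~1 == -2
```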

Test Plan:
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_ne
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_logical_and
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_unary_ops

Reviewed By: 842974287

Differential Revision: D35232917

@facebook-github-bot
Contributor

facebook-github-bot commented Apr 7, 2022


💊 CI failures summary and remediations

As of commit 51e9ec3 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D35232917

facebook-github-bot pushed a commit that referenced this pull request Apr 7, 2022
Summary:
Pull Request resolved: #75444

as titled
1. support logical_and, logical_not
2. replace eq, gt, and lt with their Python operators in acc_ops, since the torch ops require torch.Tensor inputs while the Python operators do not
3. add more test cases
4. add an individual ne op rather than composing it from existing ops, since the compositions have limitations. For example, lowering ne to equal + logical_not fails the last test case in test_ne.py, because logical_not requires its input to be a tensor.
We also cannot use equal + operator.not_, since `not` is not traceable in FX and fails with "symbolically traced variables cannot be used as inputs to control flow".
Nor can we use equal + operator.invert, since operator.invert(True) == -2.

(Note: this ignores all push blocking failures!)

Test Plan:
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_ne
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_logical_and
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_unary_ops

Reviewed By: 842974287

Differential Revision: D35232917

fbshipit-source-id: d4601a6883c977caa263f67b9db86cbc862d4780
@github-actions
Contributor

github-actions bot commented Apr 7, 2022

Hey @frank-wei.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

frank-wei pushed a commit to pytorch/TensorRT that referenced this pull request Jun 4, 2022
Summary:
X-link: pytorch/pytorch#75444

as titled
1. support logical_and, logical_not
2. replace eq, gt, and lt with their Python operators in acc_ops, since the torch ops require torch.Tensor inputs while the Python operators do not
3. add more test cases
4. add an individual ne op rather than composing it from existing ops, since the compositions have limitations. For example, lowering ne to equal + logical_not fails the last test case in test_ne.py, because logical_not requires its input to be a tensor.
We also cannot use equal + operator.not_, since `not` is not traceable in FX and fails with "symbolically traced variables cannot be used as inputs to control flow".
Nor can we use equal + operator.invert, since operator.invert(True) == -2.

(Note: this ignores all push blocking failures!)

Reviewed By: 842974287

Differential Revision: D35232917

fbshipit-source-id: d4601a6883c977caa263f67b9db86cbc862d4780