
Conversation

Contributor

@anjali411 anjali411 commented Dec 8, 2022


pytorch-bot bot commented Dec 8, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/90463

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 Failures

As of commit 7acb0be:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base 81b5eff:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Fixes #88843

cc mlazos soumith voznesenskym yanboliang penguinwu anijain2305 EikanWang jgong5 Guobing-Chen chunyuan-w XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 desertfire

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Dec 8, 2022
@anjali411 anjali411 requested a review from ngimel December 8, 2022 12:29
args_list = [
    # build the reversed-dims permutation (ndim-1, ..., 0) used to emulate T/H
    variables.ConstantVariable(i) for i in range(self.ndim - 1, -1, -1)
]
args = [variables.TupleVariable(args_list)]
result = out.call_method(tx, "permute", args, {})
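For context, the plain-tensor identity this emulates (a sketch, not the dynamo code itself):

import torch

x = torch.randn(2, 3)
# Tensor.T reverses the dimension order; the snippet above builds the same
# permutation (ndim-1, ..., 0) as ConstantVariables and calls permute with it.
assert torch.equal(x.T, x.permute(tuple(range(x.ndim - 1, -1, -1))))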
Contributor

Rather than manually reimplementing the semantics of T/H here, as the old code did, I think a quicker and less error-prone strategy is to just generate a call to `torch.Tensor.H.__get__`, which is a function call and thus can be represented in FX IR.

>>> torch.Tensor.H.__get__(torch.randn(2, 2))
tensor([[ 0.6645, -2.2927],
        [ 0.7483, -0.5508]])

Collaborator

Awesome, I didn't know about this, maybe it'll even fix the error with 0d input that T has now.

Contributor Author

@anjali411 anjali411 Dec 9, 2022

That sounds like the right thing to do. I tried:

from .builder import wrap_fx_proxy

result = wrap_fx_proxy(
    tx,
    tx.output.create_proxy(
        "call_function",
        torch.Tensor.T.__get__,
        (self.as_proxy(),),
        {},
    ),
    **options,
)

but FX chokes on the `__module__` query for it.
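The failure is reproducible without dynamo: `torch.Tensor.T` is a `getset_descriptor`, and those objects don't carry the `__module__` attribute that FX reads when formatting call targets. A minimal illustration (error message approximate):

>>> import torch
>>> type(torch.Tensor.T)
<class 'getset_descriptor'>
>>> torch.Tensor.T.__module__
AttributeError: 'getset_descriptor' object has no attribute '__module__'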

Contributor

Can we fix FX? It needs to recognize that it has a method wrapper, and look at the module of the class defining the method to format the name.
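A sketch of what such a fallback might look like when FX formats a target name (hypothetical helper, not actual FX code; `__objclass__` is the class that defines a C-level descriptor):

def _target_module(target):
    # Most Python callables carry __module__, but C-level descriptors
    # (getset_descriptor, method-wrapper) do not; fall back to the module
    # of the class that defines the descriptor.
    module = getattr(target, "__module__", None)
    if module is None:
        objclass = getattr(target, "__objclass__", None)  # e.g. torch.Tensor for Tensor.T
        if objclass is not None:
            module = objclass.__module__
    return module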

Contributor

FWIW, in #91840 I tested recording these just as getattr FX nodes (does getattr differ in any significant way from a call to `torch.Tensor.H.__get__`?). The issue I ran into was that at the point of TensorVariable.var_getattr() we don't know whether the name is a tensor attribute or a tensor method.

E.g. on #91840 I hit errors with `x.fill_()`, which normally falls back to a GetAttrVariable when you access an unknown attribute, and then turns into a TensorVariable.call_method once you call it.
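For reference, plain torch.fx already records attribute access on a Proxy as a getattr call_function node; a quick sketch (independent of dynamo):

import torch
import torch.fx

def f(x):
    # x.T on a Proxy materializes a call_function node with target=getattr
    return torch.relu(x.T)

gm = torch.fx.symbolic_trace(f)
print(gm.graph)  # roughly: %t = call_function[target=builtins.getattr](args = (%x, 'T'))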

anjali411 added a commit that referenced this pull request Dec 9, 2022
@albanD albanD removed their request for review December 28, 2022 11:16
anjali411 added a commit that referenced this pull request Jan 8, 2023
anjali411 added a commit that referenced this pull request Jan 10, 2023
…ributes: T, H, mT, mH (#90463)""

This reverts commit 84266ae.

ghstack-source-id: 6a35b88
Pull Request resolved: #91897
pytorchmergebot pushed a commit that referenced this pull request Jan 10, 2023
@anjali411 anjali411 closed this Jan 10, 2023
pytorchmergebot pushed a commit that referenced this pull request Jan 17, 2023
As discussed with @ngimel, this is not only not documented,
but also an unnecessary edge case. See #90463 (comment)
Pull Request resolved: #92143
Approved by: https://github.com/ngimel
davidberard98 added a commit that referenced this pull request Feb 1, 2023
pytorchmergebot pushed a commit that referenced this pull request Feb 1, 2023
…91840)

**Background:** Before this PR, dynamo support for tensor attributes (e.g. `x.H`, `x.T`, ...) had to be implemented individually, one by one. This could lead to errors, e.g. if the implementation in [variables/tensor.py](https://github.com/pytorch/pytorch/blob/21c7c7c72fd13f476e08b84c45cbca3ea3f41b04/torch/_dynamo/variables/tensor.py#L160) differed from the behavior of a direct call to the attribute, and dynamo tracing would fail outright for attributes that were not special-cased in tensor.py. This PR adds generic support for tensor attributes that return tensors, without needing to handle each one specially (notably x.real and x.imag, which previously weren't supported).

**In this PR:** We directly create a `"call_function"` proxy node with `target=getattr` and feed it into wrap_fx_proxy, which produces a TensorVariable for the returned attribute.

This also removes the implementations for H, T, mH, and mT, which were broken (previously `torch.relu(x.T)` would fail). They now fall back to this default implementation (under which `torch.relu(x.T)` passes).

**Further context**:

* Ed's original suggestion in [90463](#90463 (comment)) was to use `torch.Tensor.H.__get__(x)`. I wasn't able to get this to work; fx compilation fails with `getset_descriptor does not have attribute __module__`. Basically, the `__module__` attribute, which is available on most Python attributes, is not available on `getset_descriptor` objects (these attributes are implemented in C++ on torch.Tensor, so they don't obey some assumptions made by fx).
* Although tensor attributes and methods (like `x.relu()`) both go through this path, this PR should only handle attributes (see the `"getset_descriptor"` check in variables/tensor.py). Methods are already handled by GetAttrVariable.
* Prior to this PR, we already returned GetAttrVariables for unsupported attrs: the parent caller would catch the NotImplementedError and fall back to returning a GetAttrVariable. But if that GetAttrVariable was ever passed into a torch.\* function (quite possible, since most of these attrs are tensors), it would fail because its proxy node would be missing an [example_value](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/utils.py#L1017). So before this PR, `x.real` alone worked fine for a tensor x, but `torch.relu(x.real)` failed.

**Testing**: added tests in test_misc.py for x.real, x.imag, x.T, x.real.T.

Pull Request resolved: #91840
Approved by: https://github.com/ezyang
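A minimal check in the spirit of those tests (a sketch; assumes PyTorch >= 2.0 with this fix, compiling with the eager backend):

import torch

def f(x):
    return torch.relu(x.T) + x.mT

# fullgraph=True ensures dynamo traces the attribute accesses without graph breaks
opt_f = torch.compile(f, backend="eager", fullgraph=True)
x = torch.randn(3, 3)
torch.testing.assert_close(opt_f(x), f(x))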
@facebook-github-bot facebook-github-bot deleted the gh/anjali411/235/head branch June 8, 2023 15:19