
Conversation

@mcr229 (Contributor) commented Nov 9, 2022

Stack from ghstack (oldest at bottom):

# Executor Class

The Executor object wraps our xnn_runtime object. The ideal flow looks like:

```
executor.set_inputs(vector<tensor> inputs, vector<tensor> outputs)
executor.forward()
```

This object will likely be returned by our delegate's compile step and handed to execute in order to run inference using the XNN runtime.

##### Executorch Considerations
```
#include <ATen/Functions.h>
#include <ATen/Utils.h>
```
These ATen headers are included so we can use at::Tensor when setting the inputs. This will change for Executorch, where we will switch from at::Tensor to whatever tensor abstraction ET uses. Both appear to expose the same `.data_ptr<float>()` call, so realistically all of the logic here should stay the same.

ATen/Utils is used for TORCH_CHECK. We will switch to ET_CHECK_MESSAGE for Executorch.

Differential Revision: [D40733121](https://our.internmc.facebook.com/intern/diff/D40733121/)

@pytorch-bot added the `release notes: jit` label Nov 9, 2022
@pytorch-bot commented Nov 9, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88778

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 Failure

As of commit 3dc5bea:

The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

mcr229 added a commit that referenced this pull request Nov 9, 2022
Pull Request resolved: #88778

ghstack-source-id: 173215553

Differential Revision: [D40733121](https://our.internmc.facebook.com/intern/diff/D40733121/)
@digantdesai (Contributor) left a comment:


LGTM!

@facebook-github-bot

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot added the `ciflow/trunk` label (trigger trunk jobs) Nov 10, 2022
@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Dec 10, 2022
Pull Request resolved: pytorch#88778
Approved by: https://github.com/digantdesai
@facebook-github-bot deleted the gh/mcr229/18/head branch June 8, 2023 17:55