
Conversation

@IvanKobzarev (Contributor) commented May 27, 2020

Stack from ghstack:

Adding support for non-Vulkan inputs to the addmm operator: if an input is not on Vulkan, it is converted to Vulkan inside the operator.

When we run a pretrained TorchScript model, the weights of the linear op will be on CPU; we need this to run MobileNetV2 on the Vulkan backend.

Differential Revision: D21962425
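
For context, a minimal usage sketch of what this enables, assuming standard libtorch API calls (the model file name is hypothetical and this is not code from the PR): a TorchScript MobileNetV2 whose linear weights still live on CPU can be fed a Vulkan input, because addmm now converts non-Vulkan operands internally.

#include <torch/script.h>

int main() {
  // Hypothetical TorchScript export of MobileNetV2; its weights stay on CPU.
  torch::jit::script::Module module = torch::jit::load("mobilenet_v2.pt");

  // Only the input is moved to the Vulkan backend; .vulkan() is the same
  // conversion used inside the operator in the snippet reviewed below.
  torch::Tensor input = torch::rand({1, 3, 224, 224});
  torch::Tensor output =
      module.forward({input.vulkan()}).toTensor().cpu();

  return 0;
}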


dr-ci bot commented May 27, 2020

💊 CI failures summary and remediations

As of commit f968a65 (more details on the Dr. CI page):



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 2 failed


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 45 times.

…n] addmm support non-vulkan inputs"

Adding support for non-Vulkan inputs to the addmm operator: if an input is not on Vulkan, it is converted to Vulkan inside the operator.

When we run a pretrained TorchScript model, the weights of the linear op will be on CPU; we need this to run MobileNetV2 on the Vulkan backend.

[ghstack-poisoned]
VulkanTensor t = vtensor_from_vulkan(self.is_vulkan() ? self : self.vulkan());
VulkanTensor m1 =
    vtensor_from_vulkan(mat1.is_vulkan() ? mat1 : mat1.vulkan());
VulkanTensor m2 =
    vtensor_from_vulkan(mat2.is_vulkan() ? mat2 : mat2.vulkan());
A reviewer (Contributor) commented:
const is preferred to protect against the inadvertent changes below.
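
Applied to the lines quoted above, the suggestion would read roughly as follows (a sketch of the reviewer's point, not necessarily the exact follow-up change):

// const-qualify the locals so later code cannot accidentally reassign them.
const VulkanTensor t =
    vtensor_from_vulkan(self.is_vulkan() ? self : self.vulkan());
const VulkanTensor m1 =
    vtensor_from_vulkan(mat1.is_vulkan() ? mat1 : mat1.vulkan());
const VulkanTensor m2 =
    vtensor_from_vulkan(mat2.is_vulkan() ? mat2 : mat2.vulkan());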


@IvanKobzarev merged this pull request in 71372b4.

@facebook-github-bot deleted the gh/IvanKobzarev/22/head branch June 15, 2020 14:15
xwang233 pushed a commit to xwang233/pytorch that referenced this pull request Jun 20, 2020
Summary:
Pull Request resolved: pytorch#39078

Adding support for non-Vulkan inputs to the addmm operator: if an input is not on Vulkan, it is converted to Vulkan inside the operator.

When we run a pretrained TorchScript model, the weights of the linear op will be on CPU; we need this to run MobileNetV2 on the Vulkan backend.

Test Plan: Imported from OSS

Differential Revision: D21962425

Pulled By: IvanKobzarev

fbshipit-source-id: 8222edd31dfb14b326d15e6fec5c8778783479df
