[vulkan] addmm support non-vulkan inputs #39078
Conversation
💊 CI failures summary: as of commit f968a65 (more details on the Dr. CI page), 1 fixed upstream failure. It was probably caused by an upstream breakage that has since been fixed; rebasing onto the updated base should resolve it.
Adding support for non-Vulkan inputs to the addmm operator: if an input is not on the Vulkan backend, it is converted inside the operator. When running a TorchScript pretrained model, the weights of the linear op will be on the CPU, so this conversion is needed to run MobileNetV2 on the Vulkan backend.
```cpp
VulkanTensor t = vtensor_from_vulkan(self.is_vulkan() ? self : self.vulkan());
VulkanTensor m1 =
    vtensor_from_vulkan(mat1.is_vulkan() ? mat1 : mat1.vulkan());
VulkanTensor m2 =
    vtensor_from_vulkan(mat2.is_vulkan() ? mat2 : mat2.vulkan());
```
const is preferred to protect against the inadvertent changes below.
Differential Revision: [D21962425](https://our.internmc.facebook.com/intern/diff/D21962425)
@IvanKobzarev merged this pull request in 71372b4.
Summary: Pull Request resolved: pytorch#39078. Adding support for non-Vulkan inputs to the addmm operator: if an input is not on the Vulkan backend, it is converted inside the operator. When running a TorchScript pretrained model, the weights of the linear op will be on the CPU, so this conversion is needed to run MobileNetV2 on the Vulkan backend.
Test Plan: Imported from OSS
Differential Revision: D21962425
Pulled By: IvanKobzarev
fbshipit-source-id: 8222edd31dfb14b326d15e6fec5c8778783479df
Stack from ghstack:
Adding support for non-Vulkan inputs to the addmm operator: if an input is not on the Vulkan backend, it is converted inside the operator. When running a TorchScript pretrained model, the weights of the linear op will be on the CPU, so this conversion is needed to run MobileNetV2 on the Vulkan backend.
Differential Revision: D21962425