
Conversation

@Xia-Weiwen
Collaborator

@Xia-Weiwen Xia-Weiwen commented Nov 11, 2022

Stack from ghstack (oldest at bottom):

Summary
Post-op fusion can reduce data-movement overhead and improve inference performance. This PR adds a fused linear-tanh op for the onednn backend, to be used for int8 inference with that backend. Linear-tanh is found in models such as CGAN.
This op cannot be called with other quantization backends; doing so throws an error.

Test Plan
python test_quantization.py TestQuantizedLinear

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @leslie-fang-intel @VitalyFedyunin @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
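As a rough illustration of the arithmetic a quantized linear-tanh kernel performs, here is a numpy sketch (the function name, scales, and zero points are hypothetical; this is not the oneDNN implementation):

```python
import numpy as np

def linear_tanh_ref(x_q, w_q, bias, x_scale, x_zp, w_scale, out_scale, out_zp):
    """Reference math for a fused quantized linear-tanh:
    dequantize -> linear -> tanh -> requantize in one pass, so the
    intermediate linear output never round-trips through memory."""
    x = (x_q.astype(np.float32) - x_zp) * x_scale   # dequantize uint8 input
    w = w_q.astype(np.float32) * w_scale            # dequantize int8 weight
    y = np.tanh(x @ w.T + bias)                     # linear with tanh post-op
    y_q = np.round(y / out_scale) + out_zp          # requantize the output
    return np.clip(y_q, 0, 255).astype(np.uint8)
```

In the fused kernel, the tanh and requantization are applied while the linear result is still in registers or cache, which is where the data-movement saving comes from.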

@pytorch-bot

pytorch-bot bot commented Nov 11, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88879


✅ No Failures

As of commit fc5ce60:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Contributor

@jerryzh168 jerryzh168 left a comment


lg, please add "Summary" and "Test Plan" as well

@kimishpatel
Contributor

what is the motivation behind tanh linear fusion?

@Xia-Weiwen
Collaborator Author

what is the motivation behind tanh linear fusion?

Fusing activations with root ops can reduce overhead and improve inference performance. Linear-tanh is found in models like CGAN.
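Conceptually, the data-flow difference between the unfused and fused paths looks like this (a plain numpy sketch, not the oneDNN kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16)).astype(np.float32)
w = rng.standard_normal((4, 16)).astype(np.float32)
b = rng.standard_normal(4).astype(np.float32)

# Unfused: the linear output is written to memory, then read back for tanh.
y_linear = x @ w.T + b
y_unfused = np.tanh(y_linear)

# Fused: a single kernel applies tanh as a post-op while the linear result
# is still in registers/cache, saving one write and one read of y_linear.
y_fused = np.tanh(x @ w.T + b)

assert np.allclose(y_fused, y_unfused)  # same result, less data movement
```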

@Xia-Weiwen Xia-Weiwen marked this pull request as ready for review November 21, 2022 00:43
@Xia-Weiwen Xia-Weiwen added the intel This tag is for PR from Intel label Nov 21, 2022

@Xia-Weiwen Xia-Weiwen added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 25, 2022

@Xia-Weiwen
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@facebook-github-bot facebook-github-bot deleted the gh/Xia-Weiwen/5/head branch June 8, 2023 14:57

Labels

- ciflow/trunk (Trigger trunk jobs on your pull request)
- intel (This tag is for PR from Intel)
- Merged
- module: cpu (CPU specific problem, e.g., perf, algorithm)
- oncall: quantization (Quantization support in PyTorch)
- open source
- release notes: quantization (release notes category)
