
@Xia-Weiwen Xia-Weiwen (Collaborator) commented Jan 29, 2023

Summary
The X86 quantization backend (qengine) with oneDNN kernels has not been validated on operating systems other than Linux, so it falls back to fbgemm when the OS is not Linux. This ensures the behavior on Windows and macOS is the same as with the previous default fbgemm qengine on x86 CPUs.
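The fallback policy described above can be sketched in plain Python (a hypothetical illustration, not the actual PR code; `pick_qengine` is an invented name, and the real selection logic lives in PyTorch's qengine handling):

```python
import platform

def pick_qengine() -> str:
    """Illustrative sketch of the OS-based fallback described above."""
    # The oneDNN-backed kernels behind the x86 qengine have only been
    # validated on Linux, so any other OS keeps the previous default.
    if platform.system() == "Linux":
        return "x86"       # x86 qengine may dispatch to oneDNN kernels
    return "fbgemm"        # Windows/macOS: fall back to fbgemm

print(pick_qengine())
```

In PyTorch itself, the active engine would then be applied via `torch.backends.quantized.engine`.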

Test plan
CI checks.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10

@pytorch-bot pytorch-bot bot commented Jan 29, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/93218

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 7 Pending

As of commit 548c627:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: quantization release notes category label Jan 29, 2023
@github-actions github-actions bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label Jan 29, 2023
@Xia-Weiwen Xia-Weiwen requested a review from jgong5 January 29, 2023 07:33
@Xia-Weiwen Xia-Weiwen force-pushed the x86_qengine_windows_macos branch from 38db670 to 44593f5 Compare January 29, 2023 07:43
@Xia-Weiwen Xia-Weiwen added the intel This tag is for PR from Intel label Jan 30, 2023
@Xia-Weiwen Xia-Weiwen marked this pull request as ready for review January 30, 2023 00:24
@Xia-Weiwen Xia-Weiwen (Collaborator, Author)

Hi @jerryzh168. Could you please review this PR? Thanks.
For the x86 backend, we plan to use onednn kernels only on Linux, as we have not yet validated their performance on other operating systems.

@jerryzh168 jerryzh168 (Contributor) left a comment


thanks, please feel free to land the onednn only changes directly. or is this blocked by a review from pytorch dev?

@Xia-Weiwen Xia-Weiwen (Collaborator, Author) commented Feb 1, 2023

thanks, please feel free to land the onednn only changes directly. or is this blocked by a review from pytorch dev?

Yes. I think such PRs need approval from the Meta side to land. For example, #91934 could not be merged with approval from our team only (the merge failed). And since this PR is related to the x86 backend, not actually onednn-only, we think it also needs approval from the Meta side.

@Xia-Weiwen Xia-Weiwen force-pushed the x86_qengine_windows_macos branch from 44593f5 to 548c627 Compare February 1, 2023 01:03
@Xia-Weiwen Xia-Weiwen (Collaborator, Author) commented

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 1, 2023
@pytorchmergebot pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@Xia-Weiwen Xia-Weiwen deleted the x86_qengine_windows_macos branch November 13, 2024 06:08

Labels

ciflow/trunk (Trigger trunk jobs on your pull request)
intel (This tag is for PR from Intel)
Merged
module: cpu (CPU specific problem, e.g., perf, algorithm)
open source
release notes: quantization (release notes category)

Projects

Status: Done


5 participants