[ONNX] Add full checker mode in torch.onnx.export #83186
Conversation
Fix #82589. Depends on #83201.

Why:

1. Currently, the `full_check` mode in `_C._check_onnx_proto` does nothing, because `onnx::shape_inference::InferShapes(model)` is already run as part of `onnx::checker::check_model(model)`.
2. `full_check` does work in the `onnx::checker::check_model` function: it turns on `strict_mode` in `onnx::shape_inference::InferShapes()`, which I think was the intention of this part of the code.
3. `strict_mode` catches failed shape/type inference (models that are invalid from the ONNX perspective), and ONNX Runtime can't run such invalid models, since it relies on ONNX shape/type inference to optimize the graph. Why don't we set it to True by default? Some existing users run ONNX models on other platforms, such as Caffe2, which don't require a fully valid ONNX model to run.
4. This PR doesn't change the original behavior of `check_onnx_proto`, but adds a warning message for models that can't pass strict shape/type inference, saying that those models would fail on ONNX Runtime.

[ghstack-poisoned]
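For reference, the ONNX-side behavior described in points 2 and 3 can be reproduced directly with the onnx Python package (a minimal sketch, assuming a model file at the placeholder path `model.onnx`):

```python
import onnx
from onnx import shape_inference

model = onnx.load("model.onnx")  # placeholder path

# Basic check: validates graph structure and opset conformance only.
onnx.checker.check_model(model)

# Full check: additionally runs shape/type inference in strict mode,
# so models with failing inference are rejected instead of passing.
onnx.checker.check_model(model, full_check=True)

# Roughly what full_check enables under the hood:
inferred = shape_inference.infer_shapes(model, strict_mode=True)
```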
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
ONNX had mismatched checker usage between C++ and Python, which was later fixed by onnx/onnx#4386. Since `torch.onnx.export` uses the C++ checker for graph-level checks and was built against an older version of ONNX, this improvement should be picked up. This version bump also enables #83186.

Updated 12/5/2022: This PR includes the ONNX 1.13.0 release (https://github.com/onnx/onnx/tree/rel-1.13.0), which addresses CVE-2022-25882 (https://nvd.nist.gov/vuln/detail/CVE-2022-25882).

Pull Request resolved: #90332
Approved by: https://github.com/kit1980, https://github.com/malfet
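A quick way to confirm the bumped release is what your environment actually picks up (a trivial guard; the `packaging` dependency is an assumption, not part of the PR):

```python
import onnx
from packaging import version

# The aligned C++/Python checker behavior ships with ONNX 1.13.0.
assert version.parse(onnx.__version__) >= version.parse("1.13.0")
```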
@pytorchbot merge -g
Merge started. Your change will be merged once all checks pass (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: GraphQL query `query ($owner: String!, $name: String!, $number: Int!, $cursor: String!) {` …
Details for Dev Infra team: raised by workflow job.
Hi @kit1980, do you know what issue this might be?
Looks like a GitHub hiccup?
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
```diff
-void check_onnx_proto(const std::string& proto_string, bool full_check) {
+void check_onnx_proto(const std::string& proto_string) {
```
This changes public API, no?
I think it does. It's one of the TORCH_API functions. Is there anything else I should do?
You need to follow https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy
It basically says to first emit a warning for 2 releases, then we can delete it.
But for this function specifically, I'm not sure it's even used by anyone.
We may need to revert if there are complaints.
Thanks! Users should most likely be using the onnx API from the onnx repo.
I've added BC-breaking labels.
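For downstream callers, the break looks roughly like this (a hedged sketch; `_check_onnx_proto` is a private binding, and the commented-out call shows the pre-PR signature):

```python
import torch

with open("model.onnx", "rb") as f:  # placeholder path
    proto_bytes = f.read()

# Before this PR, callers could pass the (effectively no-op) flag:
# torch._C._check_onnx_proto(proto_bytes, True)

# After this PR, the flag is gone: the strict check always runs,
# and inference failures surface as warnings.
torch._C._check_onnx_proto(proto_bytes)
```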
Stack from ghstack (oldest at bottom):
Fix #82589
Why:
1. `full_check` works in the `onnx::checker::check_model` function: it turns on `strict_mode` in `onnx::shape_inference::InferShapes()`, which I think was the intention of this part of the code.
2. `strict_mode` catches failed shape/type inference (models that are invalid from the ONNX perspective), and ONNX Runtime can't run such invalid models, since it relies on ONNX shape/type inference to optimize the graph. Why don't we set it to True by default? Some existing users run ONNX models on other platforms, such as Caffe2, which don't require a fully valid ONNX model to run.
3. This PR doesn't change the original behavior of `check_onnx_proto`, but adds a warning message for models that can't pass strict shape/type inference, saying that those models would fail on ONNX Runtime.

cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @gchanan
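Putting this together at the export level (a minimal sketch; the toy module and warning capture are illustrative, not from the PR):

```python
import warnings
import torch

class Toy(torch.nn.Module):
    def forward(self, x):
        return x + 1

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Export runs the ONNX checker on the resulting proto.
    torch.onnx.export(Toy(), (torch.randn(2, 3),), "toy.onnx")

# Per point 3 above, a model that fails strict shape/type inference
# still exports, but a warning is emitted pointing at the failure.
for w in caught:
    print(w.category.__name__, w.message)
```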