[PrimTorch] Reference for linalg.matrix_norm #81113
Conversation
As per title. I corrected a thing or two from my previous implementation to produce better errors in some weird edge cases and to make it clearer when this function supports low-precision types and when it doesn't. We also use the bfloat16 optimisation from `vector_norm` within this function.
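The delegation to `vector_norm` rests on the fact that the Frobenius matrix norm is just the Euclidean vector norm of the matrix's flattened entries. A minimal pure-Python sketch of that identity (the helper name is made up for illustration and is not the actual reference implementation):

```python
import math

def frobenius_norm(rows):
    # The Frobenius matrix norm equals the 2-norm of the flattened entries,
    # which is why matrix_norm can delegate this case to vector_norm.
    return math.sqrt(sum(x * x for row in rows for x in row))

# sqrt(3^2 + 4^2) = 5.0
print(frobenius_norm([[3.0, 0.0], [0.0, 4.0]]))
```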
✅ No failures (0 pending) as of commit 222f67c, per the Dr. CI report.
    void _linalg_matrix_norm_checks(const Tensor& A, std::vector<int64_t>& dim, optional<ScalarType> opt_dtype, bool low_precision) {
      // A
      TORCH_CHECK(A.dim() >= 2,
                  "linalg.matrix_norm: input tensor must be a matrix or a batch of matrices");
stray comment?
In this function I perform the checks for A, dim and dtype. This is just a comment indicating the preconditions on each input, but yeah, I agree that I could've been a bit more verbose :D
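A rough Python sketch of what such precondition checks look like (names and most error messages are illustrative, not the actual implementation; the "matrix or a batch of matrices" message is the one quoted in the diff above):

```python
def matrix_norm_checks(ndim, dim):
    # A: must be a matrix or a batch of matrices, i.e. at least 2-D.
    if ndim < 2:
        raise ValueError(
            "linalg.matrix_norm: input tensor must be a matrix "
            "or a batch of matrices")
    # dim: exactly two distinct axes, normalised to non-negative indices.
    if len(dim) != 2:
        raise ValueError("linalg.matrix_norm: dim must specify exactly two dimensions")
    a, b = (d % ndim for d in dim)
    if a == b:
        raise ValueError("linalg.matrix_norm: the dimensions in dim must be distinct")
    return a, b
```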
Summary: as described above. Pull Request resolved: #81113. Approved by: https://github.com/ngimel. Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/c5330183ca453d4b0e9d3b4ad46a5adb128eb4b1. Reviewed By: jeanschmidt. Differential Revision: D38067128. Pulled By: jeanschmidt. fbshipit-source-id: 873d82bafbb33a3257871fc0da0d774b4e9ed8fd
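The low-precision point can be illustrated with a small sketch: for a bfloat16 input the reduction is accumulated at higher precision and only the final result is cast back down, which avoids losing accuracy in the sum of squares. The bit-truncation helper below is a hypothetical stand-in for a real bfloat16 cast, not PyTorch's implementation:

```python
import math
import struct

def to_bfloat16(x):
    # Hypothetical cast: bfloat16 keeps the top 16 bits of a float32's
    # bit pattern, so truncate the low 16 bits (round-to-zero for brevity).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def vector_norm_bf16(xs):
    # Accumulate the sum of squares in full Python float precision,
    # then cast only the final norm back down to bfloat16.
    acc = sum(float(x) * float(x) for x in xs)
    return to_bfloat16(math.sqrt(acc))
```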
Stack from ghstack:
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @lezcano @ezyang @ngimel @peterbell10