Conversation

@lezcano (Collaborator) commented Jul 8, 2022

Stack from ghstack:

As per the title. I corrected a thing or two from my previous implementation to produce better errors in some weird edge cases and to make it clearer when this function supports low_precision types and when it doesn't.

We also reuse the bfloat16 optimisation from `vector_norm` within this function.

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @lezcano @ezyang @ngimel @peterbell10
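
For context, the orders handled here (other than the SVD-based ones) can be lowered to `vector_norm` reductions over the two matrix dimensions, which is what lets the bfloat16 optimisation mentioned above be reused. Below is a minimal, hypothetical Python sketch of that lowering; the function name `matrix_norm_sketch` and the exact dispatch are illustrative only and are not the actual PrimTorch reference:

```python
import torch

def matrix_norm_sketch(A: torch.Tensor, ord="fro") -> torch.Tensor:
    """Hypothetical sketch (not the PrimTorch reference): compute a matrix
    norm over the last two dims by lowering to vector_norm, so that any
    low-precision (e.g. bfloat16) fast path in vector_norm is reused."""
    if ord == "fro":
        # Frobenius norm: 2-norm over all entries of each matrix.
        return torch.linalg.vector_norm(A, 2, dim=(-2, -1))
    if ord in (1, -1, float("inf"), float("-inf")):
        # ord = +-1:   max/min absolute column sum -> sum over rows (dim=-2)
        # ord = +-inf: max/min absolute row sum    -> sum over cols (dim=-1)
        sum_dim = -2 if ord in (1, -1) else -1
        abs_sums = torch.linalg.vector_norm(A, 1, dim=sum_dim, keepdim=True)
        reduce = torch.amax if ord > 0 else torch.amin
        return reduce(abs_sums, dim=(-2, -1))
    raise ValueError(f"ord = {ord!r} is not handled in this sketch")
```

For a sanity check, the output of such a sketch can be compared against `torch.linalg.matrix_norm(A, ord)` on float32 inputs; the ord = ±2 and 'nuc' cases go through the SVD instead and are deliberately left out here.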

@facebook-github-bot (Contributor) commented Jul 8, 2022

🔗 Helpful links

✅ No Failures (0 Pending)

As of commit 222f67c (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

lezcano added a commit that referenced this pull request Jul 8, 2022
ghstack-source-id: db85492
Pull Request resolved: #81113
@lezcano changed the title from [PrimTorch] Reference for matrix_norm to [PrimTorch] Reference for linalg.matrix_norm on Jul 8, 2022
@lezcano added the module: linear algebra, release notes: composability, and topic: not user facing labels on Jul 8, 2022
lezcano added a commit that referenced this pull request Jul 9, 2022
ghstack-source-id: 5898f73
Pull Request resolved: #81113
lezcano added a commit that referenced this pull request Jul 9, 2022
ghstack-source-id: 49e2ebd
Pull Request resolved: #81113
void _linalg_matrix_norm_checks(const Tensor& A, std::vector<int64_t>& dim, optional<ScalarType> opt_dtype, bool low_precision) {
  // A
  TORCH_CHECK(A.dim() >= 2,
              "linalg.matrix_norm: input tensor must be a matrix or a batch of matrices");
Collaborator:
stray comment?

Collaborator Author (@lezcano):
In this function I perform the checks for A, dim, and dtype. This is just a comment indicating what the preconditions on each input are, but yeah, I agree that I could've been a bit more verbose :D
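
To illustrate the pattern being discussed, here is a hypothetical Python analogue (not the actual ATen code) where the preconditions are grouped under a short comment naming the input they constrain; the helper name, the `dim`/`dtype` checks, and every error message other than the one quoted in the diff are assumptions:

```python
import torch

def matrix_norm_checks_sketch(A, dim, dtype=None, low_precision=False):
    """Hypothetical sketch of per-input precondition checks, grouped by
    the argument they constrain. Not the actual ATen implementation."""
    # A
    if A.dim() < 2:
        raise ValueError(
            "linalg.matrix_norm: input tensor must be a matrix or a batch of matrices")

    # dim
    if len(dim) != 2:
        raise ValueError("linalg.matrix_norm: dim must be a 2-tuple")
    if dim[0] % A.dim() == dim[1] % A.dim():
        raise ValueError("linalg.matrix_norm: dims must be different")

    # dtype
    if dtype is not None:
        if not (dtype.is_floating_point or dtype.is_complex):
            raise ValueError("linalg.matrix_norm: dtype must be floating point or complex")
        # Assumption: low-precision dtypes are only accepted when the chosen
        # order lowers to vector_norm (the `low_precision` flag in the C++ helper).
        if not low_precision and dtype in (torch.float16, torch.bfloat16):
            raise ValueError("linalg.matrix_norm: half and bfloat16 are not supported for this order")
```

Grouping the checks this way keeps the `// A`, `// dim`, `// dtype` comments meaningful as section markers rather than stray comments.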

lezcano added 2 commits July 21, 2022 11:29
facebook-github-bot pushed a commit that referenced this pull request Jul 22, 2022
Summary:
Pull Request resolved: #81113
Approved by: https://github.com/ngimel

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/c5330183ca453d4b0e9d3b4ad46a5adb128eb4b1

Reviewed By: jeanschmidt

Differential Revision: D38067128

Pulled By: jeanschmidt

fbshipit-source-id: 873d82bafbb33a3257871fc0da0d774b4e9ed8fd
@facebook-github-bot deleted the gh/Lezcano/114/head branch on July 25, 2022 14:18
@kit1980 added the Merged label on Mar 24, 2023

Labels

cla signed, Merged, module: linear algebra, module: primTorch, open source, release notes: composability, topic: not user facing
