Implement NaN-propagating max/min on Vec256. #13399
Conversation
colesbury left a comment
I'm pretty ambivalent about naming these Vec256::min and Vec256::max as they differ in semantics from std::min and std::max. So far we've tried to match the semantics of either the standard library or the underlying vector instructions to keep things simple to reason about.
I'd prefer a different name for the NaN-propagating min/max.
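For context, a minimal sketch (not code from this PR) of the semantic difference under discussion: std::min/std::max pick an operand based on an ordered comparison, and any comparison involving NaN is false, so a NaN can be silently dropped; a NaN-propagating variant returns NaN whenever either input is NaN.

```cpp
// Sketch only: contrasts std::min with a NaN-propagating minimum.
// std::min(a, b) returns (b < a) ? b : a, so a NaN in the second
// argument is silently discarded rather than propagated.
#include <algorithm>
#include <cmath>
#include <cstdio>

float propagating_min(float a, float b) {
  // Return NaN whenever either operand is NaN, otherwise the smaller value.
  if (std::isnan(a)) return a;
  if (std::isnan(b)) return b;
  return std::min(a, b);
}

int main() {
  float nan = std::nanf("");
  std::printf("%f\n", std::min(1.0f, nan));        // prints 1.000000
  std::printf("%f\n", propagating_min(1.0f, nan)); // prints nan
}
```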
@colesbury Depressingly, there's not a good precedent to follow here.
Based on prior discussion, my understanding is that we want to follow the NumPy example in PyTorch of propagating NaNs by default. In my opinion the best option, if we don't want to keep the existing name, is to standardize on the IEEE 754 operator names, since all of the others are overloaded across architectures and languages.
@resistor Yeah, mostly I'm thinking of operations that need to be written for both scalar types (e.g. float) and the vector types: pytorch/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp Lines 20 to 21 in f9c0a08
Along these lines, it may also be worth adding an implementation for scalar types. Something like:
template <class T>
inline T minimum(const T& a, const T& b) { ... }
The idea is that one could call vec256::minimum(float, float) and vec256::minimum(Vec256, Vec256) and it would perform the same operation. An example of this is
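A rough sketch of what that scalar overload could look like, assuming it propagates NaN to match the vectorized Vec256 version; the namespace and exact NaN handling are illustrative, not the code that landed in this PR.

```cpp
#include <cmath>

namespace vec256 {

// Illustrative scalar counterpart to the Vec256 minimum: returns NaN if
// either operand is NaN, otherwise the smaller of the two. Assumes an
// arithmetic T; for integral T the isnan checks are trivially false.
template <class T>
inline T minimum(const T& a, const T& b) {
  if (std::isnan(a)) return a;
  if (std::isnan(b)) return b;
  return a < b ? a : b;
}

} // namespace vec256
```

With something like this in place, a reduction kernel could call vec256::minimum on both the scalar tail and the Vec256 body and get the same NaN behavior from each.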
Force-pushed from 65dc2b0 to e144429
@colesbury ping
facebook-github-bot left a comment
@resistor has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Pull Request resolved: pytorch/pytorch#13399
Differential Revision: D13199957
Pulled By: resistor
fbshipit-source-id: 1565e079b13c5d4f42f2033830a7c997b7d824bc
No description provided.