Add polygamma where n >= 2 #42499
Conversation
💊 CI failures summary and remediations

As of commit 5f9d526 (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚

🚧 1 ongoing upstream failure: these were probably caused by upstream breakages that are not fixed yet.

🚧 2 fixed upstream failures: these were probably caused by upstream breakages that were already fixed. Please rebase on the …
Hi @albanD. I'm mentioning you because I've finished implementing the polygamma changes, but CI reports some errors that I cannot fix.
Hi, yes, this issue was on master :/ You can rebase on top of master to make it disappear (note that you can see the CI status for the master branch here: https://ezyang.github.io/pytorch-ci-hud/build2/pytorch-master). I'll take a look at the code tomorrow.
albanD
left a comment
Hi, Thanks for the PR!
Some high level questions:
- Does the doc need an update to state what now works? Also, if we only test 0, 1, 2, 3, it might be worth mentioning that.
- Does the autograd test need to be updated to test higher values of n as well? Checking it, I can't find it in the generic autograd tests, so is the backward actually checked at all? (A sketch of such a check follows this list.)
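For illustration, here is a minimal sketch of what a standalone backward check could look like, using torch.autograd.gradcheck in double precision. The orders and inputs are assumptions for the sketch, not the PR's actual test:

```python
# Hypothetical standalone backward check for polygamma; not the PR's
# actual test. gradcheck compares analytical gradients against finite
# differences and needs float64 inputs away from the poles at x <= 0.
import torch
from torch.autograd import gradcheck

x = torch.rand(10, dtype=torch.float64, requires_grad=True) + 0.5
for n in (0, 1, 2, 3):
    assert gradcheck(lambda t: torch.polygamma(n, t), (x,))
```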
Thanks for your kind review!
It should work for arbitrary integer orders beyond 0, 1, 2, 3. I omitted those cases because, as far as I know, only orders up to three are commonly used in deep learning (especially in variational inference involving the gamma distribution). It's fine to add tests for higher n, but IMHO those tests should be handled carefully, because the results may differ slightly (on the order of 1e-8 to 1e-10) due to numerical rounding.
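As an illustration of the kind of tolerance-aware test this implies, here is a minimal sketch comparing against SciPy as a reference. The orders, inputs, and tolerance are assumptions for the sketch, not part of the PR:

```python
# Hypothetical tolerance-aware test for higher orders, using SciPy's
# polygamma as a reference; atol reflects the 1e-8 to 1e-10 rounding
# differences mentioned above.
import torch
from scipy import special

x = torch.linspace(0.1, 10.0, steps=50, dtype=torch.float64)
for n in (4, 5, 6):  # orders beyond the 0..3 covered above
    expected = torch.from_numpy(special.polygamma(n, x.numpy()))
    torch.testing.assert_allclose(torch.polygamma(n, x), expected,
                                  rtol=0, atol=1e-8)
```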
I added them. I'm not sure the grad tests should be placed there (around line 18703 at …). And I'd like to apologize for the messy commits.
Don't worry about this.
albanD
left a comment
Looks good!
Can you please rebase on top of master to make sure that all the ONNX tests pass?
Okay, CI will be happy with this now.
Perfect! Thanks!
facebook-github-bot
left a comment
@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
#40980
I had a few questions while implementing the polygamma function, so I opened this PR before completing it.
Is it okay for me to use the Cephes code under the same copyright notice (which is already in the PyTorch codebase)?
As far as I know (from reading https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md), there is no linting for the internal ATen library. How can I make sure my code follows this library's guidelines?
There are already digamma and trigamma functions. digamma is still needed, but trigamma becomes redundant once polygamma is added. Is it okay for trigamma to stay, or should it be removed? (A small illustration of the overlap follows below.)
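For context, a minimal sketch of the overlap, using only the public torch API (the inputs are illustrative; trigamma corresponds to polygamma of order 1):

```python
# polygamma generalizes the existing functions: order 0 is digamma and
# order 1 is trigamma, so a separate trigamma kernel is redundant once
# polygamma handles all orders. Inputs here are illustrative.
import torch

x = torch.rand(5, dtype=torch.float64) + 0.5
assert torch.allclose(torch.polygamma(0, x), torch.digamma(x))
trigamma_values = torch.polygamma(1, x)  # what a trigamma op would return
```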
By the way, the CPU version now works fine for third-order polygamma (which is what we need to play with variational inference using beta/gamma distributions), and I'm going to finish the GPU version soon. (A sketch of that use case follows.)
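As a hedged sketch of why higher orders matter for that use case: each derivative taken through digamma (which appears in Gamma/Beta log-probs and KL terms) raises the polygamma order by one, assuming digamma's backward is expressed via polygamma of order 1:

```python
# Sketch of the variational-inference motivation: each derivative of
# digamma(x) is the next polygamma order, so second-order gradients of
# Gamma/Beta KL terms need polygamma with n >= 2. Values illustrative.
import torch

alpha = torch.tensor(2.0, dtype=torch.float64, requires_grad=True)
(g1,) = torch.autograd.grad(torch.digamma(alpha), alpha, create_graph=True)
# g1 equals polygamma(1, alpha) (trigamma); one more grad needs order 2,
# which is exactly what this PR adds.
(g2,) = torch.autograd.grad(g1, alpha)
assert torch.allclose(g2, torch.polygamma(2, alpha.detach()))
```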