Publicly expose _LRScheduler to LRScheduler #88503
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88503
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 1ea11ec.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
soulitzer
left a comment
Cool!
This PR has been accepted with the accept2ship label. Attempting to merge now. @pytorchbot merge -l
Merge started. Your change will be merged once all checks on your PR pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 additional job has failed; first few of them are: trunk. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@vadimkantorov Thanks for bringing the relevant LRScheduler issues to our attention. Unfortunately, we don't have anyone focused on and working on them today.
Summary: After pytorch/pytorch#88503, these isinstance checks no longer pass. Example: https://github.com/pytorch/tnt/actions/runs/3434539858/jobs/5725945075 TODO: support older torch versions that don't have `LRScheduler` and need `_LRScheduler`. Differential Revision: D41177335 fbshipit-source-id: 0fc2009fc5fd7289e95a37d7b417476d7c4f2b61
Following #88503, we should also update the pyi file Pull Request resolved: #88818 Approved by: https://github.com/soulitzer
Summary: Pull Request resolved: meta-pytorch#271 After pytorch/pytorch#88503, these isinstance checks no longer pass. Example: https://github.com/pytorch/tnt/actions/runs/3434539858/jobs/5725945075 Reviewed By: janeyx99 Differential Revision: D41177335 fbshipit-source-id: 243b2fb4cdb06561588b56a9a9f08a7144c6ae9b
Summary: Pull Request resolved: meta-pytorch#271 After pytorch/pytorch#88503, these isinstance checks no longer pass. Example: https://github.com/pytorch/tnt/actions/runs/3434539858/jobs/5725945075 Reviewed By: janeyx99 Differential Revision: D41177335 fbshipit-source-id: c09398e06bebc2e9362f369143d4d8dd7c21e6e6
Summary: Pull Request resolved: #271 After pytorch/pytorch#88503, these isinstance checks no longer pass. Example: https://github.com/pytorch/tnt/actions/runs/3434539858/jobs/5725945075 Reviewed By: daniellepintz, janeyx99 Differential Revision: D41177335 fbshipit-source-id: a06cf2f1c34eb0e697df716d8ac6e393359e0c4c
Fixes pytorch#61232 Pull Request resolved: pytorch#88503 Approved by: https://github.com/soulitzer
Following pytorch#88503, we should also update the pyi file Pull Request resolved: pytorch#88818 Approved by: https://github.com/soulitzer
Summary: pytorch/pytorch#88503 introduces the public version `LRScheduler`; however, `isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler)` no longer works because of https://github.com/pytorch/pytorch/blob/1ea11ecb2eea99eb552603b7cf5fbfc59659832d/torch/optim/lr_scheduler.py#L166-L169. It's a bit tricky to make it BC-compatible for torch version <= 1.13. V1 of this diff uses a try/except block to import `LRScheduler` and make it available in `detectron2.solver`; then the whole of D2 (facebookresearch@11528ce) uses this version of `LRScheduler`. There are two drawbacks, though: - it adds a little mental burden to figure out what D2's (facebookresearch@11528ce083dc9ff83ee3a8f9086a1ef54d2a402f) `LRScheduler` is; previously it was clear that `LRScheduler`/`_LRScheduler` came from `torch`. - it has a name collision with `hooks.LRScheduler`; e.g., in `hooks.py` I have to do `LRScheduler as _LRScheduler`. But I couldn't find a better solution; maybe use a try/except block in every file? Differential Revision: D42111273 fbshipit-source-id: 613cd35a1e0bdb9b8edd852042df047aba021d87
Summary: Pull Request resolved: #4709 pytorch/pytorch#88503 introduces the public version `LRScheduler`; however, `isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler)` no longer works because of https://github.com/pytorch/pytorch/blob/1ea11ecb2eea99eb552603b7cf5fbfc59659832d/torch/optim/lr_scheduler.py#L166-L169. It's a bit tricky to make it BC-compatible for torch version <= 1.13. V1 of this diff uses a try/except block to import `LRScheduler` and make it available in `detectron2.solver`; then the whole of D2 (11528ce) uses this version of `LRScheduler`. There are two drawbacks, though: - it adds a little mental burden to figure out what D2's (11528ce083dc9ff83ee3a8f9086a1ef54d2a402f) `LRScheduler` is; previously it was clear that `LRScheduler`/`_LRScheduler` came from `torch`. - it has a name collision with `hooks.LRScheduler`; e.g., in `hooks.py` I have to do `LRScheduler as _LRScheduler`. But I couldn't find a better solution; maybe use a try/except block in every file? Reviewed By: sstsai-adl Differential Revision: D42111273 fbshipit-source-id: 0269127de1ba3ef90225c5dfe085bb209f6cf4d1
Summary: Introduce `torchtnt.utils.lr_scheduler.TLRScheduler` to manage compatibility across torch versions after the exposure of `LRScheduler` in pytorch/pytorch#88503. It seems the issue still persists with `1.13.1` (see #285). Replace the get_version logic with a try/except falling back to `_LRScheduler`. Fixes #285 (https://github.com/pytorch/tnt/issues/285) Pull Request resolved: #286 Reviewed By: hudeven Differential Revision: D42220767 Pulled By: ananthsub fbshipit-source-id: 3b434ba4e0c8efa1a81ab5a88ae58765bb23020e
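The try/except shim described in the torchtnt commit above can be sketched as below. A fake module stands in for `torch.optim.lr_scheduler` so the example runs without torch installed; the fallback logic is the part that matters: prefer the public `LRScheduler` name on newer torch, fall back to the private `_LRScheduler` on older releases.

```python
import sys
import types

# Fake an "old torch" lr_scheduler module that only exposes the private
# _LRScheduler name (a stand-in; real torch is not imported).
fake = types.ModuleType("fake_lr_scheduler")
fake._LRScheduler = type("_LRScheduler", (), {})
sys.modules["fake_lr_scheduler"] = fake

# The shim: a from-import of a missing attribute raises ImportError,
# so the except branch binds the old private name under one alias.
try:
    from fake_lr_scheduler import LRScheduler as TLRScheduler
except ImportError:
    from fake_lr_scheduler import _LRScheduler as TLRScheduler

# Against the "old" module, the fallback branch was taken:
assert TLRScheduler.__name__ == "_LRScheduler"
```

Downstream code then references only `TLRScheduler`, so isinstance checks and type hints work on both sides of the rename.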
Fixes #61232