Conversation

@isuruf
Collaborator

@isuruf isuruf commented May 23, 2025

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented May 23, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154239

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 7da28c1 with merge base 53057fc:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

isuruf added a commit that referenced this pull request May 23, 2025
ghstack-source-id: 808d53f
Pull Request resolved: #154239
@isuruf isuruf requested review from amjames and eellison May 29, 2025 22:28
@isuruf isuruf added the release notes: nn release notes category label Jun 3, 2025
# are_deterministic_algorithms_enabled.
if not torch.jit.is_scripting():
    if torch.are_deterministic_algorithms_enabled() and (
        input.is_cuda or input.is_xpu
Contributor


Isn't `not input.is_cpu` what we want?
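
For illustration, the suggestion amounts to something like the sketch below (a hedged rewrite with a made-up helper name, not the code that was merged):

import torch

def _wants_slow_decomp(input: torch.Tensor) -> bool:
    # Reviewer's suggested form: any non-CPU tensor takes the deterministic
    # (slow) decomposition path, instead of enumerating CUDA/XPU explicitly.
    return torch.are_deterministic_algorithms_enabled() and not input.is_cpu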

# Two levels are necessary to prevent TorchScript from touching
# are_deterministic_algorithms_enabled.
if not torch.jit.is_scripting():
    if torch.are_deterministic_algorithms_enabled() and (
Contributor

@eellison eellison Jun 4, 2025


Not that it super matters, but the deterministic_algorithms call is slower than `not input.is_cpu`. Should we switch them?

import timeit; import torch as t; x = t.tensor([1]); print(f"are_deterministic_algorithms_enabled: {timeit.timeit(lambda: t.are_deterministic_algorithms_enabled(), number=1000000):.6f}s vs is_cpu: {timeit.timeit(lambda: not x.is_cpu, number=1000000):.6f}s")
are_deterministic_algorithms_enabled: 0.108508s vs is_cpu: 0.062671s

I guess for CPU it's better for the deterministic_algorithms_enabled call to go first, and for CUDA the other way. It's fine as is, just thinking aloud.
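
A rough sketch of the swap being discussed, matching the benchmark above (illustrative only; the helper names are made up and neither form is necessarily what landed):

import timeit
import torch

x = torch.tensor([1])

def flag_first(t: torch.Tensor) -> bool:
    # Current order: global flag queried first, tensor attribute second.
    return torch.are_deterministic_algorithms_enabled() and not t.is_cpu

def device_first(t: torch.Tensor) -> bool:
    # Swapped order: the cheaper tensor-attribute check short-circuits
    # before the (slightly slower) global query.
    return not t.is_cpu and torch.are_deterministic_algorithms_enabled()

print(f"flag first:   {timeit.timeit(lambda: flag_first(x), number=1_000_000):.6f}s")
print(f"device first: {timeit.timeit(lambda: device_first(x), number=1_000_000):.6f}s")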

):
    # Use slow decomp whose backward will be in terms of index_put
    # importlib is required because the import cannot be top level
    # (cycle) and cannot be nested (TS doesn't support)
Contributor


💀
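
For readers wondering what the importlib workaround looks like in practice, a minimal sketch is below; the module and function names are hypothetical placeholders, not PyTorch internals:

import importlib
import torch

def _slow_path(input: torch.Tensor):
    # A top-level import of the decomposition module would create an import
    # cycle, and TorchScript does not support nested `import` statements,
    # so the module is loaded lazily via importlib in the non-scripted branch.
    if not torch.jit.is_scripting():
        decomp = importlib.import_module("mypkg.decompositions")  # hypothetical module
        return decomp.slow_upsample_decomp(input)  # hypothetical function
    raise RuntimeError("deterministic decomposition unavailable under TorchScript")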

[ghstack-poisoned]
isuruf added a commit that referenced this pull request Jun 16, 2025
ghstack-source-id: c4e21e1
Pull Request resolved: #154239
@isuruf
Collaborator Author

isuruf commented Jun 17, 2025

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jun 17, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / win-vs2022-cpu-py3 / test (default, 3, 3, lf.windows.4xlarge.nonephemeral)

Details for Dev Infra team: Raised by workflow job

Collaborator

@albanD albanD left a comment


Thanks!

)
if input.dim() == 5 and mode == "trilinear":
    assert align_corners is not None
    # Two levels are necessary to prevent TorchScript from touching
Collaborator


I guess this will also still fail with TS? (Which is fair, I just want to confirm that the fallback below will fail somewhat gracefully.) :D

Collaborator Author


I'm not sure. I kept the same structure as the upsample bilinear code path.
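
For context on the "two levels" comment: the check that TorchScript cannot handle sits one level beneath the torch.jit.is_scripting() guard, so scripting never reaches it. A simplified, hedged sketch of the pattern (not the PR's exact code):

import torch

def _use_deterministic_decomp(input: torch.Tensor) -> bool:
    # Two levels are necessary: the outer is_scripting() guard keeps
    # TorchScript from ever touching the inner branch, which calls
    # are_deterministic_algorithms_enabled().
    if not torch.jit.is_scripting():
        if torch.are_deterministic_algorithms_enabled() and (
            input.is_cuda or input.is_xpu
        ):
            return True
    return False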

@isuruf
Collaborator Author

isuruf commented Jun 26, 2025

@pytorchbot rebase -b viable/strict

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

[ghstack-poisoned]
pytorchmergebot pushed a commit that referenced this pull request Jun 26, 2025
ghstack-source-id: 3e31c45
Pull Request resolved: #154239
@pytorchmergebot
Collaborator

Successfully rebased gh/isuruf/143/orig onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via ghstack checkout https://github.com/pytorch/pytorch/pull/154239)

@isuruf
Collaborator Author

isuruf commented Jun 26, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@github-actions github-actions bot deleted the gh/isuruf/143/head branch July 27, 2025 02:21

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source, release notes: nn (release notes category)

Projects

None yet

Development


6 participants