prepare removal of deprecated functionality in torch.testing #87969
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/87969
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 Failures as of commit 90869bd: NEW FAILURES - The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
pmeier left a comment:
There were two things that needed manual handling:
- `torch.testing.make_non_contiguous` needed to be replaced with one of the suggested options, as applicable (see `pytorch/torch/testing/_deprecated.py`, lines 106 to 114 in 166b5d3):

  ```python
  @warn_deprecated(
      "Depending on the use case there a different replacement options:\n\n"
      "- If you are using `make_non_contiguous` in combination with a creation function to create a noncontiguous tensor "
      "with random values, use `torch.testing.make_tensor(..., noncontiguous=True)` instead.\n"
      "- If you are using `make_non_contiguous` with a specific tensor, you can replace this call with "
      "`torch.repeat_interleave(input, 2, dim=-1)[..., ::2]`.\n"
      "- If you are using `make_non_contiguous` in the PyTorch test suite, use "
      "`torch.testing._internal.common_utils.noncontiguous_like` instead."
  )
  ```

- `torch.testing.assert_allclose` couldn't always be replaced 1-to-1 with the new-ish (stable since Feb 2022) `torch.testing.assert_close`. See #61844 for a detailed analysis of the differences.
I've highlighted all places where I did more than a simple replace of deprecated functionality below.
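For illustration, here is a minimal standalone sketch of the two public replacement paths the deprecation message above points to; the shapes and dtype are made up for the example:

```python
import torch

# Option 1: create a noncontiguous tensor with random values directly.
a = torch.testing.make_tensor((4, 4), dtype=torch.float32, device="cpu", noncontiguous=True)
assert not a.is_contiguous()

# Option 2: make an existing tensor noncontiguous while keeping its exact values.
b = torch.arange(16, dtype=torch.float32).reshape(4, 4)
b_noncontig = torch.repeat_interleave(b, 2, dim=-1)[..., ::2]
assert not b_noncontig.is_contiguous()
assert torch.equal(b, b_noncontig)
```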
```python
# The function below is a faithful replica of the former `torch.testing.assert_allclose`. This is only here,
# because it is used extensively throughout the tests in this package while needing one feature that
# the new `torch.testing.assert_close` does not offer: comparison between numpy arrays and torch tensors. See
# https://github.com/pytorch/pytorch/issues/61844 for the reasoning why this feature was removed.
```
Per comment. This only applies to the tests located in caffe2/python/operator_test/*.
```diff
  dist = Poisson(rate_zero)
  dist.log_prob(torch.ones_like(rate_zero)).backward()
- torch.testing.assert_allclose(rate_zero.grad, torch.inf)
+ self.assertEqual(rate_zero.grad, torch.inf)
```
Comparing scalar tensors to Python scalars is not supported by torch.testing.assert_close. I've opted to use self.assertEqual here since that still includes this type wrangling for BC.
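For what it's worth, a tiny sketch of one way to stay with assert_close in such a spot, by wrapping the expected Python scalar in a tensor (the variable is a stand-in, not the actual test value):

```python
import torch

grad = torch.tensor(torch.inf)  # stand-in for rate_zero.grad from the hunk above

# assert_close compares tensor against tensor, so wrap the expected scalar
# in a tensor of matching shape and dtype instead of passing torch.inf directly.
torch.testing.assert_close(grad, torch.full_like(grad, torch.inf))
```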
```diff
  trace = torch.jit.trace(fn, args)
  self.assertAllFused(trace.graph_for(*args))
- torch.testing.assert_allclose(fn(*args), trace(*args))
+ torch.testing.assert_close(fn(*args), trace(*args), equal_nan=True)
```
In torch.testing.assert_close, equal_nan is False by default. Are NaNs actually ok here, or did this pass silently before?
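To make the default concrete, a toy example (independent of the fuser test above):

```python
import torch

nan = torch.tensor(float("nan"))

try:
    torch.testing.assert_close(nan, nan)  # fails: equal_nan defaults to False
except AssertionError:
    pass

torch.testing.assert_close(nan, nan, equal_nan=True)  # passes: NaNs compare equal
```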
```diff
  py_relu_cpu = py_relu.to("cpu")

- torch.testing.assert_allclose(np_relu, py_relu_cpu)
+ self.assertEqual(np_relu, py_relu_cpu)
```
Comparing numpy.ndarray's to torch.Tensor's is not supported in torch.testing.assert_close. self.assertEqual still allows it.
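If assert_close were preferred in such a spot, one hedged option would be to convert the array explicitly first; the values below are placeholders:

```python
import numpy as np
import torch

np_relu = np.array([0.0, 1.0, 2.0], dtype=np.float32)  # placeholder for the numpy result
py_relu_cpu = torch.tensor([0.0, 1.0, 2.0])            # placeholder for the torch result

# assert_close does not mix numpy.ndarray and torch.Tensor inputs, so convert explicitly.
torch.testing.assert_close(torch.from_numpy(np_relu), py_relu_cpu)
```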
```python
idx = torch.testing.make_tensor(
    num_src, low=0, high=num_dest, dtype=idx_dtype, device=device, noncontiguous=index_noncontig
)
```
If we want to create a noncontiguous tensor right away, we can use torch.testing.make_tensor directly.
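Concretely, a standalone sketch of that pattern; sizes, dtype, and device are placeholders rather than the test's actual parameters:

```python
import torch

num_src, num_dest = 10, 20  # placeholder sizes

idx = torch.testing.make_tensor(
    num_src, low=0, high=num_dest, dtype=torch.int64, device="cpu", noncontiguous=True
)
assert not idx.is_contiguous()
assert idx.numel() == num_src
```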
```diff
  src = torch.randn(num_copy, *other_sizes, device=device)
  if not src_contig:
-     src = torch.testing.make_non_contiguous(src)
+     src = noncontiguous_like(src)
```
Not using torch.testing.make_tensor here since src samples from a normal distribution and the former would sample from a uniform one. If the intent is to just sample positive as well as negative values, LMK.
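A sketch of the distinction, using the internal helper the deprecation message recommends for the test suite (noncontiguous_like is a private utility, so its import location is an assumption based on that message):

```python
import torch
from torch.testing._internal.common_utils import noncontiguous_like

# Values are drawn from a normal distribution, as in the test ...
src = torch.randn(8, 3)

# ... and then made noncontiguous while preserving those exact values.
src_noncontig = noncontiguous_like(src)
assert not src_noncontig.is_contiguous()
assert torch.equal(src, src_noncontig)

# torch.testing.make_tensor(..., noncontiguous=True) would instead sample from a
# uniform distribution, which is why it was not used here.
```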
```diff
  idx = torch.randperm(num_dest, dtype=dtype, device=device).narrow(0, 0, num_copy)
  if not index_contig:
-     idx = torch.testing.make_non_contiguous(idx)
+     idx = noncontiguous_like(idx)
```
Same as above but for a random permutation.
```diff
  fsdp_loss = fsdp_loss.cuda()
  fsdp_unsharded_params = get_full_params(fsdp_model)
- torch.testing.assert_allclose(ref_loss, fsdp_loss)
+ torch.testing.assert_close(ref_loss, fsdp_loss, check_dtype=False)
```
Strict dtype checking is the default for torch.testing.assert_close. Are mismatching dtypes ok here?
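For reference, a toy example of what check_dtype=False relaxes; the float64/float32 mismatch here is only an assumption about how ref_loss and fsdp_loss differ:

```python
import torch

ref_loss = torch.tensor(1.5, dtype=torch.float64)   # placeholder value
fsdp_loss = torch.tensor(1.5, dtype=torch.float32)  # placeholder value

# With the default check_dtype=True this would fail on the dtype mismatch;
# check_dtype=False compares only the (promoted) values.
torch.testing.assert_close(ref_loss, fsdp_loss, check_dtype=False)
```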
mruberry left a comment:
LGTM!
There are several test failures, but I can't imagine they're related to this PR.
See #87969 or #86586 for the reasoning. Pull Request resolved: #87970 Approved by: https://github.com/mruberry
See #87969 or #86586 for the reasoning. Pull Request resolved: #87971 Approved by: https://github.com/mruberry
See #87969 or #86586 for the reasoning. Pull Request resolved: #87972 Approved by: https://github.com/mruberry
See #87969 or #86586 for the reasoning. Pull Request resolved: #87973 Approved by: https://github.com/mruberry
See #87969 or #86586 for the reasoning. Pull Request resolved: #87974 Approved by: https://github.com/mruberry
Stack from ghstack (oldest at bottom):
Redo of #86586 with all BC breaking changes granularly placed into separate commits.
Per title. Deprecation happened on Feb 25, 2022 in c6f1bbc, which made it into the 1.12 release. Since it is now 245 days later and the next release will be 1.14, the removals later in the stack comply with the BC policy.