Conversation

@ljk53 (Contributor) commented Jul 10, 2020

Stack from ghstack:

The ultimate goal is to move things that are not gated with `if (compute_requires_grad(...))`
or `if (grad_fn)` out of VariableType, so that VariableType kernels can be enabled/disabled
based upon `GradMode`. Then we can merge `AutoNonVariableTypeMode` and `NoGradGuard`.

We've moved the profiling / tracing logic out of VariableType. One remaining thing that is
not gated with such an if-statement is the `increment_version` call.

However, `gen_variable_type.py` uses information from `derivatives.yaml` to decide whether
to emit the `increment_version` call: if an output can never be differentiable (based not on a
runtime property of the variable but on a static property, e.g. it has an integral dtype),
the call is never emitted.

In principle, incrementing a tensor's version counter is orthogonal to its differentiability.

This PR makes that change and tests its impact. This logical simplification would allow us to
move the call out of VariableType into the ATen codegen.

Differential Revision: D22471643
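
For context, a minimal sketch of what the version counter bumped by increment_version protects against; Tensor._version is an internal attribute and is used here only for illustration.

import torch

a = torch.randn(3, requires_grad=True)
b = a.exp()          # autograd saves the result of exp() for its backward formula
print(b._version)    # 0

b.add_(1)            # in-place op: the generated kernel calls increment_version on b
print(b._version)    # 1

# backward() notices that a saved tensor was modified in place and raises a
# RuntimeError instead of silently computing a wrong gradient.
b.sum().backward()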

ljk53 added a commit that referenced this pull request Jul 10, 2020
ghstack-source-id: 107497069
Pull Request resolved: #41269
@dr-ci bot commented Jul 10, 2020

💊 CI failures summary and remediations

As of commit 3f32b4e (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build binary_linux_libtorch_3_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build (1/1)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Jul 23 05:33:53 FAILED: caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o
Jul 23 05:33:50 compilation terminated. 
Jul 23 05:33:50 [4308/4348] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_module_api.cpp.o 
Jul 23 05:33:50 FAILED: caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_module_api.cpp.o  
SE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -fno-strict-aliasing -Wno-write-strings -Wno-strict-aliasing -pthread -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unknown-pragmas -std=gnu++14 -MD -MT caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_module_api.cpp.o -MF caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_module_api.cpp.o.d -o caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_module_api.cpp.o -c ../test/cpp/jit/test_module_api.cpp 
Jul 23 05:33:50 ../test/cpp/jit/test_module_api.cpp:214:1: fatal error: error writing to /tmp/cciBMy7O.s: No space left on device 
Jul 23 05:33:50  } // namespace torch 
Jul 23 05:33:50  ^ 
Jul 23 05:33:50 compilation terminated. 
Jul 23 05:33:51 [4309/4348] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_save_load.cpp.o 
Jul 23 05:33:53 [4310/4348] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o 
Jul 23 05:33:53 FAILED: caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o  
-DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -fno-strict-aliasing -Wno-write-strings -Wno-strict-aliasing -pthread -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unknown-pragmas -std=gnu++14 -MD -MT caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o -MF caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o.d -o caffe2/torch/CMakeFiles/torch_python.dir/__/test/cpp/jit/test_misc.cpp.o -c ../test/cpp/jit/test_misc.cpp 
Jul 23 05:33:53 ../test/cpp/jit/test_misc.cpp:2090:1: fatal error: error writing to /tmp/ccNYke2K.s: No space left on device 
Jul 23 05:33:53  } // namespace torch 
Jul 23 05:33:53  ^ 
Jul 23 05:33:53 compilation terminated. 
Jul 23 05:33:54 [4311/4348] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/distributed/autograd/init.cpp.o 
Jul 23 05:33:54 ninja: build stopped: subcommand failed. 
Jul 23 05:33:54 Traceback (most recent call last): 
Jul 23 05:33:54   File "setup.py", line 734, in <module> 
Jul 23 05:33:54     build_deps() 

ci.pytorch.org: 1 failed



ljk53 added a commit that referenced this pull request Jul 10, 2020
Pull Request resolved: #41269

ghstack-source-id: 107549144

ljk53 added a commit that referenced this pull request Jul 14, 2020
Pull Request resolved: #41269

ghstack-source-id: 107710481

@ezyang requested a review from albanD July 14, 2020 20:07

body.append(emit_call(env, tie_return_values))
if strategy == 'use_derived':
    body.extend(emit_increment_version())
Contributor

So the idea here is, if the op in question is composite, we DON'T emit the increment version (so that the constituent pieces can take care of it)

  if not modifies_arguments:
      return []
- return ['increment_version({});'.format(arg['name']) for arg in differentiable_outputs]
+ return ['increment_version({});'.format(arg['name']) for arg in returns]
Collaborator

Why is this changed? Does it actually matter?
Do we have inplace ops that return multiple things? And if so, do we have some that return a mix of differentiable/non-differentiable outputs?

Contributor

We should increment the version on all tensors that get mutated, not just the differentiable ones. You can save non-differentiable tensors as part of a backward formula...

Collaborator

But I don't expect that these other outputs are actually modified inplace! We would be bumping the version of a Tensor we don't change inplace.

Also this is BC-breaking right?

Contributor Author

Why is this changed? Does it actually matter?
Do we have inplace ops that return multiple things? And if so, do we have some that return a mix of differentiable/non-differentiable outputs?

There are cases where an op returns a mix of differentiable and non-differentiable outputs, e.g.:
ljk53@9ab9e4b#diff-79b1a31c97eee8dda9e0dae02162beecR2986

Non-differentiable outputs are usually things like indices.
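
For reference, a minimal illustration of such a mixed output pair, using torch.cummax (the same op as in the example later in this thread); the integral indices output is never differentiable:

import torch

t = torch.randn(5, requires_grad=True)
values, indices = torch.cummax(t, dim=0)

# The floating-point values output takes part in autograd, while the
# integral indices output can never require grad.
print(values.dtype, values.requires_grad)    # torch.float32 True
print(indices.dtype, indices.requires_grad)  # torch.int64 False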

Collaborator

Oh right, these out=(val, ind) functions...
Ok!

Contributor Author

But I don't expect that these other outputs are actually modified inplace! We would be bumping the version of a Tensor we don't change inplace.

Also this is BC-breaking right?

This is a good point - just to clarify: regarding BC breakage, do you mean:

  1. code that uses x._version to read variable version explicitly, or
  2. code broken by falsely incrementing version for tensors that are not actually updated?

I call out "falsely" bumping because it's harmful as the breakage is totally unnecessary compared to truly bumping - in which case it is technically more correct behavior - if truly bumping breaks any code then it probably reveals bugs.

Seems we made effort to differentiate Tensor& and const Tensor& and seems returns at this place only contains Tensor& ones - so I assumed that these params are possibly mutated and bumped their versions. By eyeballing check seems in most cases the returns are indeed mutated, but I haven't verified all cases - do you know whether is it right assumption?

Collaborator

code that uses x._version to read variable version explicitly

This one is ok to break as this is an internal API. You might need to update a couple tests but it should be fine overall (even though we want to avoid breaking it often).

code broken by falsely incrementing version for tensors that are not actually updated?

There are two cases here indeed:

  • Cases where we were not incrementing when we should:
import torch
from torch.utils import checkpoint

a = torch.ones(10, 10, requires_grad=True)

b, ind = a.max(dim=0)

with torch.no_grad():
    if False:
        ind += 1 # Raise an error as expected
    elif True:
        # No error and wrong grad
        t = torch.zeros(10)
        t[2] = 1
        torch.cummax(t, dim=0, out=(torch.Tensor(), ind))
    else:
        pass

b.sum().backward()
print(a.grad)
  • Cases where we should not increment as the second result is not modified inplace. Not sure if this happens in practice. In any case, we should be able to tell from the signature if it is modified or not (the signature is (should) always right!).

@albanD (Collaborator) commented Jul 14, 2020

The only remaining thing that's not gated with the if-statement is the increment_version call.

What about the view handling via the view_as() calls? These are not generated for non-differentiable functions either but should still be executed when grad mode is disabled.

Hypothetically, increment_version for a tensor can be orthogonal to its differentiability.

But the differentiability impacts the way we call the function (composite or not), and the way we call the function impacts whether we call increment_version or not.
So they are not really orthogonal?

@ezyang (Contributor) commented Jul 15, 2020

What about the view handling via the view_as() calls? These are not generated for non-differentiable functions either but should still be executed when grad mode is disabled.

Hm, yes, this is probably more long tail stuff we will have to handle.
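
For context, part of that bookkeeping is that a view shares its base's version counter, so it has to stay correct regardless of differentiability or grad mode; a minimal sketch, again using the internal _version attribute only for illustration:

import torch

base = torch.zeros(4)
view = base.view(2, 2)                # view aliases base and shares its version counter
print(base._version, view._version)   # 0 0

view.add_(1)                          # in-place update through the view
print(base._version, view._version)   # 1 1  (the single shared counter was bumped)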

@albanD (Collaborator) left a comment

Sounds good.

Please do add a note about BC-breaking changes this leads to in the main comment so that we can add it to the release doc.

@albanD added the "module: bc-breaking" label Jul 15, 2020
@ljk53 (Contributor Author) commented Jul 15, 2020

Sounds good.

Please do add a note about BC-breaking changes this leads to in the main comment so that we can add it to the release doc.

Thanks for reviewing and approving this PR! I was actually using this PR as an experiment vehicle to trigger CIs, so I didn't add any reviewers - I guess I should have marked it as WIP, lol...

I might still experiment with something else before landing it.

@albanD (Collaborator) commented Jul 15, 2020

Thinking more about it, I would agree that the increment should not be associated with the differentiability of a function: even if a function is not relevant for gradient computation, it should not lead to silently wrong gradients as a side effect.

ljk53 added 3 commits July 15, 2020 23:34
@ljk53 changed the title from "[pytorch] bump up version regardless of differentiability" to "[pytorch] bump up variable version regardless of differentiability" Jul 16, 2020
ljk53 added a commit that referenced this pull request Jul 22, 2020
Pull Request resolved: #41269

ghstack-source-id: 108289718

ljk53 added a commit that referenced this pull request Jul 23, 2020
Pull Request resolved: #41269

ghstack-source-id: 108318746

@facebook-github-bot (Contributor)

This pull request has been merged in 01c406c.

@facebook-github-bot deleted the gh/ljk53/156/head branch July 27, 2020 14:18

Labels: Merged, module: bc-breaking
