
Commit ae3d0f0 (1 parent: f1a9650)

Author: chenlai
Commit message: More update on the guidance

Differential Revision: [D34226823](https://our.internmc.facebook.com/intern/diff/D34226823/)
ghstack-source-id: 149109630
Pull Request resolved: #72818

File tree: 1 file changed (+44, −2 lines)


torch/csrc/jit/operator_upgraders/README.md

# Guidance for Operator Developer

PyTorch’s operators sometimes require changes to maintain the high-quality user experience (UX) that PyTorch is known for. These changes can be backward compatibility (BC) breaking, where older programs no longer run as expected on the latest version of PyTorch (an old program / new runtime problem), or forward compatibility (FC) breaking, where new programs will not run on older versions of PyTorch (a new program / old runtime problem). This guidance covers only BC-breaking changes to operators. An upgrader is a method that uses the new operator to mimic the old operator's behavior. When a new runtime loads an old model containing the old operator, the upgrader replaces the old operator in the model with the new one. Upgraders apply only to instances of old operators, bringing them into compliance with the new operator contract. Please refer to [PyTorch Operator Versioning](https://github.com/pytorch/rfcs/blob/master/RFC-0017-PyTorch-Operator-Versioning.md) for more details.
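To make the idea concrete, here is a hedged, framework-free sketch in plain Python (not the actual TorchScript upgrader API; `div_new` and `div_upgrader` are hypothetical names). Suppose an operator's semantics changed from truncation division to true division; the upgrader reproduces the old behavior using only the new operator:

```python
# Hypothetical sketch: an operator `div` changed from truncation
# division (old semantics) to true division (new semantics). The
# upgrader expresses the OLD behavior in terms of the NEW operator,
# so old models keep producing the results they were saved with.
import math

def div_new(a, b):
    """New operator semantics: always true division."""
    return a / b

def div_upgrader(a, b):
    """Mimic the old truncation behavior for integer inputs
    using only the new operator."""
    if isinstance(a, int) and isinstance(b, int):
        return math.trunc(div_new(a, b))
    return div_new(a, b)
```

At load time, an old model's calls to the old operator would be rewritten to call the upgrader; new models call the new operator directly and are never touched.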
If your change to an operator is BC-breaking in either its schema or its semantics, you will need to write an upgrader to make the change non-BC-breaking. In general, you can tell your operator change is BC-breaking if it fails `test/forward_backward_compatibility/check_forward_backward_compatibility.py`.
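That test compares operator schemas across versions. As a rough illustration only (a toy check over simplified schema strings, not the real test's logic), one BC rule from the list below, that any newly added argument must carry a default value, can be checked like this:

```python
# Toy illustration of ONE BC rule: arguments added in the new schema
# must have default values. Handles only simple schemas like
# "foo(Tensor self, int a) -> int"; NOT the real checker's logic.
def parse_args(schema):
    """Extract (name, default) pairs from a simple schema string."""
    args_part = schema.split("->")[0]
    inside = args_part[args_part.index("(") + 1 : args_part.rindex(")")]
    args = []
    for piece in inside.split(","):
        token = piece.strip().split()[-1]      # e.g. "a" or "b=1"
        name, _, default = token.partition("=")
        args.append((name, default or None))
    return args

def added_args_are_bc_safe(old_schema, new_schema):
    """True if every argument new to new_schema has a default."""
    old_names = {name for name, _ in parse_args(old_schema)}
    return all(default is not None
               for name, default in parse_args(new_schema)
               if name not in old_names)
```

For example, adding `int b` without a default fails the check, while adding `int b=1` passes it.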
### Some examples of BC/FC breaking changes

When making changes to operators, the first thing to identify is whether the change is BC/FC breaking. Again, this guidance targets only BC-breaking changes. Here are some examples to help you understand what BC/FC breaking changes may look like:
#### Backward Compatibility Breakage

- Return types are more generic than in the older version
  - Old: `foo(Tensor self, int a) -> int`
  - New: `foo(Tensor self, int a) -> Scalar`
- Argument types are more specific than in the older version
  - Old: `foo(Tensor self, Scalar a) -> int`
  - New: `foo(Tensor self, int a) -> int`
- Newly added arguments don’t have associated default values
  - Old: `foo(Tensor self, int a) -> int`
  - New: `foo(Tensor self, int a, int b) -> int`
- Internal implementation changes, even when the schema remains the same
- Deprecating an operator
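The missing-default case can be seen in plain Python terms (an analogy only, not PyTorch machinery; `foo_old`, `foo_new_breaking`, and `foo_new_safe` are made-up names): a serialized old program is like a call site frozen against the old signature, and it fails once a mandatory parameter is added:

```python
# Plain-Python analogy of "newly added arguments don't have defaults".
def foo_old(a):
    return a * 2

def foo_new_breaking(a, b):      # new argument without a default: BC-breaking
    return a * 2 + b

def foo_new_safe(a, b=0):        # new argument with a default: BC-safe
    return a * 2 + b

old_call_args = (3,)             # an "old model" only supplies `a`
broken = False
try:
    foo_new_breaking(*old_call_args)
except TypeError:                # the old call site no longer runs
    broken = True
```

With a default value the old call keeps working unchanged, which is exactly why defaults make the addition non-BC-breaking.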
#### Forward Compatibility Breakage

- Adding a new default argument anywhere other than RIGHT BEFORE the out arguments (of which there can be zero or more)
  - Old: `foo(Tensor self, int a, int b=1, Tensor(a!) out) -> (Tensor(a!))`
  - New: `foo(Tensor self, int a, int c=1, int b=1, Tensor(a!) out) -> (Tensor(a!))`
- Adding an out argument NOT at the end of the schema
  - Old: `foo(Tensor self, int a, int b=1, Tensor(a!) out) -> (Tensor(a!))`
  - New: `foo(Tensor self, int a, Tensor(d!), int b=1, Tensor(a!) out) -> (Tensor(a!), Tensor(d!))`
- Adding a default argument with a container type such as ListType or DictType (list or dict)
  - Old: `foo(Tensor self, int a, int b=1, Tensor(a!) out) -> (Tensor(a!))`
  - New: `foo(Tensor self, int a, int b=1, int[2] c=1, Tensor(a!) out) -> (Tensor(a!))`
- Changing a default argument’s name
  - This works only when the argument always takes its default value (so that serialization ignores it); in all other cases it will fail.
  - Old: `foo(Tensor self, int a, int b=1, Tensor(a!) out) -> (Tensor(a!))`
  - New: `foo(Tensor self, int a, int c=1, Tensor(a!) out) -> (Tensor(a!))`
- Changing a default argument’s default value
  - This breaks when the argument is saved with its default value by a newer runtime: an older runtime will fill in its own, old default value, which leads to wrong output.
  - Old: `foo(Tensor self, int a, int b=1, Tensor(a!) out) -> (Tensor(a!))`
  - New: `foo(Tensor self, int a, int b=4, Tensor(a!) out) -> (Tensor(a!))`
- Adding a new operator
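The default-value pitfall in the last schema pair can be simulated in plain Python (an analogy, not actual serialization code; `serialize_call` and `old_runtime_foo` are invented names). Serializers commonly omit arguments that equal the default, so an older runtime silently substitutes its own, stale default:

```python
# Plain-Python analogy of changing a default from b=1 to b=4.
NEW_DEFAULT = 4   # default known to the newer writer
OLD_DEFAULT = 1   # default baked into the older reader

def serialize_call(a, b):
    """Newer writer: omits `b` when it equals the (new) default."""
    return {"a": a} if b == NEW_DEFAULT else {"a": a, "b": b}

def old_runtime_foo(payload):
    """Older reader: fills a missing `b` with its OLD default."""
    return payload["a"] + payload.get("b", OLD_DEFAULT)

payload = serialize_call(a=10, b=4)   # author relied on the new default
result = old_runtime_foo(payload)     # old runtime computes 10 + 1 = 11,
                                      # not the intended 10 + 4 = 14
```

Because `b` was elided as "default", the older runtime produces a wrong answer rather than an error, which is what makes this class of change especially hard to catch.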

The steps to write an upgrader:
