[FSDP()][21/N] Refactor and fix _cast_buffers()
#87935
Closed
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/87935
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 88d3c09.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
awgu pushed a commit to awgu/pytorch that referenced this pull request on Nov 1, 2022 (ghstack-source-id: 6d46f28, Pull Request resolved: pytorch#87935).
awgu pushed a commit to awgu/pytorch that referenced this pull request on Nov 2, 2022 (ghstack-source-id: 0633b82, Pull Request resolved: pytorch#87935).
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request on Nov 5, 2022.
Pull Request resolved: pytorch#87935. Approved by: https://github.com/mrshenli
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request on Dec 10, 2022.
Labels: ciflow/trunk, release notes: distributed (fsdp), topic: not user facing
Stack from ghstack:
- #88260 [FSDP()][Easy] Make `fully_shard()` only `FULL_SHARD`
- #88235 [FSDP()] Have `fully_shard()` abide by `@contract`!
- #88234 [FSDP()][Easy] Rename `_State` to `_FSDPState`
- #88233 [FSDP()] Rename to `fully_shard()` and move to `_composable/`
- #88232 [FSDP][Easy] Remove unneeded `TrainingState` transition
- #88123 [FSDP] Rename `unflat_param_name` -> `fqn` for consistency
- #88122 [FSDP] Simplify `_get_buffer_names()`
- #88121 [FSDP] Remove unneeded `torch.no_grad()` context when offloading to CPU
- #87941 [FSDP()][26/N] Move `_lazy_init()` into `_fsdp_root_pre_forward()`
- #87940 [FSDP()][25/N] Add `_post_forward_reshard()`
- #87939 [FSDP()][24/N] Refactor `_lazy_init()`
- #87935 [FSDP()][21/N] Refactor and fix `_cast_buffers()` (this PR)
- #87934 [FSDP] Rename `dtype` to `buffer_name_to_dtype`
- #87933 [FSDP] Remove `device` arg from `_cast_buffers()`
- #87931 [FSDP()][18/N] Refactor `pre_forward_unshard()`
- #87930 [FSDP()][17/N] Refactor `_fsdp_root_pre_forward()`
- #87928 [FSDP()][15/N] Refactor `_init_streams()`

This PR refactors and fixes `_cast_buffers()`.

Before
Buffers were not correctly cast back to their original dtypes for submodules when using buffer mixed precision.
- `_cast_buffers(recurse=False)` incorrectly casts all buffers, including those in submodules. This is because of this outer loop over `self.modules()`: https://github.com/pytorch/pytorch/blob/c40033be162db0f94d37e7ccbd2a89d67f8b8e47/torch/distributed/fsdp/fully_sharded_data_parallel.py#L700
- There was a unit test that checked that buffers were cast as expected (`test_mixed_precision_e2e_full_shard()`). The unit test _coincidentally_ passed because all modules shared the same buffer name `"buffer"`. In `_cast_buffers()`, the `dict` mapping buffer name to original dtype is populated lazily (during `_lazy_init()`). However, the keys are unprefixed: https://github.com/pytorch/pytorch/blob/c40033be162db0f94d37e7ccbd2a89d67f8b8e47/torch/distributed/fsdp/fully_sharded_data_parallel.py#L712-L717 (a standalone sketch of this name collision follows the list).
- Thus, even though (1) `_cast_buffers(recurse=False)` was only called on the root and (2) `self._buffer_name_to_orig_dtype` had unprefixed names as keys, the unit test still passed because (1) `_cast_buffers()` still looped over all buffers despite `recurse=False` and (2) all submodules' buffers were named `"buffer"` and had the same original and low-precision dtypes and hence were cast correctly.

If we change each submodule to have its own distinct buffer name, the unit test fails; this PR makes that change to showcase the fix.
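To make the name collision concrete, here is a minimal standalone sketch (plain `nn.Module` code with hypothetical names, not FSDP internals): collecting buffer dtypes under unprefixed names collapses every submodule's `"buffer"` into one `dict` entry, while fully qualified names keep them distinct.

```python
import torch
import torch.nn as nn

class Sub(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Every submodule registers its buffer under the same local name.
        self.register_buffer("buffer", torch.zeros(2, dtype=torch.float32))

root = nn.Sequential(Sub(), Sub())

# Unprefixed names collide across submodules: one dict entry silently
# stands in for both buffers, masking any per-buffer dtype mismatch.
unprefixed = {}
for module in root.modules():
    for name, buf in module.named_buffers(recurse=False):
        unprefixed[name] = buf.dtype
print(unprefixed)  # {'buffer': torch.float32} -- one entry for two buffers

# Fully qualified (prefixed) names keep each submodule's buffer distinct.
prefixed = {name: buf.dtype for name, buf in root.named_buffers()}
print(prefixed)    # {'0.buffer': torch.float32, '1.buffer': torch.float32}
```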
After
This PR separates `_cast_buffers()` into three methods: `_get_buffers_and_dtypes_for_computation()`, `_get_buffers_and_dtypes_for_checkpoint()`, and `_cast_buffers_to_dtype_and_device()`. This separates the different use cases (casting for computation and casting for checkpointing) and their corresponding code paths. Plus, the signature of `_cast_buffers_to_dtype_and_device()` makes it clear exactly which buffers are being cast and to what dtype.
Both `_get_...()` functions assume that they are called on the root only for now. This coincides with the construction of `_buffer_name_to_orig_dtype` in the FSDP constructor, which loops over all submodules. (This means that for non-root modules, their `_buffer_name_to_orig_dtype` is populated but not used.) The `dict`'s keys are clean since the buffer cast to the original dtype happens in a `summon_full_params()` context, which cleans the names.
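As a rough illustration of this split (a simplified sketch under the naming above, not the actual FSDP implementation; the helper bodies and the toy module are assumptions), the casting helper takes explicit buffer and dtype lists, so the caller decides exactly what gets cast:

```python
from typing import List, Optional, Tuple
import torch
import torch.nn as nn

def get_buffers_and_dtypes_for_computation(
    root: nn.Module, low_prec_dtype: Optional[torch.dtype]
) -> Tuple[List[torch.Tensor], List[Optional[torch.dtype]]]:
    # For computation, target the (optional) low-precision dtype for every buffer.
    buffers = [buf for _, buf in root.named_buffers()]
    return buffers, [low_prec_dtype] * len(buffers)

def cast_buffers_to_dtype_and_device(
    buffers: List[torch.Tensor],
    buffer_dtypes: List[Optional[torch.dtype]],
    device: torch.device,
) -> None:
    # The signature makes explicit *which* buffers are cast and to *what* dtype;
    # a None dtype means "only move to the device, keep the current dtype".
    for buf, dtype in zip(buffers, buffer_dtypes):
        if dtype is None:
            buf.data = buf.to(device=device)
        else:
            buf.data = buf.to(device=device, dtype=dtype)

# Usage on a toy module with a single float32 buffer.
class M(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.register_buffer("stats", torch.zeros(4, dtype=torch.float32))

model = M()
bufs, dtypes = get_buffers_and_dtypes_for_computation(model, torch.bfloat16)
cast_buffers_to_dtype_and_device(bufs, dtypes, torch.device("cpu"))
print(model.stats.dtype)  # torch.bfloat16
```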
Follow-Ups

- We can try to move `_get_buffers_and_dtypes_for_checkpoint()` into `_state_dict_utils.py` in a follow-up.
- We may want to move to per-module buffer casting (i.e. do not have the root module cast for all submodules).