Conversation

@HDCharles HDCharles commented Mar 28, 2022

Stack from ghstack (oldest at bottom):

Summary: The primary issue with enabling sparsity to work with QAT
convert (unlike normal quantization convert) is that when the
parametrized module undergoes the QAT convert, the parametrizations need
to be maintained. If the parametrizations don't get transferred during
the convert, the sparsifier loses its connection to the model. In
practice this is handled by the transfer_parametrizations_and_params
function, which moves the weight, the bias, and any associated
parametrizations to the new module. This PR also adds tests for
transfer_parametrizations_and_params and type_before_parametrizations
to test_nn.py, and adds comments to the composability test code.
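
For context, a minimal sketch of the mechanism under discussion (FakeSparsity here is an illustrative stand-in for the mask parametrization the sparsifier attaches, and the module names are made up):

import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class FakeSparsity(nn.Module):
    # multiplies the weight by a fixed mask, standing in for the
    # parametrization the sparsifier registers during prepare
    def __init__(self, mask):
        super().__init__()
        self.register_buffer("mask", mask)

    def forward(self, w):
        return self.mask * w

lin = nn.Linear(4, 4)
parametrize.register_parametrization(lin, "weight", FakeSparsity(torch.ones(4, 4)))

# a parametrized module gets a dynamically generated class, so convert
# needs this helper to recover the original type for module mapping
assert parametrize.type_before_parametrizations(lin) is nn.Linear

# convert swaps in a fresh module; transferring the parametrizations keeps
# the sparsifier's handle on the model alive across the swap
new_lin = nn.Linear(4, 4)
parametrize.transfer_parametrizations_and_params(lin, new_lin)
assert parametrize.is_parametrized(new_lin, "weight")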

Test Plan: python test/test_ao_sparsity.py TestComposability
python test/test_nn.py TestNN

Differential Revision: D35240272

HDCharles added a commit that referenced this pull request Mar 28, 2022
ghstack-source-id: 6c0da00
Pull Request resolved: #74848

facebook-github-bot commented Mar 28, 2022

💊 CI status as of commit 0cc5bdc (Dr. CI): 💚 Looks good so far! There are no failures yet.

@HDCharles HDCharles changed the title composability sparsity+QAT [ao][sparsity] composability for sparsity and QAT convert Mar 28, 2022
HDCharles added a commit that referenced this pull request Mar 29, 2022
ghstack-source-id: 2153364
Pull Request resolved: #74848
HDCharles added a commit that referenced this pull request Apr 7, 2022
ghstack-source-id: 57f9e42
Pull Request resolved: #74848

@HDCharles has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

HDCharles added a commit that referenced this pull request Apr 7, 2022
ghstack-source-id: 53b5776
Pull Request resolved: #74848
HDCharles added a commit that referenced this pull request Apr 8, 2022
ghstack-source-id: 3bf0f42
Pull Request resolved: #74848

@HDCharles HDCharles requested review from albanD and lezcano April 8, 2022 01:56

@lezcano lezcano left a comment

Just one more thing I forgot! Parametrisations may be many-to-one if the right-inverse returns more than one tensor. You can find examples of these in test_multiple_inputs_parametrization. Could you add a test for this case?
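
For reference, a minimal sketch of a many-to-one parametrization (the Sum class is illustrative, not code from this PR):

import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Sum(nn.Module):
    # two stored tensors map to a single effective weight
    def forward(self, a, b):
        return a + b

    def right_inverse(self, w):
        # returning more than one tensor is what makes this many-to-one
        return w, torch.zeros_like(w)

lin = nn.Linear(3, 3)
parametrize.register_parametrization(lin, "weight", Sum())

# the stored tensors are original0, original1, ...; no single .original exists
assert not hasattr(lin.parametrizations["weight"], "original")
assert hasattr(lin.parametrizations["weight"], "original0")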


# need to initialize the param in to_module if it doesn't exist already
if not hasattr(to_module, parameter_name):
    setattr(to_module, parameter_name, deepcopy(from_module.parametrizations[parameter_name].original))
@lezcano commented on this code:

I believe this branch is not tested, is it? Furthermore, I believe it is incorrect. I think it should be

Suggested change:
-    setattr(to_module, parameter_name, deepcopy(from_module.parametrizations[parameter_name].original))
+    setattr(to_module, parameter_name, getattr(from_module, parameter_name))

Note that the original parameter may be a tuple if the parametrization is many-to-one, so setting it as an attribute would not be of much use.
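
A small self-contained illustration of the distinction (Halve is a made-up parametrization): .original is the raw stored tensor, and may not exist as a single tensor in the many-to-one case, while getattr always returns the single computed value:

import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Halve(nn.Module):
    def forward(self, w):
        return 0.5 * w

m = nn.Linear(2, 2)
parametrize.register_parametrization(m, "weight", Halve())

raw = m.parametrizations["weight"].original  # pre-parametrization tensor
computed = getattr(m, "weight")              # post-parametrization value
assert torch.equal(computed, 0.5 * raw)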

@HDCharles replied:

Yeah, it's tested at line 3282 in test_nn.py within test_transfer_parametrizations_and_params. I can split it into another test if you like; it looked like some of the other parametrization tests were grouped in similar cases, so I wasn't sure.

I added your suggested change and added a test for the many-to-one case, which works after some changes.


@HDCharles HDCharles requested a review from lezcano April 8, 2022 18:35

@lezcano lezcano left a comment

LGTM CI abiding. Thank you for the many improvements and the testing on the parametrisations end!

HDCharles added a commit that referenced this pull request Apr 8, 2022
ghstack-source-id: 0cd91d0
Pull Request resolved: #74848
@facebook-github-bot commented:

@pytorchbot merge this

(Initiating merge automatically since Phabricator Diff has merged)

@github-actions commented:

Hey @HDCharles.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Apr 11, 2022
Summary:
Pull Request resolved: #74848

The primary issue with enabling sparsity to work with QAT
convert (unlike normal quantization convert) is that when the
parametrized module undergoes the QAT convert, the parametrizations need
to be maintained. If the parametrizations don't get transferred during
the convert, the sparsifier loses its connection to the model. In
practice this is handled by the transfer_parametrizations_and_params
function, which identifies all parametrizations on the original module
and moves them (and their associated parameters) to the new module.

Test Plan:
python test/test_ao_sparsity.py TestComposability

Imported from OSS

Reviewed By: malfet

Differential Revision: D35240272

fbshipit-source-id: 08d6a938d5919ba2dfd8490b1c768fafc5b179dd
@facebook-github-bot facebook-github-bot deleted the gh/HDCharles/65/head branch April 15, 2022 14:17