Conversation

@Vivswan
Contributor

@Vivswan Vivswan commented Jan 6, 2023

Added a super().__init__() call to Module for complex user modules derived from multiple Python classes.
The call is added at the end of Module.__init__, so it doesn't change any existing functionality of the Module class.

I am working on a library for simulating analog neural networks in PyTorch,
and this small change is really useful for that. There are many other useful cases as well, especially for deeper module or MRO hierarchies.
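To illustrate the problem the PR targets, here is a minimal sketch that does not require torch: `ModuleWithoutSuper` models the old nn.Module behavior (its `__init__` never calls `super().__init__()`), `ModuleWithSuper` models the proposed behavior, and the class names and the `extra` attribute are invented for the example.

```python
class ModuleWithoutSuper:
    def __init__(self):
        self._parameters = {}  # nn.Module-style setup
        # no super().__init__() -> the MRO chain stops here

class ModuleWithSuper:
    def __init__(self, *args, **kwargs):
        self._parameters = {}
        super().__init__(*args, **kwargs)  # keeps the MRO chain going

class Mixin:
    def __init__(self, *args, **kwargs):
        self.extra = "configured"
        super().__init__(*args, **kwargs)

class Broken(ModuleWithoutSuper, Mixin):
    pass

class Works(ModuleWithSuper, Mixin):
    pass

print(hasattr(Broken(), "extra"))  # False: Mixin.__init__ is never reached
print(hasattr(Works(), "extra"))   # True: cooperative super() runs the whole chain
```

In the `Broken` case, Python's MRO is `Broken -> ModuleWithoutSuper -> Mixin -> object`, but the chain silently stops at the module class, so the mixin never initializes.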

Issues: #28746, #48626, #61662, #74036

 Added super init to Module for user modules derived from multiple python classes
@pytorch-bot

pytorch-bot bot commented Jan 6, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91819

Note: Links to docs will display an error until the docs builds have been completed.

❌ 6 Failures

As of commit 77cdda8:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base 438f12d:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@linux-foundation-easycla

linux-foundation-easycla bot commented Jan 6, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: Vivswan / name: Vivswan Shah (ec46af7)

Collaborator

@albanD albanD left a comment

We tried this before and it was reverted: #74036
I can check what the status is now.

@facebook-github-bot
Contributor

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@Vivswan
Contributor Author

Vivswan commented Jan 11, 2023

I see (#74036).

I first tried putting super().__init__() at the beginning of nn.Module.__init__; in that case it causes a lot of errors, which I think was the reason #74036 was reverted as well. That's why I added the call at the end, so it doesn't cause any functional issues with nn.Module or other modules that depend on it.

I also understand your concern about forwarding args and kwargs to the super class; that can wait until we figure out the proper way to do it.

In the meantime, we can still add super().__init__() without arguments, which would already help the majority of developers dealing with this issue.

Collaborator

@albanD albanD left a comment

In general, this is BC-breaking for anyone who is currently mixing nn.Module with other classes and manually working around the missing super().__init__() call...
I guess that is OK to break, since it surfaces as an explicit error.
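For context, here is a sketch of the kind of manual workaround this comment refers to, under the assumption that mixin users call each base initializer explicitly; `Module` is a simplified stand-in for the old nn.Module, and `ShapeMixin`/`MyLayer` are hypothetical names for the example.

```python
class Module:
    def __init__(self):
        self._parameters = {}  # like the old nn.Module: the chain stops here

class ShapeMixin:
    def __init__(self, shape):
        self.shape = shape

class MyLayer(Module, ShapeMixin):
    def __init__(self, shape):
        # The workaround: call each base initializer explicitly instead of
        # relying on cooperative super(), because Module never forwards it.
        Module.__init__(self)
        ShapeMixin.__init__(self, shape)

layer = MyLayer((2, 5))
print(layer.shape)  # (2, 5)
```

If Module started forwarding `super().__init__()` unconditionally, code written this way could end up running the mixin initializer twice, or failing because the forwarded call carries no arguments; hence the BC concern.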

@Vivswan
Contributor Author

Vivswan commented Jan 11, 2023

If approved, the nn.Module docs could carry a prominent notice that it now calls super().__init__(*args, **kwargs).

OR

To help maintainers of large libraries avoid errors from this change, we can provide an option to disable the call to super().__init__(*args, **kwargs); the option can be deprecated in the future.

Something like this:

class Module:
    _call_super: bool = True  # to be deprecated in the future

    def __init__(self, *args, **kwargs) -> None:
        ...  # existing Module.__init__ body
        if Module._call_super:
            super().__init__(*args, **kwargs)

This way, anyone can disable the super().__init__(*args, **kwargs) call with:

import torch
torch.nn.Module._call_super = False

Let me know if you want me to implement this. 🙂

@albanD
Collaborator

albanD commented Jan 11, 2023

I like this idea of having a class attribute to control the behavior as a way to make the transition smoother.

Could you add this new class attribute and make it False by default?
That will keep this PR backward compatible while still letting you enable the behavior when you need it.

We can do a follow-up, BC-breaking PR that switches the default to True.

default value for call_super_init is False
@Vivswan
Contributor Author

Vivswan commented Jan 11, 2023

Added call_super_init: bool = False to nn.Module
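The opt-in shape of the merged change can be sketched as follows; the `call_super_init` name matches the PR, but the `Module` body is a simplified stand-in, not the real torch.nn.Module, and `NamedMixin`/`NamedModule` are invented for illustration.

```python
class Module:
    call_super_init: bool = False  # default keeps the old behavior (BC-safe)

    def __init__(self, *args, **kwargs) -> None:
        self._parameters = {}  # stand-in for nn.Module's real setup
        if self.call_super_init:
            super().__init__(*args, **kwargs)

class NamedMixin:
    def __init__(self, name, **kwargs):
        self.name = name
        super().__init__(**kwargs)

class NamedModule(Module, NamedMixin):
    call_super_init = True  # opt in, per subclass

    def __init__(self, name):
        super().__init__(name)

m = NamedModule("w1")
print(m.name)  # w1: the mixin initializer ran via the cooperative chain
```

Because the attribute is a class attribute, a library can opt in once on its own base class without touching every subclass.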

Vivswan added a commit to Vivswan/AnalogVNN that referenced this pull request Jan 12, 2023
Signed-off-by: Vivswan Shah <58091053+Vivswan@users.noreply.github.com>
@drisspg drisspg added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jan 12, 2023
Collaborator

@albanD albanD left a comment

The change looks good to me. Could you just add a test in test/test_nn.py to make sure it does the right thing when a subclass sets this flag to True, and then we can merge this!

@facebook-github-bot
Contributor

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@Vivswan
Contributor Author

Vivswan commented Jan 18, 2023

Can you give me a link to documentation on how to create tests for PyTorch?

@albanD
Collaborator

albanD commented Jan 18, 2023

Ho, I'm not sure we have docs for that :/
You can add a new function here:

self.assertEqual(m(input).size(), (2, 5))

Call it test_module_super_init and check that you do get the behavior you want.
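A hedged sketch of the kind of check being asked for (the real test lives in test/test_nn.py and uses PyTorch's own test harness; plain `unittest` and the stand-in `Module`/`Tracker` classes here are assumptions for illustration):

```python
import unittest

class Module:
    """Simplified stand-in for torch.nn.Module with the new flag."""
    call_super_init: bool = False

    def __init__(self, *args, **kwargs) -> None:
        self._parameters = {}
        if self.call_super_init:
            super().__init__(*args, **kwargs)

class Tracker:
    def __init__(self, *args, **kwargs):
        self.tracker_ran = True  # proof that the mixin initializer executed
        super().__init__(*args, **kwargs)

class TestModuleSuperInit(unittest.TestCase):
    def test_module_super_init(self):
        class Opted(Module, Tracker):
            call_super_init = True

        class Default(Module, Tracker):
            pass  # flag stays False

        # Opting in runs the full MRO chain; the default does not.
        self.assertTrue(getattr(Opted(), "tracker_ran", False))
        self.assertFalse(getattr(Default(), "tracker_ran", False))

result = TestModuleSuperInit("test_module_super_init").run()
print(result.wasSuccessful())  # True
```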

@albanD
Collaborator

albanD commented Jan 20, 2023

The test failures are relevant. Not sure why at a glance...

@Vivswan
Contributor Author

Vivswan commented Jan 21, 2023

All the failed tests come from the following code, since nn.Module now accepts *args and **kwargs:

# == Verify passing a nonexistent arg errors out. ==
if check_nonexistent_arg:
    with test_cls.assertRaises(TypeError):
        m = module_cls(*args, **kwargs, nonexistent_arg='foo')

Either we can change this test,
OR
we can raise a RuntimeError if args or kwargs are non-empty and call_super_init is False.

@albanD
Collaborator

albanD commented Jan 24, 2023

Ho, that is interesting haha
Raising an error when call_super_init is False sounds good.
If you could raise the same error that is raised today (as closely as possible, including type and message), that would be best so that we don't have to update the tests (and thus reduce the chance of breaking user code).
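The "raise the same error" idea can be sketched like this, assuming the goal is to mimic the TypeError CPython itself emits for a signature mismatch; the stand-in `Module` and the exact message strings are assumptions, not the merged implementation.

```python
class Module:
    call_super_init: bool = False

    def __init__(self, *args, **kwargs) -> None:
        if not self.call_super_init:
            # Mimic CPython's own TypeError wording so the existing
            # check_nonexistent_arg test keeps passing unchanged.
            if kwargs:
                raise TypeError(
                    f"{type(self).__name__}.__init__() got an unexpected "
                    f"keyword argument '{next(iter(kwargs))}'")
            if args:
                raise TypeError(
                    f"{type(self).__name__}.__init__() takes 1 positional "
                    f"argument but {len(args) + 1} were given")
        self._parameters = {}
        if self.call_super_init:
            super().__init__(*args, **kwargs)

try:
    Module(nonexistent_arg='foo')
except TypeError as e:
    print(e)  # Module.__init__() got an unexpected keyword argument 'nonexistent_arg'
```

A TypeError (rather than the RuntimeError floated earlier) keeps the error type identical to what `assertRaises(TypeError)` already expects.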

@facebook-github-bot
Contributor

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@albanD
Collaborator

albanD commented Feb 1, 2023

Looks good!
Just double-checking internally to make sure this doesn't get reverted again, and then I'll merge it!

@albanD
Collaborator

albanD commented Feb 1, 2023

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 1, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / win-vs2019-cuda11.6-py3 / test (default, 3, 5, windows.g5.4xlarge.nvidia.gpu)

Details for Dev Infra team Raised by workflow job

@albanD
Collaborator

albanD commented Feb 1, 2023

@pytorchbot merge -f "Flaky test"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@Vivswan Vivswan deleted the patch-1 branch February 4, 2023 18:18