What's the rationale behind TransformedTargetRegressor estimator cloning mechanism? #31949
Unanswered · MarcBresson asked this question in Q&A
In `TransformedTargetRegressor`, there is a method `_get_regressor` that can clone the given estimator. When the user calls `.fit` on `TransformedTargetRegressor`, the passed `regressor` is cloned and then fitted. However, the tags and the metadata routing all operate on the user-given estimator (which may have been fitted before).

I don't understand why the given estimator is stored in `self.regressor` (i.e. `self.regressor = regressor`) rather than only storing a clone of it (`self.regressor = clone(regressor)`). What does that bring?

I noticed this behaviour because I wrote a recursive function that iterates over all the attributes of a Python object. Because my AdaBoost `regressor` is not fitted, iterating over it raises an error: it tries to iterate over the unset `estimators_` attribute, as defined in the `BaseEnsemble.__iter__` method. A minimal sketch of that kind of traversal is below.
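A simplified stand-in for my actual helper (`walk_attrs` is a hypothetical sketch; only the `BaseEnsemble.__iter__` behaviour it trips over is real scikit-learn code):

```python
from sklearn.ensemble import AdaBoostRegressor

def walk_attrs(obj, depth=0, max_depth=2):
    """Recursively print public attributes; recurse into iterable members."""
    if depth > max_depth:
        return
    for name in dir(obj):
        if name.startswith("_"):
            continue
        try:
            getattr(obj, name)
        except AttributeError:
            # Fitted-only properties raise on an unfitted estimator; skip them.
            continue
        print("    " * depth + name)
    # BaseEnsemble defines __iter__ as iter(self.estimators_), so the class
    # looks iterable even before fit...
    if hasattr(type(obj), "__iter__"):
        try:
            for member in obj:
                walk_attrs(member, depth + 1, max_depth)
        except AttributeError:
            # ...but on an unfitted ensemble, estimators_ does not exist yet,
            # so the iteration itself raises AttributeError.
            print("    " * depth + "<cannot iterate: estimators_ is unset>")

walk_attrs(AdaBoostRegressor())
```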
Replies: 1 comment 2 replies

Cloning ensures that `fit` always runs on a fresh, unfitted copy of the regressor. Without it, a user's estimator that was already fitted could leak state between runs; cloning keeps training consistent and reproducible.

As for why the original is still stored in `self.regressor`: scikit-learn's API convention is that parameters passed to `__init__` are stored unmodified, so that `get_params` / `set_params` round-trip exactly what the user passed and `clone` can rebuild an identical unfitted estimator. The fitted clone is exposed separately as the `regressor_` attribute after `fit`.
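To make the distinction concrete, a small demonstration (the `func`/`inverse_func` choice here is arbitrary; the `regressor` vs. `regressor_` split is documented scikit-learn behaviour):

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X, y = rng.rand(50, 3), rng.rand(50)

ada = AdaBoostRegressor(n_estimators=5, random_state=0)
ttr = TransformedTargetRegressor(
    regressor=ada, func=np.log1p, inverse_func=np.expm1
)
ttr.fit(X, y)

# The user's estimator is stored untouched and remains unfitted:
print(ttr.regressor is ada)                    # True
print(hasattr(ttr.regressor, "estimators_"))   # False
# The fitted clone lives in the trailing-underscore attribute:
print(hasattr(ttr.regressor_, "estimators_"))  # True
```

After fitting, the object you passed in is untouched and still unfitted; only the internal clone stored in `regressor_` carries fitted state such as `estimators_`.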