sklearn.ensemble.VotingRegressor

class sklearn.ensemble.VotingRegressor(estimators, weights=None, n_jobs=None)

Prediction voting regressor for unfitted estimators.
New in version 0.21.
A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. It then averages their individual predictions to form a final prediction.
Read more in the User Guide.
- Parameters
  - estimators : list of (string, estimator) tuples
    Invoking the fit method on the VotingRegressor will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to None or 'drop' using set_params.
  - weights : array-like, shape (n_regressors,), optional (default=None)
    Sequence of weights (float or int) to weight the occurrences of predicted values before averaging. Uses uniform weights if None.
  - n_jobs : int or None, optional (default=None)
    The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- Attributes
  - estimators_ : list of regressors
    The collection of fitted sub-estimators as defined in estimators that are not None.
  - named_estimators_ : Bunch object, a dictionary with attribute access
    Attribute to access any fitted sub-estimators by name.
See also
VotingClassifier
Soft Voting/Majority Rule classifier.
Examples
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3  5.7 11.8 19.7 28.  40.3]
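The example above uses uniform weights. As a minimal sketch (not part of the original example), the weights parameter described earlier can bias the average toward one of the regressors; the data and estimators below are the same as in the doctest above.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])

# Weight the linear model twice as heavily as the random forest when averaging.
er_weighted = VotingRegressor(
    estimators=[('lr', LinearRegression()),
                ('rf', RandomForestRegressor(n_estimators=10, random_state=1))],
    weights=[2, 1])
print(er_weighted.fit(X, y).predict(X))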
Methods

fit(self, X, y[, sample_weight])      Fit the estimators.
fit_transform(self, X[, y])           Fit to data, then transform it.
get_params(self[, deep])              Get the parameters of the ensemble estimator.
predict(self, X)                      Predict regression target for X.
score(self, X, y[, sample_weight])    Return the coefficient of determination R^2 of the prediction.
set_params(self, **params)            Set the parameters for the ensemble estimator.
transform(self, X)                    Return predictions for X for each estimator.
- __init__(self, estimators, weights=None, n_jobs=None)
  Initialize self. See help(type(self)) for accurate signature.
- fit(self, X, y, sample_weight=None)
  Fit the estimators.
  - Parameters
    - X : {array-like, sparse matrix}, shape (n_samples, n_features)
      Training vectors, where n_samples is the number of samples and n_features is the number of features.
    - y : array-like, shape (n_samples,)
      Target values.
    - sample_weight : array-like, shape (n_samples,) or None
      Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.
  - Returns
    - self : object
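A short sketch of the sample_weight argument described above, reusing the toy data from the class example. Both LinearRegression and RandomForestRegressor accept sample weights, so forwarding them is allowed here.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])
sample_weight = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0])  # emphasize the last two samples

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10, random_state=1))])
er.fit(X, y, sample_weight=sample_weight)  # the weights are forwarded to every sub-estimator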
- fit_transform(self, X, y=None, **fit_params)
  Fit to data, then transform it.
  Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
  - Parameters
    - X : numpy array of shape [n_samples, n_features]
      Training set.
    - y : numpy array of shape [n_samples]
      Target values.
  - Returns
    - X_new : numpy array of shape [n_samples, n_features_new]
      Transformed array.
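A sketch of fit_transform on the toy data from the class example. The assumption here, consistent with the transform summary ("Return predictions for X for each estimator"), is that the transformed array stacks each fitted regressor's predictions as one column; treat the exact layout as an assumption rather than a guarantee.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10, random_state=1))])
per_estimator = er.fit_transform(X, y)
print(per_estimator.shape)  # (6, 2): one column of predictions per fitted regressor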
- get_params(self, deep=True)
  Get the parameters of the ensemble estimator.
  - Parameters
    - deep : bool
      Setting it to True gets the various estimators and the parameters of the estimators as well.
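A sketch of what deep=True exposes: besides the ensemble's own parameters, the named sub-estimators and their nested parameters (addressable with the usual '<name>__<parameter>' syntax) are returned as well. The estimator names below are illustrative.

from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10))])
params = er.get_params(deep=True)
print('weights' in params)           # True: a parameter of the ensemble itself
print('rf__n_estimators' in params)  # True: a nested parameter of the 'rf' sub-estimator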
- predict(self, X)
  Predict regression target for X.
  The predicted regression target of an input sample is computed as the mean of the regression targets predicted by the estimators in the ensemble.
  - Parameters
    - X : {array-like, sparse matrix} of shape (n_samples, n_features)
      The input samples.
  - Returns
    - y : array of shape (n_samples,)
      The predicted values.
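A sketch checking the statement above on the toy data from the class example: with uniform weights, predict(X) should match the plain mean of the fitted sub-estimators' individual predictions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10, random_state=1))]).fit(X, y)
manual_mean = np.mean([est.predict(X) for est in er.estimators_], axis=0)
print(np.allclose(er.predict(X), manual_mean))  # True: the ensemble prediction is the average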
- score(self, X, y, sample_weight=None)
  Return the coefficient of determination R^2 of the prediction.
  The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
  - Parameters
    - X : array-like, shape = (n_samples, n_features)
      Test samples. For some estimators this may be a precomputed kernel matrix instead, of shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
    - y : array-like, shape = (n_samples,) or (n_samples, n_outputs)
      True values for X.
    - sample_weight : array-like, shape = (n_samples,), optional
      Sample weights.
  - Returns
    - score : float
      R^2 of self.predict(X) w.r.t. y.
  Notes
  The R^2 score used when calling score on a regressor will use multioutput='uniform_average' from version 0.23 to keep consistent with r2_score. This will influence the score method of all the multioutput regressors (except for MultiOutputRegressor). To specify the default value manually and avoid the warning, please either call r2_score directly or make a custom scorer with make_scorer (the built-in scorer 'r2' uses multioutput='uniform_average').
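A short worked check of the R^2 definition above, evaluated on the training data of the class example for simplicity; score(X, y) should agree with computing 1 - u/v by hand.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])
er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10, random_state=1))]).fit(X, y)

y_pred = er.predict(X)
u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
print(np.isclose(1 - u / v, er.score(X, y)))  # True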
- set_params(self, **params)
  Set the parameters for the ensemble estimator.
  Valid parameter keys can be listed with get_params().
  - Parameters
    - **params : keyword arguments
      Specific parameters, set using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ensemble estimator, the individual estimators of the ensemble can also be set or replaced by setting them to None.

Examples

# In this example, the RandomForestClassifier is removed.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
clf1 = LogisticRegression()
clf2 = RandomForestClassifier()
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2)])
eclf.set_params(rf=None)
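An analogous sketch for VotingRegressor itself (not part of the original page): nested parameters can be set with the '<name>__<parameter>' syntax listed by get_params(), and a named estimator can be removed by setting it to None (or 'drop'), as described in the estimators parameter above.

from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10))])
er.set_params(rf__n_estimators=50)  # tune a parameter of the 'rf' sub-estimator
er.set_params(rf=None)              # remove the random forest from the ensemble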