Flow
TEST81cde520b5sklearn.ensemble._weight_boosting.AdaBoostClassifier(estimator=sklearn.tree._classes.DecisionTreeClassifier)


Visibility: public. Uploaded 04-11-2024 by Continuous Integration. Dependencies: sklearn==1.4.2, numpy>=1.19.5, scipy>=1.6.0, joblib>=1.2.0, threadpoolctl>=2.0.0. Runs: 0.
Tags: openml-python, python, scikit-learn, sklearn, sklearn_1.4.2


An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset, but with the weights of incorrectly classified instances adjusted so that subsequent classifiers focus more on the difficult cases. This class implements the algorithm based on [2].
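As a rough illustration of how this meta-estimator behaves, the sketch below wraps an AdaBoostClassifier around a depth-1 DecisionTreeClassifier and tracks test accuracy as boosting rounds accumulate. The synthetic dataset, the train/test split, and random_state=0 are illustrative assumptions, not part of this flow; only the estimator/ensemble setup mirrors the flow's components.

    # Hedged sketch: each boosting round fits a fresh copy of the base tree and
    # re-weights misclassified samples so later trees focus on them.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Toy data purely for illustration; an OpenML run would use a task's dataset.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # decision stump, as in this flow's component
        n_estimators=50,
        random_state=0,
    ).fit(X_train, y_train)

    # staged_score yields test accuracy after each boosting iteration, showing
    # how the ensemble improves as more weighted copies of the tree are added.
    for i, acc in enumerate(clf.staged_score(X_test, y_test), start=1):
        if i % 10 == 0:
            print(f"{i:3d} estimators: test accuracy {acc:.3f}")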

Components

estimator: TEST81cde520b5sklearn.tree._classes.DecisionTreeClassifier(1). The base estimator from which the boosted ensemble is built. Support for sample weighting is required, as well as proper `classes_` and `n_classes_` attributes. If `None`, the base estimator is `sklearn.tree.DecisionTreeClassifier` initialized with `max_depth=1`. (New in version 1.2: `base_estimator` was renamed to `estimator`.)

Parameters

algorithm (default: "SAMME.R"): one of {'SAMME', 'SAMME.R'}. If 'SAMME.R', use the SAMME.R real boosting algorithm; `estimator` must support calculation of class probabilities. If 'SAMME', use the SAMME discrete boosting algorithm. The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations. (Deprecated since 1.4: "SAMME.R" will be removed in version 1.6, and "SAMME" will become the default.)
estimator (default: {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}}, i.e. the DecisionTreeClassifier component above): the base estimator from which the boosted ensemble is built. Support for sample weighting is required, as well as proper `classes_` and `n_classes_` attributes. If `None`, the base estimator is `sklearn.tree.DecisionTreeClassifier` initialized with `max_depth=1`. (New in version 1.2: `base_estimator` was renamed to `estimator`.)
learning_rate (default: 1.0): weight applied to each classifier at each boosting iteration. A higher learning rate increases the contribution of each classifier. There is a trade-off between the `learning_rate` and `n_estimators` parameters. Values must be in the range `(0.0, inf)`.
n_estimators (default: 50): the maximum number of estimators at which boosting is terminated. In case of a perfect fit, the learning procedure is stopped early. Values must be in the range `[1, inf)`.
random_state (default: null): controls the random seed given to each `estimator` at each boosting iteration. It is therefore only used when `estimator` exposes a `random_state`. Pass an int for reproducible output across multiple function calls. See the scikit-learn Glossary entry on `random_state`.
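For reference, a minimal sketch instantiating the classifier with the parameter values listed above (assuming sklearn 1.4.2 as in the dependency list, and assuming the component tree uses the documented `max_depth=1` default):

    # Instantiation matching this flow's listed defaults; the explicit
    # algorithm="SAMME.R" will emit a FutureWarning on 1.4, since "SAMME"
    # becomes the default in 1.6.
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    flow_clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),
        algorithm="SAMME.R",
        learning_rate=1.0,
        n_estimators=50,
        random_state=None,
    )
    print(flow_clf.get_params())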

Runs

0 runs.