Parameter | Description | Default |
---|---|---|
base_estimator | The base estimator from which the boosted ensemble is built. Support for sample weighting is required, as well as proper ``classes_`` and ``n_classes_`` attributes. If ``None``, then the base estimator is :class:`~sklearn.tree.DecisionTreeClassifier` initialized with `max_depth=1` (see the constructor sketch below the table). | default: sklearn.tree._classes.DecisionTreeClassifier(max_depth=1) |
algorithm | If 'SAMME.R', then use the SAMME.R real boosting algorithm; ``base_estimator`` must support calculation of class probabilities. If 'SAMME', then use the SAMME discrete boosting algorithm. The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations. Possible values: {'SAMME', 'SAMME.R'}. | default: "SAMME.R" |
learning_rate | Weight applied to each classifier at each boosting iteration. A higher learning rate increases the contribution of each classifier. There is a trade-off between the `learning_rate` and `n_estimators` parameters (see the trade-off sketch below the table). Values must be in the range `(0.0, inf)`. | default: 1.0 |
n_estimators | The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early. Values must be in the range `[1, inf)`. | default: 50 |
random_state | Controls the random seed given to each `base_estimator` at each boosting iteration. Thus, it is only used when `base_estimator` exposes a `random_state`. Pass an int for reproducible output across multiple function calls (see the reproducibility sketch below the table). See :term:`Glossary <random_state>`. | default: null |
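
Constructor sketch. A minimal example showing how the defaults listed above map onto the scikit-learn constructor. It assumes an older scikit-learn release in which `base_estimator` and the `'SAMME.R'` algorithm are still available (newer releases rename `base_estimator` to `estimator` and deprecate `'SAMME.R'`); the `make_classification` data is illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data, for illustration only.
X, y = make_classification(n_samples=300, random_state=0)

# Spelling out the table's defaults; passing base_estimator=None
# would yield the same depth-1 decision stump.
clf = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
    learning_rate=1.0,
    algorithm="SAMME.R",
    random_state=None,
)
clf.fit(X, y)
print(clf.score(X, y))
```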
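Trade-off sketch. A short illustration of the `learning_rate` / `n_estimators` trade-off noted in the table: a smaller learning rate typically needs more boosting rounds to reach a comparable training score. `staged_score` yields the ensemble's score after each boosting round; the dataset and the chosen values (learning rates 1.0 vs. 0.1, 100 rounds) are illustrative assumptions, not part of the flow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)

for lr in (1.0, 0.1):
    model = AdaBoostClassifier(n_estimators=100, learning_rate=lr,
                               random_state=0).fit(X, y)
    # staged_score yields the training score after each boosting iteration.
    scores = list(model.staged_score(X, y))
    print(f"learning_rate={lr}: after 10 rounds {scores[9]:.3f}, "
          f"after {len(scores)} rounds {scores[-1]:.3f}")
```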
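Reproducibility sketch. Because the default depth-1 :class:`~sklearn.tree.DecisionTreeClassifier` exposes `random_state`, seeding the ensemble with an int makes repeated fits identical, as the `random_state` row describes. Comparing `decision_function` outputs is just one convenient way to check this; the data is again illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Two fits with the same integer seed produce the same ensemble.
a = AdaBoostClassifier(random_state=42).fit(X, y)
b = AdaBoostClassifier(random_state=42).fit(X, y)
print(np.allclose(a.decision_function(X), b.decision_function(X)))  # True
```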