Flow 1696 (1159)
Name: TEST8733992322sklearn.model_selection._search.RandomizedSearchCV(estimator=sklearn.ensemble._forest.RandomForestClassifier)
Custom name: sklearn.RandomizedSearchCV(RandomForestClassifier)
Class name: sklearn.model_selection._search.RandomizedSearchCV
Version: 1
External version: openml==0.14.1, sklearn==0.23.1
Upload date: 2024-01-10T15:59:54 (Language: English)
Dependencies: sklearn==0.23.1, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

Randomized search on hyperparameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used. The parameters of the estimator used to apply these methods are optimized by cross-validated search over parameter settings. In contrast to GridSearchCV, not all parameter values are tried out; rather, a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n_iter. If all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution, sampling with replacement is used. It is highly recommended to use continuous distributions for continuous parameters.

Parameters
----------
cv : int
    Default: {"oml-python:serialized_object": "cv_object", "value": {"name": "sklearn.model_selection._split.StratifiedKFold", "parameters": {"n_splits": "2", "random_state": "null", "shuffle": "true"}}}
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 5-fold cross validation,
    - integer, to specify the number of folds in a `(Stratified)KFold`,
    - :term:`CV splitter`,
    - an iterable yielding (train, test) splits as arrays of indices.
    For integer/None inputs, if the estimator is a classifier and ``y`` is either binary or multiclass, :class:`StratifiedKFold` is used. In all other cases, :class:`KFold` is used. Refer to the :ref:`User Guide <cross_validation>` for the various cross-validation strategies that can be used here.
    .. versionchanged:: 0.22
        ``cv`` default value if None changed from 3-fold to 5-fold.
error_score : 'raise' or numeric
    Default: NaN
    Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
estimator : estimator object
    Default: {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}}
    An object of that type is instantiated for each grid point. This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a ``score`` function, or ``scoring`` must be passed.
iid : bool
    Default: "deprecated"
    If True, return the average score across folds, weighted by the number of samples in each test set. In this case, the data is assumed to be identically distributed across the folds, and the loss minimized is the total loss per sample, and not the mean loss across the folds.
    .. deprecated:: 0.22
        Parameter ``iid`` is deprecated in 0.22 and will be removed in 0.24.
n_iter : int
    Default: 5
    Number of parameter settings that are sampled. n_iter trades off runtime vs quality of the solution.
n_jobs : int
    Default: null
    Number of jobs to run in parallel. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.
    .. versionchanged:: 0.20
        `n_jobs` default changed from 1 to None.
param_distributions : dict or list of dicts
    Default: {"bootstrap": [true, false], "criterion": ["gini", "entropy"], "max_depth": [3, null], "max_features": [1, 2, 3, 4], "min_samples_leaf": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "min_samples_split": [2, 3, 4, 5, 6, 7, 8, 9, 10]}
    Dictionary with parameter names (`str`) as keys and distributions or lists of parameters to try. Distributions must provide a ``rvs`` method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly. If a list of dicts is given, first a dict is sampled uniformly, and then a parameter is sampled using that dict as above.
pre_dispatch : int
    Default: "2*n_jobs"
    Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
    - None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs.
    - An int, giving the exact number of total jobs that are spawned.
    - A str, giving an expression as a function of n_jobs, as in '2*n_jobs'.
random_state : int or RandomState instance
    Default: null
    Pseudo random number generator state used for random uniform sampling from lists of possible values instead of scipy.stats distributions. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`.
refit : bool
    Default: true
    Refit an estimator using the best found parameters on the whole dataset. For multiple metric evaluation, this needs to be a `str` denoting the scorer that would be used to find the best parameters for refitting the estimator at the end. Where there are considerations other than maximum score in choosing a best estimator, ``refit`` can be set to a function which returns the selected ``best_index_`` given the ``cv_results``. In that case, the ``best_estimator_`` and ``best_params_`` will be set according to the returned ``best_index_`` while the ``best_score_`` attribute will not be available. The refitted estimator is made available at the ``best_estimator_`` attribute and permits using ``predict`` directly on this ``RandomizedSearchCV`` instance. Also for multiple metric evaluation, the attributes ``best_index_``, ``best_score_`` and ``best_params_`` will only be available if ``refit`` is set and all of them will be determined w.r.t this specific scorer.
return_train_score : bool
    Default: false
    If ``False``, the ``cv_results_`` attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However, computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
    .. versionadded:: 0.19
    .. versionchanged:: 0.21
        Default value was changed from ``True`` to ``False``.
scoring : str
    Default: null
    A single str (see :ref:`scoring_parameter`) or a callable (see :ref:`scoring`) to evaluate the predictions on the test set. For evaluating multiple metrics, either give a list of (unique) strings or a dict with names as keys and callables as values. NOTE that when using custom scorers, each scorer should return a single value. Metric functions returning a list/array of values can be wrapped into multiple scorers that return one value each. See :ref:`multimetric_grid_search` for an example. If None, the estimator's score method is used.
verbose : integer
    Default: 0
    Controls the verbosity: the higher, the more messages.
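The stored defaults above can be read back into an ordinary scikit-learn configuration. A minimal sketch, assuming an sklearn 0.23-compatible API; the variable names are arbitrary and the snippet reconstructs the recorded settings (including the component flow documented below) rather than deserializing the flow itself::

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

    # Search space exactly as stored in param_distributions above.
    param_distributions = {
        "bootstrap": [True, False],
        "criterion": ["gini", "entropy"],
        "max_depth": [3, None],
        "max_features": [1, 2, 3, 4],
        "min_samples_leaf": list(range(1, 11)),
        "min_samples_split": list(range(2, 11)),
    }

    search = RandomizedSearchCV(
        estimator=RandomForestClassifier(n_estimators=5),  # component flow (see below)
        param_distributions=param_distributions,
        n_iter=5,                                          # stored default: 5 sampled settings
        cv=StratifiedKFold(n_splits=2, shuffle=True),      # serialized cv object above
        refit=True,
        return_train_score=False,
    )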
Component ``estimator``: Flow 1697 (1159)
Name: TEST8733992322sklearn.ensemble._forest.RandomForestClassifier
Custom name: sklearn.RandomForestClassifier
Class name: sklearn.ensemble._forest.RandomForestClassifier
Version: 1
External version: openml==0.14.1, sklearn==0.23.1
Upload date: 2024-01-10T15:59:54 (Language: English)
Dependencies: sklearn==0.23.1, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the `max_samples` parameter if `bootstrap=True` (default), otherwise the whole dataset is used to build each tree.

Parameters
----------
bootstrap : bool
    Default: true
    Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
ccp_alpha : non-negative float
    Default: 0.0
    Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ``ccp_alpha`` will be chosen. By default, no pruning is performed. See :ref:`minimal_cost_complexity_pruning` for details.
    .. versionadded:: 0.22
class_weight : {"balanced", "balanced_subsample"}, dict or list of dicts
    Default: null
    Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}]. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``.
criterion : {"gini", "entropy"}
    Default: "gini"
    The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain. Note: this parameter is tree-specific.
max_depth : int
    Default: null
    The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
max_features : {"auto", "sqrt", "log2"}, int or float
    Default: "auto"
    The number of features to consider when looking for the best split:
    - If int, then consider `max_features` features at each split.
    - If float, then `max_features` is a fraction and `int(max_features * n_features)` features are considered at each split.
    - If "auto", then `max_features=sqrt(n_features)`.
    - If "sqrt", then `max_features=sqrt(n_features)` (same as "auto").
    - If "log2", then `max_features=log2(n_features)`.
    - If None, then `max_features=n_features`.
    Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than ``max_features`` features.
max_leaf_nodes : int
    Default: null
    Grow trees with ``max_leaf_nodes`` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
max_samples : int or float
    Default: null
    If bootstrap is True, the number of samples to draw from X to train each base estimator:
    - If None (default), then draw `X.shape[0]` samples.
    - If int, then draw `max_samples` samples.
    - If float, then draw `max_samples * X.shape[0]` samples. Thus, `max_samples` should be in the interval `(0, 1)`.
    .. versionadded:: 0.22
min_impurity_decrease : float
    Default: 0.0
    A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following::

        N_t / N * (impurity - N_t_R / N_t * right_impurity
                            - N_t_L / N_t * left_impurity)

    where ``N`` is the total number of samples, ``N_t`` is the number of samples at the current node, ``N_t_L`` is the number of samples in the left child, and ``N_t_R`` is the number of samples in the right child. ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum, if ``sample_weight`` is passed.
    .. versionadded:: 0.19
min_impurity_split : float
    Default: null
    Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.
    .. deprecated:: 0.19
        ``min_impurity_split`` has been deprecated in favor of ``min_impurity_decrease`` in 0.19. The default value of ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use ``min_impurity_decrease`` instead.
min_samples_leaf : int or float
    Default: 1
    The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least ``min_samples_leaf`` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
    - If int, then consider `min_samples_leaf` as the minimum number.
    - If float, then `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` are the minimum number of samples for each node.
    .. versionchanged:: 0.18
        Added float values for fractions.
min_samples_split : int or float
    Default: 2
    The minimum number of samples required to split an internal node:
    - If int, then consider `min_samples_split` as the minimum number.
    - If float, then `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` are the minimum number of samples for each split.
    .. versionchanged:: 0.18
        Added float values for fractions.
min_weight_fraction_leaf : float
    Default: 0.0
    The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
n_estimators : int
    Default: 5
    The number of trees in the forest.
    .. versionchanged:: 0.22
        The default value of ``n_estimators`` changed from 10 to 100 in 0.22.
n_jobs : int
    Default: null
    The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`, :meth:`decision_path` and :meth:`apply` are all parallelized over the trees. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.
oob_score : bool
    Default: false
    Whether to use out-of-bag samples to estimate the generalization accuracy.
random_state : int or RandomState
    Default: null
    Controls both the randomness of the bootstrapping of the samples used when building trees (if ``bootstrap=True``) and the sampling of the features to consider when looking for the best split at each node (if ``max_features < n_features``). See :term:`Glossary <random_state>` for details.
verbose : int
    Default: 0
    Controls the verbosity when fitting and predicting.
warm_start : bool
    Default: false
    When set to ``True``, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See :term:`the Glossary <warm_start>`.
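As a small, purely illustrative sketch of the ``"balanced"`` heuristic quoted above (the toy labels are not part of the flow)::

    import numpy as np

    y = np.array([0, 0, 0, 1])                       # three samples of class 0, one of class 1
    n_samples, n_classes = len(y), 2
    weights = n_samples / (n_classes * np.bincount(y))
    # weights -> array([0.66666667, 2.0]): the rare class receives the larger weight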
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.23.1
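To see the configuration end to end, a hedged sketch (synthetic data and arbitrary names, reusing the ``search`` object from the earlier sketch; none of this is part of the flow's metadata)::

    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    search.fit(X, y)

    print(search.best_params_)                 # best sampled parameter setting
    print(search.best_score_)                  # mean cross-validated score of that setting
    print(len(search.cv_results_["params"]))   # == n_iter, i.e. 5 sampled candidates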