26308
1159
TEST6d6eb9f0besklearn.model_selection._search.RandomizedSearchCV(estimator=sklearn.pipeline.Pipeline(ohe=sklearn.preprocessing._encoders.OneHotEncoder,scaler=sklearn.preprocessing._data.StandardScaler,fu=sklearn.pipeline.FeatureUnion(pca=sklearn.decomposition._truncated_svd.TruncatedSVD,fs=sklearn.feature_selection._univariate_selection.SelectPercentile),boosting=sklearn.ensemble._weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree._classes.DecisionTreeClassifier)))
sklearn.RandomizedSearchCV(Pipeline(OneHotEncoder,StandardScaler,FeatureUnion,AdaBoostClassifier))
sklearn.model_selection._search.RandomizedSearchCV
1
openml==0.14.1,sklearn==1.3.2
Randomized search on hyperparameters.
RandomizedSearchCV implements a "fit" and a "score" method.
It also implements "score_samples", "predict", "predict_proba",
"decision_function", "transform" and "inverse_transform" if they are
implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized
by cross-validated search over parameter settings.
In contrast to GridSearchCV, not all parameter values are tried out, but
rather a fixed number of parameter settings is sampled from the specified
distributions. The number of parameter settings that are tried is
given by n_iter.
If all parameters are presented as a list,
sampling without replacement is performed. If at least one parameter
is given as a distribution, sampling with replacement is used.
It is highly recommended to use continuous distributions for continuous
parameters.
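To illustrate the two sampling modes, a minimal sketch in sklearn (generic usage, not this flow's exact configuration; `X` and `y` stand for training data that is not part of this record)::

    from scipy.stats import uniform
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import RandomizedSearchCV

    search = RandomizedSearchCV(
        AdaBoostClassifier(),
        param_distributions={
            "n_estimators": [1, 5, 10, 100],       # list: sampled uniformly
            "learning_rate": uniform(0.01, 0.99),  # distribution: provides .rvs,
        },                                         # so sampling is with replacement
        n_iter=10,
        random_state=0,
    )
    # search.fit(X, y)  # X, y: training data, assumed to exist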
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
cv
int
{"oml-python:serialized_object": "cv_object", "value": {"name": "sklearn.model_selection._split.StratifiedKFold", "parameters": {"n_splits": "5", "random_state": "null", "shuffle": "true"}}}
Determines the cross-validation splitting strategy
Possible inputs for cv are:
- None, to use the default 5-fold cross validation,
- integer, to specify the number of folds in a `(Stratified)KFold`,
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used. These splitters are instantiated
with `shuffle=False` so the splits will be the same across calls
Refer to the :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here
.. versionchanged:: 0.22
``cv`` default value if None changed from 3-fold to 5-fold
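For reference, the serialized ``cv`` value stored with this flow corresponds to a splitter that could be rebuilt as (a sketch of the deserialized object)::

    from sklearn.model_selection import StratifiedKFold

    # 5 stratified, shuffled splits; random_state=None means the splits
    # differ across calls, unlike the shuffle=False default described above.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=None)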
error_score
'raise' or numeric
NaN
Value to assign to the score if an error occurs in estimator fitting
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error
estimator
estimator object
{"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}}
An object of that type is instantiated for each grid point
This is assumed to implement the scikit-learn estimator interface
Either estimator needs to provide a ``score`` function,
or ``scoring`` must be passed
n_iter
int
10
Number of parameter settings that are sampled. n_iter trades
off runtime vs quality of the solution
n_jobs
int
null
Number of jobs to run in parallel
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details
.. versionchanged:: v0.20
`n_jobs` default changed from 1 to None
param_distributions
dict or list of dicts
{"boosting__base_estimator__max_depth": {"oml-python:serialized_object": "rv_frozen", "value": {"dist": "scipy.stats._discrete_distns.randint_gen", "a": 1, "b": 9, "args": [1, 10], "kwds": {}}}, "boosting__learning_rate": {"oml-python:serialized_object": "rv_frozen", "value": {"dist": "scipy.stats._continuous_distns.uniform_gen", "a": 0.0, "b": 1.0, "args": [0.01, 0.99], "kwds": {}}}, "boosting__n_estimators": [1, 5, 10, 100]}
Dictionary with parameters names (`str`) as keys and distributions
or lists of parameters to try. Distributions must provide a ``rvs``
method for sampling (such as those from scipy.stats.distributions)
If a list is given, it is sampled uniformly
If a list of dicts is given, first a dict is sampled uniformly, and
then a parameter is sampled using that dict as above
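The serialized ``param_distributions`` value stored above deserializes to roughly the following (a sketch; `randint(1, 10)` and `uniform(0.01, 0.99)` mirror the stored `args`)::

    from scipy.stats import randint, uniform

    param_distributions = {
        # randint(1, 10): integers 1..9 (upper bound exclusive)
        "boosting__base_estimator__max_depth": randint(1, 10),
        # uniform(loc=0.01, scale=0.99): floats in [0.01, 1.0]
        "boosting__learning_rate": uniform(0.01, 0.99),
        # plain list: sampled uniformly
        "boosting__n_estimators": [1, 5, 10, 100],
    }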
pre_dispatch
int
"2*n_jobs"
Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
- None, in which case all the jobs are immediately
created and spawned. Use this for lightweight and
fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are
spawned
- A str, giving an expression as a function of n_jobs,
as in '2*n_jobs'
random_state
int
null
Pseudo random number generator state used for random uniform sampling
from lists of possible values instead of scipy.stats distributions
Pass an int for reproducible output across multiple
function calls
See :term:`Glossary <random_state>`
refit
bool
true
Refit an estimator using the best found parameters on the whole
dataset
For multiple metric evaluation, this needs to be a `str` denoting the
scorer that would be used to find the best parameters for refitting
the estimator at the end
Where there are considerations other than maximum score in
choosing a best estimator, ``refit`` can be set to a function which
returns the selected ``best_index_`` given the ``cv_results``. In that
case, the ``best_estimator_`` and ``best_params_`` will be set
according to the returned ``best_index_`` while the ``best_score_``
attribute will not be available
The refitted estimator is made available at the ``best_estimator_``
attribute and permits using ``predict`` directly on this
``RandomizedSearchCV`` instance
Also for multiple metric evaluation, the attributes ``best_index_``,
``best_score_`` and ``best_params_`` will only be available if
``refit`` is set and all of them will be determined w.r.t this speci...
return_train_score
bool
false
If ``False``, the ``cv_results_`` attribute will not include training
scores
Computing training scores is used to get insights on how different
parameter settings impact the overfitting/underfitting trade-off
However computing the scores on the training set can be computationally
expensive and is not strictly required to select the parameters that
yield the best generalization performance
.. versionadded:: 0.19
.. versionchanged:: 0.21
Default value was changed from ``True`` to ``False``
scoring
str
null
Strategy to evaluate the performance of the cross-validated model on
the test set
If `scoring` represents a single score, one can use:
- a single string (see :ref:`scoring_parameter`);
- a callable (see :ref:`scoring`) that returns a single value
If `scoring` represents multiple scores, one can use:
- a list or tuple of unique strings;
- a callable returning a dictionary where the keys are the metric
names and the values are the metric scores;
- a dictionary with metric names as keys and callables as values
See :ref:`multimetric_grid_search` for an example
If None, the estimator's score method is used
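A brief sketch of the two forms (generic sklearn usage; this flow leaves ``scoring`` at null, so the estimator's own ``score`` method is used)::

    from sklearn.metrics import f1_score, make_scorer

    # Single metric: one string (or a callable returning a single value).
    single = "accuracy"
    # Multiple metrics: e.g. a dict with metric names as keys and callables
    # (or strings) as values; `refit` must then name one of these metrics.
    multi = {"accuracy": "accuracy", "f1": make_scorer(f1_score, average="macro")}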
verbose
int
0
Controls the verbosity: the higher, the more messages
estimator
26309
1159
TEST6d6eb9f0besklearn.pipeline.Pipeline(ohe=sklearn.preprocessing._encoders.OneHotEncoder,scaler=sklearn.preprocessing._data.StandardScaler,fu=sklearn.pipeline.FeatureUnion(pca=sklearn.decomposition._truncated_svd.TruncatedSVD,fs=sklearn.feature_selection._univariate_selection.SelectPercentile),boosting=sklearn.ensemble._weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree._classes.DecisionTreeClassifier))
sklearn.Pipeline(OneHotEncoder,StandardScaler,FeatureUnion,AdaBoostClassifier)
sklearn.pipeline.Pipeline
1
openml==0.14.1,sklearn==1.3.2
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement `fit` and `transform` methods.
The final estimator only needs to implement `fit`.
The transformers in the pipeline can be cached using ``memory`` argument.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters. For this, it
enables setting parameters of the various steps using their names and the
parameter name separated by a `'__'`, as in the example below. A step's
estimator may be replaced entirely by setting the parameter with its name
to another estimator, or a transformer removed by setting it to
`'passthrough'` or `None`.
For an example use case of `Pipeline` combined with
:class:`~sklearn.model_selection.GridSearchCV`, refer to
:ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`. The
example :ref:`sphx_glr_auto_exampl...
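A sketch of how the steps serialized in this flow could be assembled and addressed via the `'__'` convention (hyperparameter settings abbreviated; the stored flow additionally configures the AdaBoost base estimator)::

    from sklearn.decomposition import TruncatedSVD
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_selection import SelectPercentile
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Step names match the serialized steps: ohe, scaler, fu, boosting.
    pipe = Pipeline(steps=[
        ("ohe", OneHotEncoder(handle_unknown="ignore")),
        ("scaler", StandardScaler(with_mean=False)),
        ("fu", FeatureUnion([("pca", TruncatedSVD()),
                             ("fs", SelectPercentile())])),
        ("boosting", AdaBoostClassifier()),
    ])
    # Nested parameters use step name + '__' + parameter name:
    pipe.set_params(boosting__n_estimators=10)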
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
memory
str or object with the joblib.Memory interface
null
Used to cache the fitted transformers of the pipeline. The last step
will never be cached, even if it is a transformer. By default, no
caching is performed. If a string is given, it is the path to the
caching directory. Enabling caching triggers a clone of the transformers
before fitting. Therefore, the transformer instance given to the
pipeline cannot be inspected directly. Use the attribute ``named_steps``
or ``steps`` to inspect estimators within the pipeline. Caching the
transformers is advantageous when fitting is time consuming
steps
list of tuple
[{"oml-python:serialized_object": "component_reference", "value": {"key": "ohe", "step_name": "ohe"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "scaler", "step_name": "scaler"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "fu", "step_name": "fu"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "boosting", "step_name": "boosting"}}]
List of (name, transform) tuples (implementing `fit`/`transform`) that
are chained in sequential order. The last transform must be an
estimator
verbose
bool
false
If True, the time elapsed while fitting each step will be printed as it
is completed.
ohe
26310
1159
TEST6d6eb9f0besklearn.preprocessing._encoders.OneHotEncoder
sklearn.OneHotEncoder
sklearn.preprocessing._encoders.OneHotEncoder
1
openml==0.14.1,sklearn==1.3.2
Encode categorical features as a one-hot numeric array.
The input to this transformer should be an array-like of integers or
strings, denoting the values taken on by categorical (discrete) features.
The features are encoded using a one-hot (aka 'one-of-K' or 'dummy')
encoding scheme. This creates a binary column for each category and
returns a sparse matrix or dense array (depending on the ``sparse_output``
parameter)
By default, the encoder derives the categories based on the unique values
in each feature. Alternatively, you can also specify the `categories`
manually.
This encoding is needed for feeding categorical data to many scikit-learn
estimators, notably linear models and SVMs with the standard kernels.
Note: a one-hot encoding of y labels should use a LabelBinarizer
instead.
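A minimal sketch using the settings stored in this flow (``handle_unknown="ignore"``, sparse output)::

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    X = np.array([["red"], ["green"], ["red"]])
    ohe = OneHotEncoder(handle_unknown="ignore", sparse_output=True)
    print(ohe.fit_transform(X).toarray())
    # [[0. 1.]
    #  [1. 0.]
    #  [0. 1.]]  -- columns are the sorted categories: green, red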
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
categories
'auto' or a list of array
"auto"
Categories (unique values) per feature:
- 'auto' : Determine categories automatically from the training data
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values within a single feature, and should be sorted in case of
numeric values
The used categories can be found in the ``categories_`` attribute
.. versionadded:: 0.20
drop
{'first', 'if_binary'} or an array-like of shape (n_features,)
null
Specifies a methodology to use to drop one of the categories per
feature. This is useful in situations where perfectly collinear
features cause problems, such as when feeding the resulting data
into an unregularized linear regression model
However, dropping one category breaks the symmetry of the original
representation and can therefore induce a bias in downstream models,
for instance for penalized linear classification or regression models
dtype
number type
{"oml-python:serialized_object": "type", "value": "np.float64"}
Desired dtype of output
feature_name_combiner
"concat" or callable
"concat"
Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`
`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`. E.g. feature X with values 1, 6, 7 creates
feature names `X_1, X_6, X_7`
.. versionadded:: 1.3
handle_unknown
{'error', 'ignore', 'infrequent_if_exist'}
"ignore"
Specifies the way unknown categories are handled during :meth:`transform`
- 'error' : Raise an error if an unknown category is present during transform
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
...
max_categories
int
null
Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features
.. versionadded:: 1.1
Read more in the :ref:`User Guide <encoder_infrequent_categories>`
min_frequency
int or float
null
Specifies the minimum frequency below which a category will be
considered infrequent
- If `int`, categories with a smaller cardinality will be considered
infrequent
- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent
.. versionadded:: 1.1
Read more in the :ref:`User Guide <encoder_infrequent_categories>`
sparse
bool
"deprecated"
Will return a sparse matrix if set True, else will return an array
.. deprecated:: 1.2
`sparse` is deprecated in 1.2 and will be removed in 1.4. Use
`sparse_output` instead
sparse_output
bool
true
Will return a sparse matrix if set True, else will return an array
.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
scaler
26311
1159
TEST6d6eb9f0besklearn.preprocessing._data.StandardScaler
sklearn.StandardScaler
sklearn.preprocessing._data.StandardScaler
1
openml==0.14.1,sklearn==1.3.2
Standardize features by removing the mean and scaling to unit variance.
The standard score of a sample `x` is calculated as:
z = (x - u) / s
where `u` is the mean of the training samples or zero if `with_mean=False`,
and `s` is the standard deviation of the training samples or one if
`with_std=False`.
Centering and scaling happen independently on each feature by computing
the relevant statistics on the samples in the training set. Mean and
standard deviation are then stored to be used on later data using
:meth:`transform`.
Standardization of a dataset is a common requirement for many
machine learning estimators: they might behave badly if the
individual features do not more or less look like standard normally
distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of
a learning algorithm (such as the RBF kernel of Support Vector
Machines or the L1 and L2 regularizers of linear models) assume that
all features are centered around 0 ...
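A small worked example of the standard score (note this flow sets ``with_mean=False``, which skips the centering step so the sparse output of the preceding one-hot encoding stays sparse)::

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0], [2.0], [3.0]])   # u = 2.0, s = sqrt(2/3)
    scaler = StandardScaler()             # with_mean=True, with_std=True
    print(scaler.fit_transform(X).ravel())  # z = (x - u) / s
    # approximately [-1.2247  0.      1.2247]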
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
copy
bool
true
If False, try to avoid a copy and do inplace scaling instead
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned
with_mean
bool
false
If True, center the data before scaling
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory
with_std
bool
true
If True, scale the data to unit variance (or equivalently,
unit standard deviation).
fu
26312
1159
TEST6d6eb9f0besklearn.pipeline.FeatureUnion(pca=sklearn.decomposition._truncated_svd.TruncatedSVD,fs=sklearn.feature_selection._univariate_selection.SelectPercentile)
sklearn.FeatureUnion(TruncatedSVD,SelectPercentile)
sklearn.pipeline.FeatureUnion
1
openml==0.14.1,sklearn==1.3.2
Concatenates results of multiple transformer objects.
This estimator applies a list of transformer objects in parallel to the
input data, then concatenates the results. This is useful to combine
several feature extraction mechanisms into a single transformer.
Parameters of the transformers may be set using its name and the parameter
name separated by a '__'. A transformer may be replaced entirely by
setting the parameter with its name to another transformer, removed by
setting to 'drop' or disabled by setting to 'passthrough' (features are
passed without transformation).
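A sketch mirroring this flow's union of the `pca` and `fs` steps (parameters abbreviated)::

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_selection import SelectPercentile
    from sklearn.pipeline import FeatureUnion

    # SVD components are concatenated with the selected original features.
    fu = FeatureUnion([
        ("pca", TruncatedSVD(n_components=2)),
        ("fs", SelectPercentile(percentile=30)),
    ])
    # The '__' convention applies here too:
    fu.set_params(fs__percentile=30)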
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
n_jobs
int
null
Number of jobs to run in parallel
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details
.. versionchanged:: v0.20
`n_jobs` default changed from 1 to None
transformer_list
list of (str, transformer) tuples
[{"oml-python:serialized_object": "component_reference", "value": {"key": "pca", "step_name": "pca"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "fs", "step_name": "fs"}}]
List of transformer objects to be applied to the data. The first
half of each tuple is the name of the transformer. The transformer can
be 'drop' for it to be ignored or can be 'passthrough' for features to
be passed unchanged
.. versionadded:: 1.1
Added the option `"passthrough"`
.. versionchanged:: 0.22
Deprecated `None` as a transformer in favor of 'drop'
transformer_weights
dict
null
Multiplicative weights for features per transformer
Keys are transformer names, values the weights
Raises ValueError if key not present in ``transformer_list``
verbose
bool
false
If True, the time elapsed while fitting each transformer will be
printed as it is completed.
pca
26313
1159
TEST6d6eb9f0besklearn.decomposition._truncated_svd.TruncatedSVD
sklearn.TruncatedSVD
sklearn.decomposition._truncated_svd.TruncatedSVD
1
openml==0.14.1,sklearn==1.3.2
Dimensionality reduction using truncated SVD (aka LSA).
This transformer performs linear dimensionality reduction by means of
truncated singular value decomposition (SVD). Contrary to PCA, this
estimator does not center the data before computing the singular value
decomposition. This means it can work with sparse matrices
efficiently.
In particular, truncated SVD works on term count/tf-idf matrices as
returned by the vectorizers in :mod:`sklearn.feature_extraction.text`. In
that context, it is known as latent semantic analysis (LSA).
This estimator supports two algorithms: a fast randomized SVD solver, and
a "naive" algorithm that uses ARPACK as an eigensolver on `X * X.T` or
`X.T * X`, whichever is more efficient.
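A minimal sketch showing the sparse-input-friendly behaviour with this flow's settings (`n_components=2`, randomized solver; the random sparse matrix is illustrative only)::

    from scipy.sparse import random as sparse_random
    from sklearn.decomposition import TruncatedSVD

    # Illustrative random sparse matrix; TruncatedSVD consumes it directly,
    # whereas PCA would need to densify the data in order to center it.
    X = sparse_random(100, 50, density=0.01, random_state=0)
    svd = TruncatedSVD(n_components=2, algorithm="randomized", random_state=0)
    print(svd.fit_transform(X).shape)  # (100, 2)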
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
algorithm
{'arpack', 'randomized'}
"randomized"
SVD solver to use. Either "arpack" for the ARPACK wrapper in SciPy
(scipy.sparse.linalg.svds), or "randomized" for the randomized
algorithm due to Halko (2009)
n_components
int
2
Desired dimensionality of output data
If algorithm='arpack', must be strictly less than the number of features
If algorithm='randomized', must be less than or equal to the number of features
The default value is useful for visualisation. For LSA, a value of
100 is recommended
n_iter
int
5
Number of iterations for randomized SVD solver. Not used by ARPACK. The
default is larger than the default in
:func:`~sklearn.utils.extmath.randomized_svd` to handle sparse
matrices that may have large slowly decaying spectrum
n_oversamples
int
10
Number of oversamples for randomized SVD solver. Not used by ARPACK
See :func:`~sklearn.utils.extmath.randomized_svd` for a complete
description
.. versionadded:: 1.1
power_iteration_normalizer
{'auto', 'QR', 'LU', 'none'}
"auto"
Power iteration normalizer for randomized SVD solver
Not used by ARPACK. See :func:`~sklearn.utils.extmath.randomized_svd`
for more details
.. versionadded:: 1.1
random_state
int
null
Used during randomized SVD. Pass an int for reproducible results across
multiple function calls
See :term:`Glossary <random_state>`
tol
float
0.0
Tolerance for ARPACK. 0 means machine precision. Ignored by randomized
SVD solver.
fs
26314
1159
TEST6d6eb9f0besklearn.feature_selection._univariate_selection.SelectPercentile
sklearn.SelectPercentile
sklearn.feature_selection._univariate_selection.SelectPercentile
1
openml==0.14.1,sklearn==1.3.2
Select features according to a percentile of the highest scores.
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
percentile
int
30
Percent of features to keep.
score_func
callable
{"oml-python:serialized_object": "function", "value": "sklearn.feature_selection._univariate_selection.f_classif"}
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues) or a single array with scores
Default is f_classif (see below "See Also"). The default function only
works with classification tasks
.. versionadded:: 0.18
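A minimal sketch with this flow's settings (`percentile=30`, `f_classif`), using the iris data purely as an illustration::

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectPercentile, f_classif

    X, y = load_iris(return_X_y=True)
    fs = SelectPercentile(score_func=f_classif, percentile=30)
    X_new = fs.fit_transform(X, y)
    print(X.shape, "->", X_new.shape)  # keeps roughly the top 30% of features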
boosting
26315
1159
TEST6d6eb9f0besklearn.ensemble._weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree._classes.DecisionTreeClassifier)
sklearn.AdaBoostClassifier
sklearn.ensemble._weight_boosting.AdaBoostClassifier
1
openml==0.14.1,sklearn==1.3.2
An AdaBoost classifier.
An AdaBoost [1] classifier is a meta-estimator that begins by fitting a
classifier on the original dataset and then fits additional copies of the
classifier on the same dataset but where the weights of incorrectly
classified instances are adjusted such that subsequent classifiers focus
more on difficult cases.
This class implements the algorithm known as AdaBoost-SAMME [2].
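A minimal sketch of the configuration serialized in this flow (the flow passes the tree via the deprecated `base_estimator` argument; `estimator` is the current spelling in sklearn 1.3)::

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=3),
        n_estimators=50,
        learning_rate=1.0,
        algorithm="SAMME.R",
    )
    # clf.fit(X, y)  # X, y: training data, assumed to exist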
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
algorithm
{'SAMME', 'SAMME.R'}
"SAMME.R"
If 'SAMME.R' then use the SAMME.R real boosting algorithm
``estimator`` must support calculation of class probabilities
If 'SAMME' then use the SAMME discrete boosting algorithm
The SAMME.R algorithm typically converges faster than SAMME,
achieving a lower test error with fewer boosting iterations
base_estimator
object
{"oml-python:serialized_object": "component_reference", "value": {"key": "base_estimator", "step_name": null}}
The base estimator from which the boosted ensemble is built
Support for sample weighting is required, as well as proper
``classes_`` and ``n_classes_`` attributes. If ``None``, then
the base estimator is :class:`~sklearn.tree.DecisionTreeClassifier`
initialized with `max_depth=1`
.. deprecated:: 1.2
`base_estimator` is deprecated and will be removed in 1.4
Use `estimator` instead.
estimator
object
null
The base estimator from which the boosted ensemble is built
Support for sample weighting is required, as well as proper
``classes_`` and ``n_classes_`` attributes. If ``None``, then
the base estimator is :class:`~sklearn.tree.DecisionTreeClassifier`
initialized with `max_depth=1`
.. versionadded:: 1.2
`base_estimator` was renamed to `estimator`
learning_rate
float
1.0
Weight applied to each classifier at each boosting iteration. A higher
learning rate increases the contribution of each classifier. There is
a trade-off between the `learning_rate` and `n_estimators` parameters
Values must be in the range `(0.0, inf)`
n_estimators
int
50
The maximum number of estimators at which boosting is terminated
In case of perfect fit, the learning procedure is stopped early
Values must be in the range `[1, inf)`
random_state
int
null
Controls the random seed given at each `estimator` at each
boosting iteration
Thus, it is only used when `estimator` exposes a `random_state`
Pass an int for reproducible output across multiple function calls
See :term:`Glossary <random_state>`
base_estimator
26316
1159
TEST6d6eb9f0besklearn.tree._classes.DecisionTreeClassifier
sklearn.DecisionTreeClassifier
sklearn.tree._classes.DecisionTreeClassifier
1
openml==0.14.1,sklearn==1.3.2
A decision tree classifier.
2024-01-15T11:01:52
English
sklearn==1.3.2
numpy>=1.17.3
scipy>=1.5.0
joblib>=1.1.1
threadpoolctl>=2.0.0
ccp_alpha
non-negative float
0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The
subtree with the largest cost complexity that is smaller than
``ccp_alpha`` will be chosen. By default, no pruning is performed. See
:ref:`minimal_cost_complexity_pruning` for details
.. versionadded:: 0.22
class_weight
dict
null
Weights associated with classes in the form ``{class_label: weight}``
If None, all classes are supposed to have weight one. For
multi-output problems, a list of dicts can be provided in the same
order as the columns of y
Note that for multioutput (including multilabel) weights should be
defined for each class of every column in its own dict. For example,
for four-class multilabel classification weights should be
[{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of
[{1:1}, {2:5}, {3:1}, {4:1}]
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
For multi-output, the weights of each column of y will be multiplied
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if sample_weight is specified
criterion
"gini"
max_depth
int
null
The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples
max_features
int
null
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split
- If float, then `max_features` is a fraction and
`max(1, int(max_features * n_features_in_))` features are considered at
each split
- If "sqrt", then `max_features=sqrt(n_features)`
- If "log2", then `max_features=log2(n_features)`
- If None, then `max_features=n_features`
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features
max_leaf_nodes
int
null
Grow a tree with ``max_leaf_nodes`` in best-first fashion
Best nodes are defined as relative reduction in impurity
If None then unlimited number of leaf nodes
min_impurity_decrease
float
0.0
A node will be split if this split induces a decrease of the impurity
greater than or equal to this value
The weighted impurity decrease equation is the following::
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed
.. versionadded:: 0.19
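A quick numeric check of the formula with hypothetical node counts and impurities::

    # Hypothetical numbers, purely to evaluate the expression above.
    N, N_t, N_t_L, N_t_R = 100, 40, 30, 10
    impurity, left_impurity, right_impurity = 0.5, 0.4, 0.2
    decrease = N_t / N * (impurity
                          - N_t_R / N_t * right_impurity
                          - N_t_L / N_t * left_impurity)
    print(decrease)  # ~0.06: the node is split only if this value
                     # is >= min_impurity_decrease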
min_samples_leaf
int or float
1
The minimum number of samples required to be at a leaf node
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` training samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression
- If int, then consider `min_samples_leaf` as the minimum number
- If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node
.. versionchanged:: 0.18
Added float values for fractions
min_samples_split
int or float
2
The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number
- If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split
.. versionchanged:: 0.18
Added float values for fractions
min_weight_fraction_leaf
float
0.0
The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided
random_state
int
null
Controls the randomness of the estimator. The features are always
randomly permuted at each split, even if ``splitter`` is set to
``"best"``. When ``max_features < n_features``, the algorithm will
select ``max_features`` at random at each split before finding the best
split among them. But the best found split may vary across different
runs, even if ``max_features=n_features``. That is the case, if the
improvement of the criterion is identical for several splits and one
split has to be selected at random. To obtain a deterministic behaviour
during fitting, ``random_state`` has to be fixed to an integer
See :term:`Glossary <random_state>` for details
splitter
"best"