Flow
TEST96288b0561sklearn.pipeline.Pipeline(imputation=sklearn.impute._base.SimpleImputer,hotencoding=sklearn.preprocessing._encoders.OneHotEncoder,variencethreshold=sklearn.feature_selection._variance_threshold.VarianceThreshold,classifier=sklearn.tree._classes.DecisionTreeClassifier)

Visibility: public Uploaded 18-10-2024 by Continuous Integration sklearn==1.4.2 numpy>=1.19.5 scipy>=1.6.0 joblib>=1.2.0 threadpoolctl>=2.0.0 0 runs
  • openml-python python scikit-learn sklearn sklearn_1.4.2


A sequence of data transformers with an optional final predictor. `Pipeline` allows you to sequentially apply a list of transformers to preprocess the data and, if desired, conclude the sequence with a final :term:`predictor` for predictive modeling.

Intermediate steps of the pipeline must be 'transforms', that is, they must implement `fit` and `transform` methods. The final :term:`estimator` only needs to implement `fit`. The transformers in the pipeline can be cached using the ``memory`` argument.

The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a `'__'`, as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to `'passthrough'` or `None`. For an example use case of `Pipeline` combined with :class:`~s...
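The flow above can be sketched as a `Pipeline` built from the four listed steps. The step names mirror the flow definition; the hyperparameter values (imputation strategy, `max_depth`, etc.) are illustrative assumptions, not values recorded in this flow.

```python
# Minimal sketch of this flow's pipeline; step names mirror the flow
# definition above, hyperparameter values are illustrative only.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_selection import VarianceThreshold
from sklearn.tree import DecisionTreeClassifier

pipe = Pipeline(steps=[
    ("imputation", SimpleImputer(strategy="most_frequent")),
    ("hotencoding", OneHotEncoder(handle_unknown="ignore")),
    ("variencethreshold", VarianceThreshold()),
    ("classifier", DecisionTreeClassifier()),
])

# Parameters of individual steps are addressed as <step>__<param>:
pipe.set_params(classifier__max_depth=3)

# A step can be disabled by setting it to 'passthrough':
pipe.set_params(variencethreshold="passthrough")
```

Because every step is named, the same `<step>__<param>` syntax works inside a grid search over the whole pipeline.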

Parameters

memory: Used to cache the fitted transformers of the pipeline. The last step will never be cached, even if it is a transformer. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute ``named_steps`` or ``steps`` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming. (default: null)

steps: List of (name of step, estimator) tuples that are to be chained in sequential order. To be compatible with the scikit-learn API, all steps must define `fit`. All non-last steps must also define `transform`. See :ref:`Combining Estimators ` for more details. (default: [{"oml-python:serialized_object": "component_reference", "value": {"key": "imputation", "step_name": "imputation"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "hotencoding", "step_name": "hotencoding"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "variencethreshold", "step_name": "variencethreshold"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}])

verbose: If True, the time elapsed while fitting each step will be printed as it is completed. (default: false)
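A brief sketch of the `memory` and `verbose` parameters in use, assuming a simpler two-step pipeline than this flow (the dataset and cache directory are illustrative):

```python
# Caching fitted transformers in a temporary directory so repeated fits
# (e.g. in a grid search) can reuse them; verbose=True prints the time
# elapsed for each step as it completes.
import tempfile

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

cache_dir = tempfile.mkdtemp()
pipe = Pipeline(
    steps=[("imputation", SimpleImputer()),
           ("classifier", DecisionTreeClassifier())],
    memory=cache_dir,  # path to caching directory; the last step is never cached
    verbose=True,      # print elapsed time per step during fit
)

X = np.array([[1.0], [np.nan], [3.0], [4.0]])
y = [0, 0, 1, 1]
pipe.fit(X, y)

# With caching enabled the transformers are cloned before fitting, so
# inspect the fitted clones via named_steps rather than the originals:
print(pipe.named_steps["imputation"].statistics_)
```

Caching pays off when a transformer's fit is expensive and the same preprocessing is repeated across many hyperparameter candidates.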

Runs

There are no runs for this flow yet.