Run 31720

Task 96 (Supervised Classification) on dataset credit-a. Uploaded 30-03-2021 by Continuous Integration.
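For readers who want to inspect this run programmatically rather than through the website, the sketch below fetches the run, its task, and its flow with openml-python. The test-server URL is an assumption: the flow name carries a TEST prefix and the uploader is the CI user, which suggests this run lives on the OpenML test server rather than on openml.org.

```python
# Minimal sketch, assuming openml-python is installed and this run lives on
# the OpenML test server (drop the config line for runs on openml.org).
import openml

openml.config.server = "https://test.openml.org/api/v1/xml"  # assumption: test server

run = openml.runs.get_run(31720)           # run metadata, flow id, evaluations
task = openml.tasks.get_task(run.task_id)  # Task 96: supervised classification on credit-a
flow = openml.flows.get_flow(run.flow_id)  # the serialized scikit-learn pipeline

print(run)
print(task)
print(flow.name)
```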


Flow

TESTfd0c589a4asklearn.pipeline.Pipeline(transformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),classifier=sklearn.tree.tree.DecisionTreeClassifier)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.
Hyperparameter settings per component (the TESTfd0c589a4a flow prefix is shown once above and omitted below):

sklearn.pipeline.Pipeline (top-level flow, version 1)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "transformer", "step_name": "transformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}]
    verbose: false

sklearn.compose._column_transformer.ColumnTransformer (step "transformer")
    n_jobs: null
    remainder: "passthrough"
    sparse_threshold: 0.3
    transformer_weights: null
    transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [1, 2, 7, 10, 13, 14]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": [0, 3, 4, 5, 6, 8, 9, 11, 12]}}]
    verbose: false

sklearn.pipeline.Pipeline (sub-pipeline "numeric")
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose: false

sklearn.impute._base.SimpleImputer
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "mean"
    verbose: 0

sklearn.preprocessing.data.StandardScaler
    copy: true
    with_mean: true
    with_std: true

sklearn.pipeline.Pipeline (sub-pipeline "nominal")
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "customimputer", "step_name": "customimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose: false

openml.testing.CustomImputer
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "most_frequent"
    verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder
    categorical_features: null
    categories: null
    drop: null
    dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown: "ignore"
    n_values: null
    sparse: true

sklearn.tree.tree.DecisionTreeClassifier (step "classifier")
    class_weight: null
    criterion: "gini"
    max_depth: null
    max_features: null
    max_leaf_nodes: null
    min_impurity_decrease: 0.0
    min_impurity_split: null
    min_samples_leaf: 1
    min_samples_split: 2
    min_weight_fraction_leaf: 0.0
    presort: false
    random_state: 62501
    splitter: "best"
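The flow corresponds roughly to the scikit-learn construction sketched below. This is a hedged reconstruction, not the exact uploaded code: the old module paths in the flow (sklearn.tree.tree, sklearn.preprocessing.data) indicate a pre-0.22 scikit-learn, while the sketch uses current import paths, and openml.testing.CustomImputer is approximated by SimpleImputer(strategy="most_frequent"), which matches its stored parameters. The final set_params call illustrates the '__' convention mentioned in the flow description.

```python
# Sketch of the serialized pipeline, assuming a recent scikit-learn.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

numeric = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="mean")),
    ("standardscaler", StandardScaler()),
])
nominal = Pipeline([
    # Stand-in for openml.testing.CustomImputer (same stored parameters).
    ("customimputer", SimpleImputer(strategy="most_frequent")),
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore")),
])
transformer = ColumnTransformer(
    transformers=[
        ("numeric", numeric, [1, 2, 7, 10, 13, 14]),          # numeric columns of credit-a
        ("nominal", nominal, [0, 3, 4, 5, 6, 8, 9, 11, 12]),  # nominal columns of credit-a
    ],
    remainder="passthrough",
)
clf = Pipeline([
    ("transformer", transformer),
    ("classifier", DecisionTreeClassifier(random_state=62501)),
])

# The '__' convention from the flow description: address a nested step's
# parameter through the enclosing pipeline without rebuilding it.
clf.set_params(classifier__max_depth=5)
```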

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.
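The Predictions file can be inspected locally once downloaded. The sketch below parses the ARFF with scipy and recomputes holdout accuracy from it; the file name and the "prediction"/"correct" column names follow the usual OpenML predictions layout and are assumptions here, not confirmed by this page.

```python
# Hedged sketch: read a downloaded predictions.arff and compare predicted
# vs. true labels. Column names assume the standard OpenML predictions format.
import pandas as pd
from scipy.io import arff

data, meta = arff.loadarff("predictions.arff")  # hypothetical local path
df = pd.DataFrame(data)

# Nominal ARFF values come back as bytes; decode before comparing.
for col in ("prediction", "correct"):
    df[col] = df[col].str.decode("utf-8")

holdout_accuracy = (df["prediction"] == df["correct"]).mean()
print(f"accuracy on the predictions file: {holdout_accuracy:.4f}")
```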

18 Evaluation measures

Measure names are not included in this export; all values below were computed on a 33% holdout set (per-class breakdowns available where noted):

0.8288 (per class)
0.8283 (per class)
0.6564
0.6542
0.1718
0.4978
0.8282
227 (per class)
0.8291 (per class)
0.8282
1.0024
0.3451
0.5008
0.4145
0.8276
0.8288
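The same evaluation values can be retrieved by measure name through openml-python instead of reading them off this page. This is a sketch under the assumption that run.evaluations holds the server-computed measures as a name-to-value dictionary.

```python
# Hedged sketch: list the server-computed evaluation measures for this run.
import openml

openml.config.server = "https://test.openml.org/api/v1/xml"  # assumption: test server
run = openml.runs.get_run(31720)

# Assumed attribute: a dict mapping measure names to their values.
for measure, value in sorted(run.evaluations.items()):
    print(f"{measure}: {value}")
```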