Run 32156

Task 119 (Supervised Classification) on the diabetes dataset, uploaded 30-03-2021 by Continuous Integration.
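For context, the run and its task can also be fetched programmatically with the openml-python client. A minimal sketch, assuming this run lives on the OpenML test server (the TEST prefix on the flow name below suggests it does) and that the test-server URL used here is still current:

import openml

# Assumption: this run was produced on the OpenML test server.
openml.config.server = "https://test.openml.org/api/v1/xml"

run = openml.runs.get_run(32156)            # run metadata
task = openml.tasks.get_task(run.task_id)   # Task 119: supervised classification on diabetes

print(run.flow_name)
print(task.get_dataset().name)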


Flow

TESTa73e12cd73sklearn.pipeline.Pipeline(transformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),classifier=sklearn.tree.tree.DecisionTreeClassifier)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__'. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer may be removed by setting it to 'passthrough' or ``None``.
Parameter settings

TESTa73e12cd73sklearn.pipeline.Pipeline(transformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),classifier=sklearn.tree.tree.DecisionTreeClassifier)(1)
  memory: null
  steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "transformer", "step_name": "transformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}]
  verbose: false

TESTa73e12cd73sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder))(1)
  n_jobs: null
  remainder: "passthrough"
  sparse_threshold: 0.3
  transformer_weights: null
  transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 1, 2, 3, 4, 5, 6, 7]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": []}}]
  verbose: false

TESTa73e12cd73sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler)(1)
  memory: null
  steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
  verbose: false

TESTa73e12cd73sklearn.impute._base.SimpleImputer(1)
  add_indicator: false
  copy: true
  fill_value: null
  missing_values: NaN
  strategy: "mean"
  verbose: 0

TESTa73e12cd73sklearn.preprocessing.data.StandardScaler(1)
  copy: true
  with_mean: true
  with_std: true

TESTa73e12cd73sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)(1)
  memory: null
  steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "customimputer", "step_name": "customimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
  verbose: false

TESTa73e12cd73openml.testing.CustomImputer(1)
  add_indicator: false
  copy: true
  fill_value: null
  missing_values: NaN
  strategy: "most_frequent"
  verbose: 0

TESTa73e12cd73sklearn.preprocessing._encoders.OneHotEncoder(1)
  categorical_features: null
  categories: null
  drop: null
  dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
  handle_unknown: "ignore"
  n_values: null
  sparse: true

TESTa73e12cd73sklearn.tree.tree.DecisionTreeClassifier(1)
  class_weight: null
  criterion: "gini"
  max_depth: null
  max_features: null
  max_leaf_nodes: null
  min_impurity_decrease: 0.0
  min_impurity_split: null
  min_samples_leaf: 1
  min_samples_split: 2
  min_weight_fraction_leaf: 0.0
  presort: false
  random_state: 62501
  splitter: "best"
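Taken together, these settings describe a scikit-learn pipeline along the lines of the sketch below. This is a reconstruction, not the uploaded flow itself: openml.testing.CustomImputer is a helper from the openml-python test suite and is approximated here by a most-frequent SimpleImputer (an assumption, since its implementation is not shown on this page), and only the notable hyperparameters listed above are spelled out. The column indices come from the transformers setting of the ColumnTransformer.

import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Numeric branch: mean imputation followed by standardization.
numeric = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="mean")),
    ("standardscaler", StandardScaler()),
])

# Nominal branch: CustomImputer approximated by a most-frequent SimpleImputer (assumption).
nominal = Pipeline([
    ("customimputer", SimpleImputer(strategy="most_frequent")),
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore", dtype=np.float64)),
])

# All eight diabetes features go through the numeric branch; the nominal branch gets no columns.
transformer = ColumnTransformer(
    transformers=[
        ("numeric", numeric, [0, 1, 2, 3, 4, 5, 6, 7]),
        ("nominal", nominal, []),
    ],
    remainder="passthrough",
    sparse_threshold=0.3,
)

clf = Pipeline([
    ("transformer", transformer),
    ("classifier", DecisionTreeClassifier(criterion="gini", random_state=62501)),
])

# Nested parameters are addressed with the step-name + '__' convention from the
# flow description, e.g.:
clf.set_params(classifier__min_samples_leaf=1)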

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.
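These files correspond to the artifacts that the openml-python client generates when a model is run on a task locally and then published. A hedged sketch, reusing the clf pipeline from the reconstruction above and the test-server configuration assumed earlier:

import openml

openml.config.server = "https://test.openml.org/api/v1/xml"  # assumption: test server

task = openml.tasks.get_task(119)                # supervised classification on diabetes
run = openml.runs.run_model_on_task(clf, task)   # evaluates the pipeline on the task's splits
# run.publish() would upload the run description XML and the prediction ARFF.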

18 Evaluation measures

The measure names are not preserved in this export; each value below is a cross-validation result on a 10% holdout set, and values marked "per class" also have per-class breakdowns on the original page.

0.6988 (per class), 0.7227 (per class), 0.3994, 0.3751, 0.2767, 0.4589, 0.7233, 253 (per class), 0.7221 (per class), 0.7233, 0.9463, 0.6029, 0.4813, 0.526, 1.093, 0.6988
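The measure names behind these values can be recovered from the server. A minimal sketch, assuming openml-python populates the evaluations attribute of a fetched run with the server-side measures:

import openml

openml.config.server = "https://test.openml.org/api/v1/xml"  # assumption: test server

run = openml.runs.get_run(32156)
# evaluations maps measure name -> value (e.g. predictive_accuracy, area_under_roc_curve).
for name, value in sorted(run.evaluations.items()):
    print(f"{name}: {value}")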