Run 32004

Task 119 (Supervised Classification) on the diabetes dataset. Uploaded 30-03-2021 by Continuous Integration.
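
This run can be retrieved programmatically with the openml-python client. A minimal sketch, assuming a configured openml-python installation; the TEST prefix in the flow name below suggests the run lives on the OpenML test server, so the server endpoint may need to be adjusted.

    import openml

    # The flow name's TEST prefix suggests the OpenML test server; the URL
    # below is an assumption and may differ for your openml-python version.
    # openml.config.server = "https://test.openml.org/api/v1/xml"

    run = openml.runs.get_run(32004)           # run ID from this page
    task = openml.tasks.get_task(run.task_id)  # Task 119 (Supervised Classification)
    flow = openml.flows.get_flow(run.flow_id)  # the serialized scikit-learn pipeline

    print(run.task_id, run.flow_id)
    print(flow.name)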


Flow

TESTab990a0f6esklearn.pipeline.Pipeline(transformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),classifier=sklearn.tree._classes.DecisionTreeClassifier)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the sketch after the parameter table below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.

Hyperparameters, grouped by flow component (each component name carries the TESTab990a0f6e prefix and version (1) shown in the full flow name above):

sklearn.pipeline.Pipeline (transformer, classifier)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "transformer", "step_name": "transformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}]
    verbose: false

sklearn.compose._column_transformer.ColumnTransformer (numeric, nominal)
    n_jobs: null
    remainder: "passthrough"
    sparse_threshold: 0.3
    transformer_weights: null
    transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 1, 2, 3, 4, 5, 6, 7]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": []}}]
    verbose: false

sklearn.pipeline.Pipeline (numeric branch: simpleimputer, standardscaler)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose: false

sklearn.impute._base.SimpleImputer
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "mean"
    verbose: 0

sklearn.preprocessing._data.StandardScaler
    copy: true
    with_mean: true
    with_std: true

sklearn.pipeline.Pipeline (nominal branch: customimputer, onehotencoder)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "customimputer", "step_name": "customimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose: false

openml.testing.CustomImputer
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "most_frequent"
    verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder
    categories: "auto"
    drop: null
    dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown: "ignore"
    sparse: true

sklearn.tree._classes.DecisionTreeClassifier
    ccp_alpha: 0.0
    class_weight: null
    criterion: "gini"
    max_depth: null
    max_features: null
    max_leaf_nodes: null
    min_impurity_decrease: 0.0
    min_impurity_split: null
    min_samples_leaf: 1
    min_samples_split: 2
    min_weight_fraction_leaf: 0.0
    random_state: 62501
    splitter: "best"
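
The hyperparameters listed above fully specify the pipeline, so an equivalent model can be rebuilt locally. A minimal sketch, assuming scikit-learn is installed; openml.testing.CustomImputer is only available in the openml-python test utilities, so SimpleImputer(strategy="most_frequent") stands in for it here (an assumption), and deprecated settings such as OneHotEncoder's sparse and DecisionTreeClassifier's min_impurity_split are left at their defaults for compatibility with recent scikit-learn releases.

    import numpy as np
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    # Numeric branch: mean imputation followed by standardization,
    # applied to columns 0-7 (see the "transformers" entry above).
    numeric = Pipeline([
        ("simpleimputer", SimpleImputer(strategy="mean")),
        ("standardscaler", StandardScaler()),
    ])

    # Nominal branch: most-frequent imputation and one-hot encoding.
    # SimpleImputer is a stand-in for openml.testing.CustomImputer (assumption).
    nominal = Pipeline([
        ("customimputer", SimpleImputer(strategy="most_frequent")),
        ("onehotencoder", OneHotEncoder(handle_unknown="ignore", dtype=np.float64)),
    ])

    transformer = ColumnTransformer(
        transformers=[
            ("numeric", numeric, [0, 1, 2, 3, 4, 5, 6, 7]),
            ("nominal", nominal, []),
        ],
        remainder="passthrough",
        sparse_threshold=0.3,
    )

    clf = Pipeline([
        ("transformer", transformer),
        ("classifier", DecisionTreeClassifier(criterion="gini", random_state=62501)),
    ])

    # Nested parameters are addressed with the '__' separator described in the
    # flow description (syntax demonstration only; this run used max_depth=None).
    clf.set_params(classifier__max_depth=3)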

Result files

Description (xml)
XML file describing the run, including user-defined evaluation measures.

Predictions (arff)
ARFF file with instance-level predictions generated by the model.
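
Once downloaded, the predictions file can be inspected locally. A minimal sketch, assuming a local copy saved as predictions.arff (a hypothetical path) and using scipy's ARFF reader:

    import pandas as pd
    from scipy.io import arff

    # "predictions.arff" is a hypothetical local copy of the Predictions file above.
    data, meta = arff.loadarff("predictions.arff")
    preds = pd.DataFrame(data)

    # Nominal ARFF attributes load as bytes; decode them for readability.
    for col in preds.select_dtypes(include=[object]).columns:
        preds[col] = preds[col].str.decode("utf-8")

    print(meta)          # attribute names and types
    print(preds.head())  # one row per test instance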

18 Evaluation measures

All measures are estimated via cross-validation on a 10% holdout set; values marked "per class" also have a per-class breakdown on OpenML. Reported values:

0.6988 (per class)
0.7227 (per class)
0.3994
0.3751
0.2767
0.4589
0.7233
253 (per class)
0.7221 (per class)
0.7233
0.9463
0.6029
0.4813
0.526
1.093
0.6988
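
The measures above are computed server-side from the uploaded predictions. Some of them can be reproduced locally with standard scikit-learn metrics; the "correct" and "prediction" column names below are assumptions about the prediction ARFF layout and should be checked against the actual file.

    import pandas as pd
    from scipy.io import arff
    from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

    # Reload the hypothetical local copy of the Predictions file.
    data, _ = arff.loadarff("predictions.arff")
    preds = pd.DataFrame(data)
    for col in preds.select_dtypes(include=[object]).columns:
        preds[col] = preds[col].str.decode("utf-8")

    # Column names are assumptions about the OpenML prediction format.
    y_true = preds["correct"]
    y_pred = preds["prediction"]

    print("predictive accuracy:", accuracy_score(y_true, y_pred))
    print("kappa:              ", cohen_kappa_score(y_true, y_pred))
    print("weighted F1:        ", f1_score(y_true, y_pred, average="weighted"))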