Run 31695

Task 119 (Supervised Classification) on the diabetes dataset, uploaded 30-03-2021 by Continuous Integration
0 likes, 0 downloads, 0 issues, 0 downvotes
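
This run can also be retrieved programmatically. A minimal sketch with openml-python follows; it assumes the run lives on the OpenML test server (the TEST prefix in the flow name and the Continuous Integration uploader suggest this), so the client is pointed at the test API endpoint.

import openml

# Assumption: this run was uploaded to the OpenML test server, not the production server.
openml.config.server = "https://test.openml.org/api/v1/xml"

run = openml.runs.get_run(31695)   # fetch the run object by id
print(run.task_id)                 # 119, the Supervised Classification task
print(run.flow_name)               # the serialized pipeline name shown under "Flow"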


Flow

TESTfd0c589a4asklearn.pipeline.Pipeline(transformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),classifier=sklearn.tree.tree.DecisionTreeClassifier)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.
sklearn.pipeline.Pipeline (1), top-level pipeline (transformer, classifier)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "transformer", "step_name": "transformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}]
    verbose: false

sklearn.compose._column_transformer.ColumnTransformer (1)
    n_jobs: null
    remainder: "passthrough"
    sparse_threshold: 0.3
    transformer_weights: null
    transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 1, 2, 3, 4, 5, 6, 7]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": []}}]
    verbose: false

sklearn.pipeline.Pipeline (1), numeric sub-pipeline (simpleimputer, standardscaler)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose: false

sklearn.impute._base.SimpleImputer (1)
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "mean"
    verbose: 0

sklearn.preprocessing.data.StandardScaler (1)
    copy: true
    with_mean: true
    with_std: true

sklearn.pipeline.Pipeline (1), nominal sub-pipeline (customimputer, onehotencoder)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "customimputer", "step_name": "customimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose: false

openml.testing.CustomImputer (1)
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "most_frequent"
    verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder (1)
    categorical_features: null
    categories: null
    drop: null
    dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown: "ignore"
    n_values: null
    sparse: true

sklearn.tree.tree.DecisionTreeClassifier (1)
    class_weight: null
    criterion: "gini"
    max_depth: null
    max_features: null
    max_leaf_nodes: null
    min_impurity_decrease: 0.0
    min_impurity_split: null
    min_samples_leaf: 1
    min_samples_split: 2
    min_weight_fraction_leaf: 0.0
    presort: false
    random_state: 62501
    splitter: "best"
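
The listing above fully specifies the pipeline. A minimal sketch that rebuilds an equivalent estimator with current scikit-learn imports follows; it assumes openml.testing.CustomImputer behaves like a most-frequent SimpleImputer (its listed parameters match), and it maps the legacy module paths (sklearn.preprocessing.data, sklearn.tree.tree) to their present-day locations.

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Numeric branch: mean imputation followed by standardization.
numeric = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="mean")),
    ("standardscaler", StandardScaler()),
])

# Nominal branch: SimpleImputer stands in for openml.testing.CustomImputer (assumption).
nominal = Pipeline([
    ("customimputer", SimpleImputer(strategy="most_frequent")),
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore")),
])

transformer = ColumnTransformer(
    transformers=[
        ("numeric", numeric, [0, 1, 2, 3, 4, 5, 6, 7]),  # numeric columns, per the serialized flow
        ("nominal", nominal, []),                         # no nominal columns for this dataset
    ],
    remainder="passthrough",
    sparse_threshold=0.3,
)

clf = Pipeline([
    ("transformer", transformer),
    ("classifier", DecisionTreeClassifier(criterion="gini", random_state=62501)),
])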

Result files

Description (xml)
XML file describing the run, including user-defined evaluation measures.

Predictions (arff)
ARFF file with instance-level predictions generated by the model.
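
To inspect those predictions locally, a minimal sketch that loads the ARFF file with SciPy and pandas follows; the filename predictions.arff is a placeholder for wherever the downloaded file is stored.

import pandas as pd
from scipy.io import arff

# Assumption: the run's predictions ARFF has been downloaded as "predictions.arff".
# It typically stores per-instance fold indices, predictions and class confidences.
data, meta = arff.loadarff("predictions.arff")
predictions = pd.DataFrame(data)
print(predictions.head())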

18 Evaluation measures

All values below are reported with cross-validation details on a 10% holdout set; "per class" marks measures that also carry a per-class breakdown.

0.6988 (per class)
0.7227 (per class)
0.3994
0.3751
0.2767
0.4589
0.7233
253 (per class)
0.7221 (per class)
0.7233
0.9463
0.6029
0.4813
0.526
1.093
0.6988
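
These measures are computed server-side from the uploaded predictions. A minimal sketch of how a comparable evaluation could be produced locally with openml-python follows; it assumes the test-server configuration shown earlier, and it uses a bare DecisionTreeClassifier for brevity (the full preprocessing pipeline sketched under "Flow" could be substituted for it).

import openml
import sklearn.metrics
from sklearn.tree import DecisionTreeClassifier

# Assumption: this task lives on the OpenML test server.
openml.config.server = "https://test.openml.org/api/v1/xml"

task = openml.tasks.get_task(119)  # the Supervised Classification task of this run
local_run = openml.runs.run_model_on_task(DecisionTreeClassifier(random_state=62501), task)
fold_scores = local_run.get_metric_fn(sklearn.metrics.accuracy_score)
print(fold_scores.mean())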