Run 7625

Task 1 (Supervised Classification) on the anneal dataset. Uploaded 02-03-2021 by Continuous Integration.
  • openml-python Sklearn_0.24.0. study_1007 study_1008 study_1012 study_1015 study_1019 study_1021 study_1023 study_1027 study_1031 study_1032 study_1033 study_1040 study_1041 study_1045 study_1048 study_1051 study_1054 study_1057 study_1060 study_1069 study_1072 study_1077 study_1080 study_1083 study_1086 study_1089 study_1091 study_1097 study_1098 study_1102 study_1104 study_1107 study_1110 study_1114 study_1116 study_1119 study_1121 study_1122 study_1130 study_1131 study_1133 study_1136 study_1138 study_1142 study_1145 study_1147 study_1150 study_1153 study_1159 study_1162 study_1196 study_1200 study_1202 study_1205 study_1208 study_1211 study_1219 study_1221 study_1224 study_1251 study_1254 study_1257 study_1264 study_1278 study_1279 study_1288 study_1291 study_1294 study_1295 study_1309 study_1312 study_1315 study_1318 study_1327 study_1330 study_1333 study_1336 study_1342 study_1345 study_1348 study_1351 study_1359 study_1364 study_1365 study_1386 study_1392 study_1394 study_1398 study_1401 study_1409 study_1413 study_1430 study_1434 study_1437 study_1440 study_1443 study_1446 study_1448 study_1449 study_1450 study_1562 study_1563 study_1564 study_1566 study_2953


Flow

sklearn.pipeline.Pipeline(preprocess=sklearn.compose._column_transformer.ColumnTransformer(cat=sklearn.preprocessing._encoders.OneHotEncoder,cont=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler)),estimator=sklearn.tree._classes.DecisionTreeClassifier)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__'. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.
sklearn.preprocessing._data.StandardScaler(2)
  copy: true
  with_mean: true
  with_std: true

sklearn.tree._classes.DecisionTreeClassifier(3)
  ccp_alpha: 0.0
  class_weight: null
  criterion: "gini"
  max_depth: null
  max_features: null
  max_leaf_nodes: null
  min_impurity_decrease: 0.0
  min_impurity_split: null
  min_samples_leaf: 1
  min_samples_split: 2
  min_weight_fraction_leaf: 0.0
  random_state: 54917
  splitter: "best"

sklearn.preprocessing._encoders.OneHotEncoder(4)
  categories: "auto"
  drop: null
  dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
  handle_unknown: "ignore"
  sparse: true

sklearn.impute._base.SimpleImputer(5)
  add_indicator: false
  copy: true
  fill_value: null
  missing_values: NaN
  strategy: "median"
  verbose: 0

sklearn.pipeline.Pipeline(preprocess=sklearn.compose._column_transformer.ColumnTransformer(cat=sklearn.preprocessing._encoders.OneHotEncoder,cont=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler)),estimator=sklearn.tree._classes.DecisionTreeClassifier)(1)
  memory: null
  steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "preprocess", "step_name": "preprocess"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": "estimator"}}]
  verbose: false

sklearn.compose._column_transformer.ColumnTransformer(cat=sklearn.preprocessing._encoders.OneHotEncoder,cont=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler))(1)
  n_jobs: null
  remainder: "drop"
  sparse_threshold: 0.3
  transformer_weights: null
  transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "cat", "step_name": "cat", "argument_1": {"oml-python:serialized_object": "function", "value": "openml.extensions.sklearn.cat"}}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "cont", "step_name": "cont", "argument_1": {"oml-python:serialized_object": "function", "value": "openml.extensions.sklearn.cont"}}}]
  verbose: false

sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler)(1)
  memory: null
  steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
  verbose: false
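The flow and its parameters above can be reconstructed in scikit-learn roughly as follows. This is a sketch: the column indices `cat_columns` and `cont_columns` are illustrative stand-ins for OpenML's `openml.extensions.sklearn.cat` and `cont` selectors, and the toy data is not the anneal dataset.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Illustrative column indices. In the OpenML flow these are supplied by the
# openml.extensions.sklearn.cat / .cont helpers, which pick the categorical
# and continuous columns of the task's dataset at run time.
cat_columns = [0]
cont_columns = [1]

preprocess = ColumnTransformer(
    transformers=[
        # The flow recorded sparse=true, which is the default in sklearn 0.24.
        ("cat", OneHotEncoder(handle_unknown="ignore"), cat_columns),
        ("cont", Pipeline([
            ("simpleimputer", SimpleImputer(strategy="median")),
            ("standardscaler", StandardScaler()),
        ]), cont_columns),
    ],
    remainder="drop",
    sparse_threshold=0.3,
)

clf = Pipeline([
    ("preprocess", preprocess),
    ("estimator", DecisionTreeClassifier(random_state=54917)),
])

# Tiny toy data: one categorical column, one numeric column with a missing value.
X = np.array([[0.0, 1.0], [1.0, 2.0], [0.0, np.nan], [1.0, 4.0]])
y = np.array([0, 1, 0, 1])
clf.fit(X, y)
```

Fitting the pipeline imputes and scales the continuous column, one-hot encodes the categorical column, and trains the tree on the combined features.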

Result files

xml
Description

XML file describing the run, including user-defined evaluation measures.

arff
Predictions

ARFF file with instance-level predictions generated by the model.

17 Evaluation measures

0.9895 ± 0.0114 (per class; 10-fold cross-validation)
0.9932 ± 0.008 (per class; 10-fold cross-validation)
0.9832 ± 0.0192 (10-fold cross-validation)
0.9756 ± 0.0268 (10-fold cross-validation)
0.0022 ± 0.0026 (10-fold cross-validation)
0.1343 ± 0.0012 (10-fold cross-validation)
0.9933 ± 0.0078 (10-fold cross-validation)
898 (per class; 10-fold cross-validation)
0.9934 ± 0.0065 (per class; 10-fold cross-validation)
0.9933 ± 0.0078 (10-fold cross-validation)
1.1915 ± 0.0248 (10-fold cross-validation)
0.0166 ± 0.0194 (10-fold cross-validation)
0.2582 ± 0.0024 (10-fold cross-validation)
0.0472 ± 0.0356 (10-fold cross-validation)
0.1828 ± 0.1377 (10-fold cross-validation)
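The per-fold statistics above are mean ± standard deviation over a 10-fold cross-validation. The procedure can be sketched with scikit-learn as follows; the synthetic data here stands in for the anneal dataset, whose actual fold splits are defined by OpenML Task 1.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data with an easily learnable target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# One accuracy score per fold; the run page reports mean +/- std over folds.
scores = cross_val_score(DecisionTreeClassifier(random_state=54917), X, y, cv=10)
print(f"{scores.mean():.4f} \u00b1 {scores.std():.4f}")
```

For a classifier, `cross_val_score` stratifies the 10 folds by class, which matches the per-class breakdowns reported above.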