OpenML
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
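As a quick illustration of the baseline use described above, here is a minimal scikit-learn sketch; the toy data is made up for the example.

```python
from sklearn.dummy import DummyClassifier

X = [[0], [1], [2], [3]]   # toy features, for illustration only
y = [0, 0, 1, 1]           # toy labels

# Always predict the most frequent training class, ignoring the features.
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(X, y)
print(baseline.predict([[5], [6]]))  # both predictions are the majority class
print(baseline.score(X, y))          # accuracy of the trivial baseline
```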
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
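A minimal sketch of the fit/transform chaining described above, using a scaler and a classifier chosen here only for illustration.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Intermediate steps must implement fit/transform; only the last step is a plain estimator.
pipe = Pipeline(steps=[
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

X = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 3.0]]  # toy data
y = [0, 0, 1, 1]
pipe.fit(X, y)                    # fit_transform each step, then fit the final estimator
print(pipe.predict([[1.5, 1.5]]))
```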
Encode categorical features as an integer array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
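A hedged sketch of the integer encoding described above; the category values are invented for the example.

```python
from sklearn.preprocessing import OrdinalEncoder

X = [["red", "S"], ["green", "M"], ["blue", "L"], ["green", "S"]]  # toy categorical rows

enc = OrdinalEncoder()
X_int = enc.fit_transform(X)  # each column's categories are mapped to 0..n_categories-1
print(enc.categories_)        # the learned category list per feature
print(X_int)
```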
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
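A minimal sketch of the one-hot encoding described above, assuming scikit-learn ≥ 1.2 (where the dense/sparse switch is called sparse_output); the category values are invented.

```python
from sklearn.preprocessing import OneHotEncoder

X = [["red", "S"], ["green", "M"], ["blue", "L"]]  # toy categorical rows

enc = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
X_onehot = enc.fit_transform(X)        # one binary column per observed category value
print(enc.get_feature_names_out())
print(X_onehot)
```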
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
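A small sketch of the unsupervised selection described above; the toy matrix has one constant column.

```python
from sklearn.feature_selection import VarianceThreshold

X = [[0, 2.1, 0.0],   # the first column is constant, so its variance is zero
     [0, 1.9, 1.0],
     [0, 2.0, 0.5]]

selector = VarianceThreshold()         # default threshold 0.0: drop zero-variance features
X_reduced = selector.fit_transform(X)  # note: no y is needed
print(selector.get_support())          # boolean mask of the kept features
print(X_reduced)
```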
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
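A minimal sketch of the online updates mentioned above, with invented data; the classes must be supplied on the first partial_fit call.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1.0, 2.0], [1.1, 1.9], [3.0, 4.0], [3.2, 4.1]])  # toy data
y = np.array([0, 0, 1, 1])

gnb = GaussianNB()
gnb.partial_fit(X[:2], y[:2], classes=np.array([0, 1]))  # first batch: declare the classes
gnb.partial_fit(X[2:], y[2:])  # later batches update the running means and variances
print(gnb.predict([[1.0, 2.0]]))
```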
A decision tree classifier.
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
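A short sketch of the fit/score interface described above, with an estimator and parameter distributions chosen only for illustration.

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Sample n_iter candidates from the distributions instead of trying every combination.
search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions={"max_depth": randint(1, 10),
                         "min_samples_leaf": randint(1, 20)},
    n_iter=10, cv=5, random_state=0,
)
search.fit(X, y)   # afterwards predict/score delegate to the refit best estimator
print(search.best_params_, round(search.best_score_, 3))
```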
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
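A minimal sketch of the averaging meta-estimator described above, on a toy dataset bundled with scikit-learn.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Each tree is grown on a bootstrap sub-sample; class probabilities are averaged.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)
print(forest.predict(X[:3]))
print(forest.feature_importances_.round(3))  # importances aggregated over all trees
```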
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
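A small check of the z = (x - u) / s formula quoted above, with invented numbers.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # toy single-feature data

scaler = StandardScaler()
Z = scaler.fit_transform(X)                 # z = (x - u) / s, computed per feature
print(scaler.mean_, scaler.scale_)          # u and s learned from the training data
print(np.allclose(Z, (X - scaler.mean_) / scaler.scale_))  # True
```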
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Dataset representing the XOR operation
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
4 instances - 3 features - 0 classes - 0 missing values
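The listing above reports 4 instances and 3 features; one plausible reading (an assumption, not taken from the dataset page) is two binary inputs plus the XOR result as a third numeric column, e.g.:

```python
import numpy as np

# Two binary inputs and their XOR as a third column -- an assumed layout for illustration.
xor = np.array([[0, 0, 0],
                [0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]])
X, y = xor[:, :2], xor[:, 2]
print(X)
print(y)
```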
Dataset representing the XOR operation
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
4 instances - 3 features - 0 classes - 0 missing values
The weather problem is a tiny dataset that we will use repeatedly to illustrate machine learning methods. Entirely fictitious, it supposedly concerns the conditions that are suitable for playing some…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
14 instances - 5 features - 2 classes - 0 missing values
The weather problem is a tiny dataset that we will use repeatedly to illustrate machine learning methods. Entirely fictitious, it supposedly concerns the conditions that are suitable for playing some…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
14 instances - 5 features - 2 classes - 0 missing values
Diabetes dataset: Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
442 instances - 11 features - 0 classes - 0 missing values
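The same data ships with scikit-learn (10 baseline features plus the regression target, matching the 11 listed features); a minimal loading sketch:

```python
from sklearn.datasets import load_diabetes

diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
print(X.shape)                  # (442, 10) baseline variables
print(diabetes.feature_names)   # age, sex, bmi, bp, s1 ... s6
print(y[:5])                    # disease progression one year after baseline
```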
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.742, f_measure: 0.7188, kappa: 0.3729, kb_relative_information_score: 0.2569, mean_absolute_error: 0.3326, mean_prior_absolute_error: 0.4545, weighted_recall: 0.724, number_of_instances: 768, precision: 0.717, predictive_accuracy: 0.724, prior_entropy: 0.9331, relative_absolute_error: 0.7317, root_mean_prior_squared_error: 0.4766, root_mean_squared_error: 0.4451, root_relative_squared_error: 0.9337, unweighted_recall: 0.6807,
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9292, f_measure: 0.8723, kappa: 0.7445, kb_relative_information_score: 0.6238, mean_absolute_error: 0.2022, mean_prior_absolute_error: 0.4978, weighted_recall: 0.8722, number_of_instances: 227, precision: 0.8732, predictive_accuracy: 0.8722, prior_entropy: 1.0024, relative_absolute_error: 0.4062, root_mean_prior_squared_error: 0.5008, root_mean_squared_error: 0.3141, root_relative_squared_error: 0.6271, unweighted_recall: 0.8729,
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9292, f_measure: 0.8723, kappa: 0.7445, kb_relative_information_score: 0.6238, mean_absolute_error: 0.2022, mean_prior_absolute_error: 0.4978, weighted_recall: 0.8722, number_of_instances: 227, precision: 0.8732, predictive_accuracy: 0.8722, prior_entropy: 1.0024, relative_absolute_error: 0.4062, root_mean_prior_squared_error: 0.5008, root_mean_squared_error: 0.3141, root_relative_squared_error: 0.6271, unweighted_recall: 0.8729,
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.929, f_measure: 0.8767, kappa: 0.7536, kb_relative_information_score: 0.6099, mean_absolute_error: 0.2075, mean_prior_absolute_error: 0.4978, weighted_recall: 0.8767, number_of_instances: 227, precision: 0.8789, predictive_accuracy: 0.8767, prior_entropy: 1.0024, relative_absolute_error: 0.4168, root_mean_prior_squared_error: 0.5008, root_mean_squared_error: 0.3196, root_relative_squared_error: 0.6382, unweighted_recall: 0.8779,
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.8423, f_measure: 0.844, kappa: 0.6847, kb_relative_information_score: 0.6833, mean_absolute_error: 0.1559, mean_prior_absolute_error: 0.4948, weighted_recall: 0.8441, number_of_instances: 14980, precision: 0.844, predictive_accuracy: 0.8441, prior_entropy: 0.9924, relative_absolute_error: 0.3152, root_mean_prior_squared_error: 0.4974, root_mean_squared_error: 0.3949, root_relative_squared_error: 0.794, unweighted_recall: 0.8423,
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "predict",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
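A minimal sketch of the exhaustive search described above, with an estimator and grid chosen only for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Every combination in param_grid is cross-validated; the best one is refit on all data.
grid = GridSearchCV(SVC(),
                    param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
                    cv=5)
grid.fit(X, y)   # predict/score then use the refit best estimator
print(grid.best_params_, round(grid.best_score_, 3))
```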
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Concatenates results of multiple transformer objects. This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
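A short sketch of the parallel-then-concatenate behaviour described above; the two transformers are chosen only for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import FeatureUnion

X, y = load_iris(return_X_y=True)

# Both transformers see the same input; their outputs are concatenated column-wise.
union = FeatureUnion([("pca", PCA(n_components=2)),
                      ("kbest", SelectKBest(f_classif, k=1))])
X_combined = union.fit_transform(X, y)
print(X_combined.shape)  # (150, 3): two PCA components plus one selected feature
```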
Dimensionality reduction using truncated SVD (aka LSA). This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
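A minimal sketch of the sparse-friendly reduction described above; the random sparse matrix is purely illustrative.

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# Unlike PCA, TruncatedSVD does not center the data, so sparse input stays sparse.
X = sparse_random(100, 50, density=0.05, random_state=0)

svd = TruncatedSVD(n_components=5, random_state=0)
X_reduced = svd.fit_transform(X)
print(X_reduced.shape)                     # (100, 5)
print(svd.explained_variance_ratio_.sum())
```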
Select features according to a percentile of the highest scores.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
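A small sketch of the sequential fitting described above; by default the base classifier being copied is a depth-1 decision tree.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)

# Copies of the base classifier are fit in sequence, up-weighting previously
# misclassified samples before each new copy is trained.
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X, y)
print(round(ada.score(X, y), 3))
```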
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
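A minimal multiclass sketch; note that recent scikit-learn versions handle the multiclass case multinomially by default rather than via the 'ovr' option quoted above.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)   # three classes

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
print(clf.predict(X[:3]))
print(clf.predict_proba(X[:3]).round(3))  # per-class probabilities
```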
A Bagging classifier. A Bagging classifier is an ensemble meta-estimator that fits base classifiers each on random subsets of the original dataset and then aggregates their individual predictions…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
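A short sketch of the subset-and-aggregate scheme described above; the default base estimator is a decision tree.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier

X, y = load_iris(return_X_y=True)

# Each base tree is fit on a random bootstrap subset of the rows;
# their predictions are aggregated by voting.
bag = BaggingClassifier(n_estimators=25, random_state=0)
bag.fit(X, y)
print(round(bag.score(X, y), 3))
```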
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A Bagging classifier. A Bagging classifier is an ensemble meta-estimator that fits base classifiers each on random subsets of the original dataset and then aggregates their individual predictions…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Soft Voting/Majority Rule classifier for unfitted estimators.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
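A minimal OLS sketch with synthetic data generated for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# y = 1 + 2*x plus a little noise; OLS should recover the intercept and slope.
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 1.0 + 2.0 * X.ravel() + rng.normal(scale=0.1, size=50)

ols = LinearRegression()
ols.fit(X, y)
print(ols.intercept_, ols.coef_)  # close to 1 and [2]
print(round(ols.score(X, y), 3))  # R^2 on the training data
```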
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9978, f_measure: 0.9372, kappa: 0.8597, kb_relative_information_score: 0.836, mean_absolute_error: 0.0372, mean_prior_absolute_error: 0.141, weighted_recall: 0.9426, number_of_instances: 296, precision: 0.9423, predictive_accuracy: 0.9426, prior_entropy: 1.309, relative_absolute_error: 0.2638, root_mean_prior_squared_error: 0.2709, root_mean_squared_error: 0.1099, root_relative_squared_error: 0.4057,
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Soft Voting/Majority Rule classifier for unfitted estimators.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
test description
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as an integer array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Test study for the Python tutorial on studies
3 datasets, 3 tasks, 1 flows, 3 runs
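A hedged openml-python sketch for inspecting a study like this one; the study ID below is a placeholder rather than the ID of this entry, and the attribute names follow the openml-python documentation.

```python
import openml

STUDY_ID = 123  # placeholder, not the ID of the study listed above

study = openml.study.get_study(STUDY_ID)
print(study.name)
# Lists of the dataset, task, flow and run IDs attached to the study.
print(len(study.data), len(study.tasks), len(study.flows), len(study.runs))
```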
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.8104, f_measure: 0.8199, kappa: 0.4228, kb_relative_information_score: 0.2508, mean_absolute_error: 0.2191, mean_prior_absolute_error: 0.3266, weighted_recall: 0.8295, number_of_instances: 522, precision: 0.8162, predictive_accuracy: 0.8295, prior_entropy: 0.7318, relative_absolute_error: 0.671, root_mean_prior_squared_error: 0.4037, root_mean_squared_error: 0.3478, root_relative_squared_error: 0.8615, unweighted_recall: 0.6916,
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9925, f_measure: 0.877, kappa: 0.8719, kb_relative_information_score: 0.6594, mean_absolute_error: 0.0706, mean_prior_absolute_error: 0.1212, weighted_recall: 0.884, number_of_instances: 500, precision: 0.8872, predictive_accuracy: 0.884, prior_entropy: 3.6489, relative_absolute_error: 0.5824, root_mean_prior_squared_error: 0.246, root_mean_squared_error: 0.1595, root_relative_squared_error: 0.6482, unweighted_recall: 0.8017,
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "score_samples",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.8277, f_measure: 0.7574, kappa: 0.4592, kb_relative_information_score: 0.3097, mean_absolute_error: 0.3187, mean_prior_absolute_error: 0.4545, weighted_recall: 0.7617, number_of_instances: 768, precision: 0.7565, predictive_accuracy: 0.7617, prior_entropy: 0.9331, relative_absolute_error: 0.7013, root_mean_prior_squared_error: 0.4766, root_mean_squared_error: 0.3989, root_relative_squared_error: 0.8369, unweighted_recall: 0.7226,
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via :meth:`partial_fit`. For details on algorithm used to update feature means and variance online, see Stanford CS…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Test suite for the Python tutorial on benchmark suites
20 datasets, 20 tasks, 0 flows, 0 runs
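Similarly, a hedged openml-python sketch for inspecting a benchmark suite; the suite ID is again a placeholder, not the ID of this entry.

```python
import openml

SUITE_ID = 1  # placeholder, not the ID of the suite listed above

suite = openml.study.get_suite(SUITE_ID)
print(suite.name)
print(len(suite.tasks), len(suite.data))  # task and dataset IDs in the suite
```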
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact