0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.8268, f_measure: 0.7476, kappa: 0.456, kb_relative_information_score: 0.3226, mean_absolute_error: 0.3141, mean_prior_absolute_error: 0.4589, weighted_recall: 0.747, number_of_instances: 253, precision: 0.7483, predictive_accuracy: 0.747, prior_entropy: 0.9463, relative_absolute_error: 0.6844, root_mean_prior_squared_error: 0.4813, root_mean_squared_error: 0.4023, root_relative_squared_error: 0.8359, unweighted_recall: 0.729,
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
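A minimal usage sketch for a random forest flow like the one above; the dataset, split, and hyperparameter values are illustrative assumptions, not taken from the runs listed here:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 100 trees, each fit on a bootstrap sub-sample; class probabilities are averaged
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))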
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9947, f_measure: 0.969, kappa: 0.9374, kb_relative_information_score: 0.8914, mean_absolute_error: 0.0579, mean_prior_absolute_error: 0.4948, weighted_recall: 0.969, number_of_instances: 14980, precision: 0.969, predictive_accuracy: 0.969, prior_entropy: 0.9924, relative_absolute_error: 0.117, root_mean_prior_squared_error: 0.4974, root_mean_squared_error: 0.159, root_relative_squared_error: 0.3196, unweighted_recall: 0.9684,
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "score_samples", "predict", "predict_proba", "decision_function", "transform" and…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
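A sketch of how such a randomized search could be wired up; the wrapped estimator, search space, and iteration budget are illustrative assumptions:

    from scipy.stats import randint
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_iris(return_X_y=True)
    # sample 20 candidate settings from the distributions and cross-validate each
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={"n_estimators": randint(10, 200), "max_depth": randint(2, 10)},
        n_iter=20, cv=3, random_state=0)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)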
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
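A small sketch of the z = (x - u) / s standardization described above, on made-up data:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    scaler = StandardScaler()
    Xz = scaler.fit_transform(X)        # (x - u) / s, computed per feature
    print(scaler.mean_, scaler.scale_)  # u and s learned from the training data
    print(Xz.ravel())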
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
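A sketch of the baseline pattern this flow represents; the most_frequent strategy is an assumption:

    from sklearn.datasets import load_iris
    from sklearn.dummy import DummyClassifier

    X, y = load_iris(return_X_y=True)
    # always predicts the majority class; useful only as a sanity-check baseline
    baseline = DummyClassifier(strategy="most_frequent")
    baseline.fit(X, y)
    print(baseline.score(X, y))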
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
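A minimal Pipeline sketch showing the shape these pipeline flows take; the concrete steps (a scaler followed by a logistic regression) are assumptions:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)
    # intermediate steps must implement fit/transform; only the last step is the estimator
    pipe = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression(max_iter=1000))])
    pipe.fit(X, y)
    print(pipe.score(X, y))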
bla
3 datasets, 3 tasks, 0 flows, 0 runs
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "score_samples",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
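A sketch of an exhaustive grid search; the wrapped estimator and parameter grid are illustrative assumptions:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    # every combination in the grid is fit and scored with 5-fold cross-validation
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
    search.fit(X, y)
    print(search.best_params_)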
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
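A sketch of the imputation transformer on made-up data; the mean strategy is an assumption:

    import numpy as np
    from sklearn.impute import SimpleImputer

    X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
    # NaNs in each column are replaced by that column's mean
    imputer = SimpleImputer(strategy="mean")
    print(imputer.fit_transform(X))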
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Duplicate class alias for sklearn's SimpleImputer. Helps bypass the sklearn extension duplicate operation check.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
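A sketch of one-hot encoding categorical features; the toy data and handle_unknown setting are assumptions:

    from sklearn.preprocessing import OneHotEncoder

    X = [["red", "S"], ["green", "M"], ["red", "L"]]
    # each categorical column becomes one binary column per observed category
    encoder = OneHotEncoder(handle_unknown="ignore")
    print(encoder.fit_transform(X).toarray())
    print(encoder.get_feature_names_out())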
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
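A sketch of applying different transformers to different column subsets; the DataFrame and column choices are assumptions:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.DataFrame({"age": [25, 32, 47], "city": ["NYC", "LA", "NYC"]})
    # numeric columns are scaled, categorical columns are one-hot encoded,
    # and the resulting feature blocks are concatenated
    ct = ColumnTransformer([("num", StandardScaler(), ["age"]),
                            ("cat", OneHotEncoder(), ["city"])])
    print(ct.fit_transform(df))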
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
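A minimal logistic regression sketch; the dataset and iteration budget are assumptions:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    # multiclass handling (one-vs-rest vs multinomial) is controlled by the multi_class option
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)
    print(clf.predict_proba(X[:2]))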
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via `partial_fit`. For details on the algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
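A sketch of GaussianNB, including the partial_fit online-update path mentioned in the description; the two-batch split is illustrative:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    clf = GaussianNB()
    # online updates: feed the data in two batches instead of a single fit call
    clf.partial_fit(X[:75], y[:75], classes=np.unique(y))
    clf.partial_fit(X[75:], y[75:])
    print(clf.score(X, y))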
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
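A sketch of ordinary least squares on made-up data, minimizing the residual sum of squares between the observed targets and the linear prediction:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([2.0, 4.1, 5.9, 8.2])
    # finds w and b minimizing sum((y - (X @ w + b)) ** 2)
    reg = LinearRegression().fit(X, y)
    print(reg.coef_, reg.intercept_)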
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via `partial_fit`. For details on the algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "predict",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
C-Support Vector Classification. The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
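A minimal SVC sketch; the kernel and regularization value are illustrative choices:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    # libsvm-based; fit time grows at least quadratically with the number of samples
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, y)
    print(clf.score(X, y))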
A Bagging classifier. A Bagging classifier is an ensemble meta-estimator that fits base classifiers each on random subsets of the original dataset and then aggregate their individual predictions…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
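A sketch of a Bagging classifier around an assumed decision-tree base estimator:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    # each tree is fit on a random bootstrap subset; predictions are aggregated by voting
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))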
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "predict",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
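A sketch of VarianceThreshold dropping a constant (zero-variance) feature from made-up data:

    import numpy as np
    from sklearn.feature_selection import VarianceThreshold

    X = np.array([[0.0, 1.0, 2.0], [0.0, 1.5, 0.5], [0.0, 1.1, 3.0]])
    # the first column has zero variance and is removed; y is never consulted
    selector = VarianceThreshold(threshold=0.0)
    print(selector.fit_transform(X))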
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "score_samples", "predict", "predict_proba", "decision_function", "transform" and…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Duplicate class alias for sklearn's SimpleImputer. Helps bypass the sklearn extension duplicate operation check.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
2 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as: z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
bla
3 datasets, 3 tasks, 0 flows, 0 runs
A decision tree classifier.
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
A Bagging classifier. A Bagging classifier is an ensemble meta-estimator that fits base classifiers each on random subsets of the original dataset and then aggregate their individual predictions…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
C-Support Vector Classification. The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method. It also implements "score_samples",…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Dataset representing the XOR operation
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
4 instances - 3 features - 0 classes - 0 missing values
Dataset representing the XOR operation
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
4 instances - 3 features - 0 classes - 0 missing values
The weather problem is a tiny dataset that we will use repeatedly to illustrate machine learning methods. Entirely fictitious, it supposedly concerns the conditions that are suitable for playing some…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
14 instances - 5 features - 2 classes - 0 missing values
The weather problem is a tiny dataset that we will use repeatedly to illustrate machine learning methods. Entirely fictitious, it supposedly concerns the conditions that are suitable for playing some…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
14 instances - 5 features - 2 classes - 0 missing values
Diabetes dataset: Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
442 instances - 11 features - 0 classes - 0 missing values
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via `partial_fit`. For details on the algorithm used to update feature means and variance online, see Stanford CS…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.742, f_measure: 0.7188, kappa: 0.3729, kb_relative_information_score: 0.2569, mean_absolute_error: 0.3326, mean_prior_absolute_error: 0.4545, weighted_recall: 0.724, number_of_instances: 768, precision: 0.717, predictive_accuracy: 0.724, prior_entropy: 0.9331, relative_absolute_error: 0.7317, root_mean_prior_squared_error: 0.4766, root_mean_squared_error: 0.4451, root_relative_squared_error: 0.9337, unweighted_recall: 0.6807,
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9303, f_measure: 0.8854, kappa: 0.7714, kb_relative_information_score: 0.6094, mean_absolute_error: 0.2093, mean_prior_absolute_error: 0.4978, weighted_recall: 0.8855, number_of_instances: 227, precision: 0.8889, predictive_accuracy: 0.8855, prior_entropy: 1.0024, relative_absolute_error: 0.4203, root_mean_prior_squared_error: 0.5008, root_mean_squared_error: 0.3143, root_relative_squared_error: 0.6276, unweighted_recall: 0.887,
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.9285, f_measure: 0.8987, kappa: 0.7974, kb_relative_information_score: 0.6289, mean_absolute_error: 0.2013, mean_prior_absolute_error: 0.4978, weighted_recall: 0.8987, number_of_instances: 227, precision: 0.8996, predictive_accuracy: 0.8987, prior_entropy: 1.0024, relative_absolute_error: 0.4044, root_mean_prior_squared_error: 0.5008, root_mean_squared_error: 0.3075, root_relative_squared_error: 0.6141, unweighted_recall: 0.8994,
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via `partial_fit`. For details on the algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
0 likes - 0 downloads - 0 reach - area_under_roc_curve: 0.8435, f_measure: 0.8454, kappa: 0.6873, kb_relative_information_score: 0.6861, mean_absolute_error: 0.1546, mean_prior_absolute_error: 0.4948, weighted_recall: 0.8454, number_of_instances: 14980, precision: 0.8453, predictive_accuracy: 0.8454, prior_entropy: 0.9924, relative_absolute_error: 0.3125, root_mean_prior_squared_error: 0.4974, root_mean_squared_error: 0.3932, root_relative_squared_error: 0.7906, unweighted_recall: 0.8435,
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, ..., wp) to minimize the residual sum of squares between the observed targets in the dataset,…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features.…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Encode categorical features as an integer array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
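A sketch of ordinal encoding on made-up categories:

    from sklearn.preprocessing import OrdinalEncoder

    X = [["low"], ["high"], ["medium"], ["low"]]
    # categories get integer codes in lexicographic order: high=0, low=1, medium=2
    encoder = OrdinalEncoder()
    print(encoder.fit_transform(X))
    print(encoder.categories_)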
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
Gaussian Naive Bayes (GaussianNB). Can perform online updates to model parameters via `partial_fit`. For details on the algorithm used to update feature means and variance online, see Stanford CS…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Imputation transformer for completing missing values.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
Randomized search on hyper parameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
A decision tree classifier.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact