Area Under the ROC Curve achieved by the landmarker weka.classifiers.trees.REPTree -L 2
data quality
Error rate achieved by the landmarker weka.classifiers.trees.REPTree -L 2
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.trees.REPTree -L 2
data quality
Area Under the ROC Curve achieved by the landmarker weka.classifiers.trees.REPTree -L 3
data quality
Error rate achieved by the landmarker weka.classifiers.trees.REPTree -L 3
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.trees.REPTree -L 3
data quality
Area Under the ROC Curve achieved by the landmarker weka.classifiers.trees.RandomTree -depth 1
data quality
Error rate achieved by the landmarker weka.classifiers.trees.RandomTree -depth 1
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.trees.RandomTree -depth 1
data quality
Area Under the ROC Curve achieved by the landmarker weka.classifiers.trees.RandomTree -depth 2
data quality
Error rate achieved by the landmarker weka.classifiers.trees.RandomTree -depth 2
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.trees.RandomTree -depth 2
data quality
Area Under the ROC Curve achieved by the landmarker weka.classifiers.trees.RandomTree -depth 3
data quality
Error rate achieved by the landmarker weka.classifiers.trees.RandomTree -depth 3
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.trees.RandomTree -depth 3
data quality
Standard deviation of the number of distinct values among attributes of the nominal type.
data quality
Area Under the ROC Curve achieved by the landmarker weka.classifiers.lazy.IBk
data quality
Error rate achieved by the landmarker weka.classifiers.lazy.IBk
data quality
Kappa coefficient achieved by the landmarker weka.classifiers.lazy.IBk
data quality
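The landmarkers above are meta-features: a fast, heavily constrained learner (a depth-limited tree or a nearest-neighbour classifier) is run on the dataset and its AUC, error rate, and kappa are recorded as indicators of how learnable the data is. A minimal sketch of the idea, assuming scikit-learn, with DecisionTreeClassifier and KNeighborsClassifier as stand-ins for the Weka learners named above:

    # Sketch: landmarker-style meta-features; the Weka learners listed above are
    # approximated here by scikit-learn stand-ins.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.metrics import cohen_kappa_score, make_scorer

    X, y = load_breast_cancer(return_X_y=True)
    landmarkers = {
        "tree, depth 1": DecisionTreeClassifier(max_depth=1, random_state=0),
        "tree, depth 3": DecisionTreeClassifier(max_depth=3, random_state=0),
        "1-NN":          KNeighborsClassifier(n_neighbors=1),
    }
    for name, clf in landmarkers.items():
        auc   = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
        acc   = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
        kappa = cross_val_score(clf, X, y, cv=10, scoring=make_scorer(cohen_kappa_score)).mean()
        print(f"{name}: AUC={auc:.3f}  error rate={1 - acc:.3f}  kappa={kappa:.3f}")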
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
The number of observations in the current subgroup.
evaluation measure
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
The number of positive instances in the subgroup.
evaluation measure
The probability of a subgroup.
evaluation measure
The quality of the discovered subgroup.
evaluation measure
Subgroup discovery measure.
evaluation measure
Area under the ROC curve for the 10 best subgroups.
evaluation measure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
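A minimal sketch of k-fold cross-validation (k = 10), assuming scikit-learn and a generic classifier; the various cross-validation procedures listed in this section differ only in the number of folds, repeats, and whether the folds are stratified:

    # Sketch: 10-fold cross-validation
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        clf.fit(X[train_idx], y[train_idx])                                       # train on k-1 folds
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))      # test on the held-out fold
    print(f"mean accuracy over 10 folds: {np.mean(scores):.3f}")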
A custom holdout partitions a set of observations into a training set and a test set in a predefined way. This is typically done in order to compare the performance of different predictive algorithms…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
The weight of the bias component in the learning algorithm's error. That is, the percentage of errors that can be attributed to bias error (underfitting) as opposed to variance error (overfitting).
flow quality
Empirically calculated average ratio of bias error in the total error, using Kohavi-Wolpert's definition of bias and variance.
flow quality
Empirically determined average ratio of bias error in the total error, using Webb's definition of bias and variance.
flow quality
True if the algorithm can perform classification, false otherwise.
flow quality
True if the algorithm can perform regression, false otherwise.
flow quality
Empirically calculated average ratio of variance error in the total error, using Kohavi-Wolpert's definition of bias and variance.
flow quality
Empirically determined average ratio of variance error in the total error, using Webb's definition of bias and variance.
flow quality
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
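A sketch of leave-one-out evaluation, assuming scikit-learn; it is the same loop as k-fold cross-validation with the number of folds set to the number of instances:

    # Sketch: leave-one-out cross-validation (each fold tests exactly one instance)
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=LeaveOneOut())
    print(f"leave-one-out accuracy: {scores.mean():.3f}")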
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
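A sketch of a k% holdout, assuming scikit-learn; here 33% of the data is held out for testing, and repeating the split with different random seeds gives random subsampling:

    # Sketch: holdout with a 33% test set
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"holdout accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")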
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Description to be added
estimation procedure
A custom holdout partitions a set of observations into a training set and a test set in a predefined way. This is typically done in order to compare the performance of different predictive algorithms…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
The area under the ROC curve (AUROC), calculated using the Mann-Whitney U-test. The curve is constructed by shifting the threshold for a positive prediction from 0 to 1, yielding a series of true…
evaluation measure
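A sketch of the rank-based (Mann-Whitney) view of the AUROC, assuming NumPy; the helper name auroc is illustrative. The AUC equals the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one, with ties counted as one half:

    # Sketch: AUROC via the Mann-Whitney pairwise-rank formulation
    import numpy as np

    def auroc(y_true, y_score):                         # illustrative helper
        pos = y_score[y_true == 1]
        neg = y_score[y_true == 0]
        greater = (pos[:, None] > neg[None, :]).sum()   # positive scored above negative
        ties    = (pos[:, None] == neg[None, :]).sum()  # ties count as 0.5
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    y_true  = np.array([0, 0, 1, 1, 1, 0])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
    print(auroc(y_true, y_score))   # 8 of 9 pairs correctly ordered -> ~0.889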
No data.
evaluation measure
The time in seconds to build a single model on all data.
evaluation measure
The memory, in bytes, needed to build a single model on all data.
evaluation measure
Used for survival analysis.
evaluation measure
Entropy, in bits, of the class distribution generated by the model's predictions. Calculated by taking the sum of -log2(predictedProb) over all instances, where predictedProb is the probability…
evaluation measure
Entropy reduction, in bits, between the class distribution generated by the model's predictions, and the prior class distribution. Calculated by taking the difference of the prior_class_complexity and…
evaluation measure
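A sketch of class complexity and its gain, assuming NumPy; the helper name class_complexity is illustrative. The complexity is the total number of bits needed to encode the true classes under the model's predicted probabilities, and the gain is the reduction relative to encoding them under the prior class distribution:

    # Sketch: class complexity (bits) of probabilistic predictions and the gain over the prior
    import numpy as np

    def class_complexity(y_true, proba):            # illustrative helper
        p = np.clip(proba[np.arange(len(y_true)), y_true], 1e-12, 1.0)
        return -np.log2(p).sum()                    # sum of -log2(prob assigned to the true class)

    y_true = np.array([0, 1, 1, 0])
    proba  = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]])

    prior = np.bincount(y_true) / len(y_true)       # prior class distribution
    prior_complexity = class_complexity(y_true, np.tile(prior, (len(y_true), 1)))
    model_complexity = class_complexity(y_true, proba)
    print(model_complexity, prior_complexity - model_complexity)   # complexity, gain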
The confusion matrix, or contingency table, is a table that summarizes the number of instances that were predicted to belong to a certain class, versus their actual class. It is an NxN matrix where N…
evaluation measure
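A minimal sketch of building such a matrix, assuming NumPy, with rows indexed by the actual class and columns by the predicted class:

    # Sketch: N x N confusion matrix (rows = actual class, columns = predicted class)
    import numpy as np

    y_true = np.array([0, 0, 1, 1, 2, 2, 2])
    y_pred = np.array([0, 1, 1, 1, 2, 0, 2])

    n = max(y_true.max(), y_pred.max()) + 1
    cm = np.zeros((n, n), dtype=int)
    for actual, predicted in zip(y_true, y_pred):
        cm[actual, predicted] += 1
    print(cm)
    # [[1 1 0]
    #  [0 2 0]
    #  [1 0 2]]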
The sample Pearson correlation coefficient, or 'r': $$r = \frac{\sum ^n _{i=1}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum ^n _{i=1}(X_i - \bar{X})^2} \sqrt{\sum ^n _{i=1}(Y_i - \bar{Y})^2}}$$ It…
evaluation measure
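A worked sketch of the formula above, assuming NumPy:

    # Sketch: sample Pearson correlation coefficient r
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
    dx, dy = x - x.mean(), y - y.mean()
    r = (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())
    print(r)   # close to 1: strong positive linear relationship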
The F-Measure is the harmonic mean of precision and recall, also known as the traditional F-measure, balanced F-score, or F1-score. Formula: 2*Precision*Recall/(Precision+Recall). See:…
evaluation measure
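A worked sketch of the formula, with illustrative counts of true positives, false positives, and false negatives; the helper name is illustrative:

    # Sketch: F1 as the harmonic mean of precision and recall
    def f1_from_counts(tp, fp, fn):                 # illustrative helper
        precision = tp / (tp + fp)
        recall    = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    print(f1_from_counts(tp=8, fp=2, fn=4))         # precision 0.80, recall 0.67 -> F1 ~ 0.73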
Cohen's kappa coefficient is a statistical measure of agreement for qualitative (categorical) items: it measures the agreement of prediction with the true class – 1.0 signifies complete agreement.…
evaluation measure
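A sketch of Cohen's kappa computed from a binary confusion matrix, assuming NumPy; kappa compares the observed agreement with the agreement expected by chance:

    # Sketch: Cohen's kappa from a confusion matrix (rows = actual, columns = predicted)
    import numpy as np

    cm = np.array([[20,  5],
                   [10, 15]])
    n   = cm.sum()
    p_o = np.trace(cm) / n                                  # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # agreement expected by chance
    print((p_o - p_e) / (1 - p_e))                          # 0.4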
The Kononenko and Bratko Information score, divided by the prior entropy of the class distribution. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion for classifier's performance.…
evaluation measure
Bias component (squared) of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi & D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc.…
evaluation measure
Error rate measured in the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi & D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc. of the…
evaluation measure
Intrinsic error component (squared) of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi and D. Wolpert (1996), Bias plus variance decomposition for zero-one loss…
evaluation measure
Variance component of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi and D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc. of…
evaluation measure
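A rough sketch of how these quantities are estimated in the Kohavi-Wolpert scheme, assuming scikit-learn and NumPy: the learner is retrained on several random subsamples, the distribution of its predictions on a fixed test set is recorded per instance, and bias and variance are read off that distribution (labels are treated as noise-free here, so the intrinsic error term is taken as zero):

    # Sketch: Kohavi-Wolpert (1996) bias-variance estimation for 0-1 loss.
    # Assumption: noise-free labels, so the intrinsic error component is zero.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    n_classes, n_rounds = 2, 20
    rng = np.random.default_rng(0)
    votes = np.zeros((len(X_test), n_classes))
    for _ in range(n_rounds):
        idx  = rng.choice(len(X_pool), size=len(X_pool) // 2, replace=False)   # random training sample
        pred = DecisionTreeClassifier(random_state=0).fit(X_pool[idx], y_pool[idx]).predict(X_test)
        votes[np.arange(len(X_test)), pred] += 1

    p_hat  = votes / n_rounds                       # per-instance distribution of predictions
    p_true = np.eye(n_classes)[y_test]              # indicator of the true class
    bias2    = 0.5 * ((p_true - p_hat) ** 2).sum(axis=1).mean()
    variance = 0.5 * (1.0 - (p_hat ** 2).sum(axis=1)).mean()
    print(f"bias^2={bias2:.3f}  variance={variance:.3f}  error={bias2 + variance:.3f}")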
Kononenko and Bratko Information score. This measures predictive accuracy but eliminates the influence of prior probabilities. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion…
evaluation measure
The Matthews correlation coefficient takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very…
evaluation measure
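A sketch of the binary-class Matthews correlation coefficient from confusion-matrix counts; the helper name is illustrative:

    # Sketch: Matthews correlation coefficient for binary classification
    import math

    def mcc(tp, tn, fp, fn):                        # illustrative helper
        num = tp * tn - fp * fn
        den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return num / den if den else 0.0

    print(mcc(tp=90, tn=5, fp=5, fn=10))            # stays informative on imbalanced classes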
The mean absolute error (MAE) measures how close the model's predictions are to the actual target values. It is the sum of the absolute value of the difference of each instance prediction and the…
evaluation measure
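A short worked sketch, assuming NumPy:

    # Sketch: mean absolute error
    import numpy as np

    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5,  0.0, 2.0, 8.0])
    print(np.abs(y_true - y_pred).mean())   # 0.5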
The entropy of the class distribution generated by the model (see class_complexity), divided by the number of instances in the input data.
evaluation measure
The entropy gain of the class distribution by the model over the prior distribution (see class_complexity_gain), divided by the number of instances in the input data.
evaluation measure
Unweighted(!) macro-average F-Measure. In macro-averaging, F-measure is computed locally over each category first and then the average over all categories is taken.
evaluation measure
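A sketch of macro-averaging from a multi-class confusion matrix, assuming NumPy; the helper name per_class_f1 is illustrative. The per-class F-measures are computed from the matrix first and then averaged, either unweighted (this measure) or weighted by class size (the weighted variants listed further below):

    # Sketch: unweighted and class-size-weighted macro-average F-measure
    import numpy as np

    def per_class_f1(cm):                           # illustrative helper
        tp = np.diag(cm).astype(float)
        precision = tp / cm.sum(axis=0)
        recall    = tp / cm.sum(axis=1)
        return 2 * precision * recall / (precision + recall)

    cm = np.array([[50,  5,  0],
                   [10, 30,  5],
                   [ 0,  5, 10]])                   # rows = actual, columns = predicted
    f1 = per_class_f1(cm)
    print(f1.mean())                                # unweighted macro average
    print((f1 * cm.sum(axis=1)).sum() / cm.sum())   # weighted by class size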
Kononenko and Bratko Information score, see kononenko_bratko_information_score, divided by the number of instances in the input data. See: Kononenko, I., Bratko, I.: Information-based evaluation…
evaluation measure
Unweighted(!) macro-average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The mean prior absolute error (MPAE) is the mean absolute error (see mean_absolute_error) of the prior (e.g., default class prediction). See: http://en.wikipedia.org/wiki/Mean_absolute_error
evaluation measure
The entropy of the class distribution of the prior (see prior_class_complexity), divided by the number of instances in the input data.
evaluation measure
Unweighted(!) macro-average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The macro weighted (by class size) average area_under_ROC_curve (AUROC). In macro-averaging, AUROC is computed locally over each category first and then the average over all categories is taken,…
evaluation measure
The macro weighted (by class size) average F-Measure. In macro-averaging, F-measure is computed locally over each category first and then the average over all categories is taken, weighted by the…
evaluation measure
The macro weighted (by class size) average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken, weighted by the…
evaluation measure
The macro weighted (by class size) average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken, weighted by the number of…
evaluation measure
The number of instances used for this evaluation.
evaluation measure
Default information about OS, JVM, installations, etc.
evaluation measure