Parameter | Description | Default |
---|---|---|
C | Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. | default: 1.0 |
class_weight | Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. .. versionadded:: 0.17 *class_weight='balanced'* | default: null |
dual | Dual (constrained) or primal (regularized) formulation. The dual formulation is only implemented for the l2 penalty with the liblinear solver; prefer dual=false when n_samples > n_features. | default: false |
fit_intercept | Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function | default: true |
intercept_scaling | Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes ``intercept_scaling * synthetic_feature_weight``. Note: the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased. | default: 1 |
l1_ratio | The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2 (see the usage sketch after this table). | default: null |
max_iter | Maximum number of iterations taken for the solvers to converge. | default: 1000 |
multi_class | {'auto', 'ovr', 'multinomial'}. If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'. .. versionadded:: 0.18 Stochastic Average Gradient descent solver for 'multinomial' case. .. versionchanged:: 0.22 Default changed from 'ovr' to 'auto' in 0.22. | default: "auto" |
n_jobs | Number of CPU cores used when parallelizing over classes if ``multi_class='ovr'``. This parameter is ignored when the ``solver`` is set to 'liblinear' regardless of whether 'multi_class' is specified or not. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>`. | default: null |
penalty | {'l1', 'l2', 'elasticnet', 'none'}. Used to specify the norm used in the penalization; 'elasticnet' is only supported by the 'saga' solver. | default: "l2" |
random_state | Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. See :term:`Glossary <random_state>`. | default: null |
solver | Algorithm to use in the optimization problem: {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}. | default: "lbfgs" |
tol | Tolerance for stopping criteria | default: 0.0001 |
verbose | For the liblinear and lbfgs solvers, set verbose to any positive number for verbosity. | default: 0 |
warm_start | When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. Useless for the liblinear solver (see the warm_start sketch after this table). See :term:`Glossary <warm_start>`. | default: false |
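
A minimal sketch of how several of these parameters combine in practice. It assumes scikit-learn is installed; the synthetic dataset from ``make_classification`` and the chosen values (``C=0.5``, ``l1_ratio=0.5``, the 90/10 class imbalance) are illustrative assumptions, not part of the table above.

```python
# Illustrative sketch: elastic-net-penalized logistic regression on an
# imbalanced synthetic dataset. All values below are assumptions for the demo.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.9, 0.1],  # imbalanced classes
                           random_state=0)

clf = LogisticRegression(
    penalty="elasticnet",     # combined L1/L2 penalty
    l1_ratio=0.5,             # 0 -> pure l2, 1 -> pure l1
    C=0.5,                    # smaller C -> stronger regularization
    solver="saga",            # the solver that supports 'elasticnet'
    class_weight="balanced",  # n_samples / (n_classes * np.bincount(y))
    max_iter=1000,
    tol=1e-4,
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))        # mean accuracy on the training data
```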
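
The ``warm_start`` row mentions reusing the previous solution as initialization. A small sketch under the same assumptions (synthetic data, an illustrative grid of ``C`` values) fits a regularization path, where each fit starts from the coefficients of the previous one:

```python
# Illustrative sketch: warm_start along a path of C values.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

clf = LogisticRegression(solver="lbfgs", warm_start=True, max_iter=1000)
for C in [0.01, 0.1, 1.0, 10.0]:
    clf.set_params(C=C)
    clf.fit(X, y)             # starts from the previous solution
    print(C, clf.coef_.ravel()[:3])
```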