Whether to add an intercept (default: false).
Create a model given the weights and intercept.
The dimension of training features.
In GeneralizedLinearModel, only a single linear predictor is allowed for both the weights and the intercept. However, for multinomial logistic regression with K possible outcomes, we train K-1 independent binary logistic regression models, which requires K-1 sets of linear predictors. As a result, the workaround here is that if more than one set of linear predictors is needed, we construct a bigger weights vector that can hold both the weights and the intercepts. If the intercepts are added, the dimension of weights will be (numOfLinearPredictor) * (numFeatures + 1); if they are not added, the dimension of weights will be (numOfLinearPredictor) * numFeatures. Thus, the intercepts are encapsulated into weights, and the value of intercept in GeneralizedLinearModel is left as zero.
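To make the layout concrete, the plain-Scala sketch below works through the dimension arithmetic for a hypothetical three-class model (numOfLinearPredictor = K - 1 = 2) with four features. The weights-then-intercept ordering inside each predictor's block is an assumption made for the example, not a documented contract:

```scala
// Illustrative sketch only: numFeatures, numOfLinearPredictor and addIntercept
// mirror the text above, and the "weights followed by intercept" ordering
// inside each block is an assumption made for this example.
object WeightsLayoutSketch {
  def main(args: Array[String]): Unit = {
    val numFeatures = 4
    val numOfLinearPredictor = 2 // K - 1 = 2 for a 3-class multinomial model
    val addIntercept = true

    // Total length of the combined weights vector, as described above.
    val blockSize = if (addIntercept) numFeatures + 1 else numFeatures
    val weightsSize = numOfLinearPredictor * blockSize
    println(s"weights length = $weightsSize") // 2 * (4 + 1) = 10

    // Slice the combined vector into one block per linear predictor.
    val combined = Array.tabulate(weightsSize)(_.toDouble)
    combined.grouped(blockSize).zipWithIndex.foreach { case (block, i) =>
      val w = if (addIntercept) block.init else block
      val b = if (addIntercept) block.last else 0.0
      println(s"predictor $i: weights = ${w.mkString("[", ", ", "]")}, intercept = $b")
    }
  }
}
```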
The optimizer to solve the problem.
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries starting from the initial weights provided.
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
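As a usage sketch of the two run overloads, assuming an existing SparkContext named sc (e.g. in spark-shell) and a toy two-feature dataset invented for illustration:

```scala
// Sketch only: assumes an existing SparkContext `sc`; the toy data and the
// initial weights are illustrative.
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

val training = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.1)),
  LabeledPoint(1.0, Vectors.dense(2.0, 1.0)),
  LabeledPoint(0.0, Vectors.dense(0.5, 1.3)),
  LabeledPoint(1.0, Vectors.dense(1.8, 0.7))
))

val lr = new LogisticRegressionWithSGD()

// Overload without initial weights: the algorithm chooses its own starting point.
val modelA = lr.run(training)

// Overload with caller-supplied initial weights: one value per feature here,
// since no intercept has been requested.
val modelB = lr.run(training, Vectors.dense(0.5, -0.5))
```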
Set if the algorithm should add an intercept. Default false. The default is false because adding the intercept incurs additional memory allocation.
Set if the algorithm should validate data before training. Default true.
Train a classification model for Binary Logistic Regression using Stochastic Gradient Descent. By default, L2 regularization is used, which can be changed via LogisticRegressionWithSGD.optimizer. NOTE: labels used in Logistic Regression should be {0, 1, ..., k - 1} for a k-class (multiclass) classification problem. Using LogisticRegressionWithLBFGS is recommended over this.
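As a hedged usage sketch, assuming a live SparkContext named sc and a toy dataset invented for illustration, the snippet below tunes the SGD optimizer through the optimizer member and also shows the recommended LogisticRegressionWithLBFGS alternative. The optimizer settings shown are arbitrary, not recommended values:

```scala
// Sketch only: assumes an existing SparkContext `sc`; the dataset and the
// optimizer settings are illustrative.
import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionWithSGD}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.L1Updater
import org.apache.spark.mllib.regression.LabeledPoint

val data = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(1.0, 0.2)),
  LabeledPoint(0.0, Vectors.dense(0.1, 0.9)),
  LabeledPoint(1.0, Vectors.dense(0.9, 0.1))
))

// SGD variant: regularization and step-size settings are tuned through `optimizer`.
val lrSGD = new LogisticRegressionWithSGD()
lrSGD.optimizer
  .setNumIterations(200)
  .setStepSize(0.1)
  .setRegParam(0.01)
  .setUpdater(new L1Updater()) // switch from the default L2 regularization to L1
val sgdModel = lrSGD.setIntercept(true).run(data)

// Recommended alternative: the L-BFGS based trainer.
val lbfgsModel = new LogisticRegressionWithLBFGS()
  .setIntercept(true)
  .run(data)

println(s"SGD:    intercept = ${sgdModel.intercept}, weights = ${sgdModel.weights}")
println(s"L-BFGS: intercept = ${lbfgsModel.intercept}, weights = ${lbfgsModel.weights}")
```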