loss

Class: ClassificationLinear

Classification loss for linear classification models

Description

L = loss(Mdl,X,Y) returns the classification losses for the binary, linear classification model Mdl using predictor data in X and corresponding class labels in Y. L contains classification error rates for each regularization strength in Mdl.

L = loss(Mdl,Tbl,ResponseVarName) returns the classification losses for the predictor data in Tbl and the true class labels in Tbl.ResponseVarName.

L = loss(Mdl,Tbl,Y) returns the classification losses for the predictor data in table Tbl and the true class labels in Y.

L = loss(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify that columns in the predictor data correspond to observations or specify the classification loss function.
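
As a quick orientation, the following minimal sketch shows the basic calling forms. It assumes a predictor matrix X and a label vector Y (all variable names here are hypothetical):

Mdl = fitclinear(X,Y);                  % train a binary linear model
L = loss(Mdl,X,Y);                      % default loss: classification error
L = loss(Mdl,X,Y,'LossFun','hinge');    % specify a built-in loss by name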

Note

If the predictor data in X or Tbl contains any missing values and LossFun is not set to "classifcost", "classiferror", or "mincost", the loss function can return NaN. For more details, see Loss Can Return NaN for Predictor Data with Missing Values.

Input Arguments

Binary, linear classification model, specified as a ClassificationLinear model object. You can create a ClassificationLinear model object using fitclinear.

Predictor data, specified as an n-by-p full or sparse matrix. This orientation of X indicates that rows correspond to individual observations, and columns correspond to individual predictor variables.

Note

If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time.

The length of Y and the number of observations in X must be equal.

Data Types: single | double

Class labels, specified as a categorical, character, or string array; logical or numeric vector; or cell array of character vectors.

  • The data type of Y must be the same as the data type of Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)

  • The distinct classes in Y must be a subset of Mdl.ClassNames.

  • If Y is a character array, then each element must correspond to one row of the array.

  • The length of Y must be equal to the number of observations in X or Tbl.

Data Types: categorical | char | string | logical | single | double | cell

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.

If you train Mdl using sample data contained in a table, then the input data for loss must also be in a table.
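
For illustration, here is a hedged sketch of the table workflow, assuming a full numeric predictor matrix Xmat and a label vector Y (the names Xmat, Tbl, and Label are hypothetical):

Tbl = array2table(Xmat);         % predictors as table variables
Tbl.Label = Y;                   % append the response variable
Mdl = fitclinear(Tbl,'Label');   % train on the table
L = loss(Mdl,Tbl,'Label');       % loss input must then also be a table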

Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.

If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.

The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss function name or function handle.

  • The following table lists the available loss functions. Specify one using its corresponding character vector or string scalar.

    Value            Description
    "binodeviance"   Binomial deviance
    "classifcost"    Observed misclassification cost
    "classiferror"   Misclassified rate in decimal
    "exponential"    Exponential loss
    "hinge"          Hinge loss
    "logit"          Logistic loss
    "mincost"        Minimal expected misclassification cost (for classification scores that are posterior probabilities)
    "quadratic"      Quadratic loss

    "mincost" is appropriate for classification scores that are posterior probabilities. For linear classification models, logistic regression learners return posterior probabilities as classification scores by default, but SVM learners do not (see predict).

  • To specify a custom loss function, use function handle notation. The function must have this form:

    lossvalue = lossfun(C,S,W,Cost)

    • The output argument lossvalue is a scalar.

    • You specify the function name (lossfun).

    • C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. n is the number of observations in Tbl or X, and K is the number of distinct classes (numel(Mdl.ClassNames)). The column order corresponds to the class order in Mdl.ClassNames. Create C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.

    • S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in Mdl.ClassNames.

    • W is an n-by-1 numeric vector of observation weights.

    • Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

Example: 'LossFun',@lossfun

Data Types: char | string | function_handle
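
As a concrete illustration of this form, here is a hedged sketch of a custom loss function (the name mycost is hypothetical; save it as mycost.m) that computes the weighted observed misclassification cost:

function lossvalue = mycost(C,S,W,Cost)
% C: n-by-K logical class-membership matrix; S: n-by-K scores
% W: n-by-1 observation weights; Cost: K-by-K misclassification costs
[~,trueIdx] = max(C,[],2);   % true class index of each observation
[~,predIdx] = max(S,[],2);   % predicted class (largest score)
perObs = Cost(sub2ind(size(Cost),trueIdx,predIdx)); % cost per observation
lossvalue = sum(W.*perObs)/sum(W);                  % weighted average cost
end

You can then pass the function handle as loss(Mdl,X,Y,'LossFun',@mycost).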

Predictor data observation dimension, specified as 'rows' or 'columns'.

Note

If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time. You cannot specify 'ObservationsIn','columns' for predictor data in a table.

Data Types: char | string
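
For example, a hedged sketch, assuming Mdl was trained on row-oriented data in X (names hypothetical):

Xcols = X';                                        % observations now in columns
L = loss(Mdl,Xcols,Y,'ObservationsIn','columns');  % equivalent result, often faster for sparse data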

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector or the name of a variable in Tbl.

  • If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of observations in X or Tbl.

  • If you specify Weights as the name of a variable in Tbl, then the name must be a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify Weights as 'W'. Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.

If you supply weights, then for each regularization strength, loss computes the weighted classification loss and normalizes weights to sum up to the value of the prior probability in the respective class.

Data Types: double | single
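
A hedged sketch of both forms (the names w, Tbl, and W are hypothetical):

w = ones(size(Y));                        % numeric weight vector
L = loss(Mdl,X,Y,'Weights',w);            % weights supplied as a vector
L = loss(Mdl,Tbl,'Label','Weights','W');  % weights stored in table variable Tbl.W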

Output Arguments

Classification losses, returned as a numeric scalar or row vector. The interpretation of L depends on Weights and LossFun.

L is the same size as Mdl.Lambda. L(j) is the classification loss of the linear classification model trained using the regularization strength Mdl.Lambda(j).
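
For example, a hedged sketch for a model with several regularization strengths (names hypothetical):

L = loss(Mdl,X,Y);              % 1-by-numel(Mdl.Lambda) row vector
[~,idx] = min(L);               % index of the lowest loss
bestLambda = Mdl.Lambda(idx);   % corresponding regularization strength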

Examples

Estimate Test-Sample Classification Loss

Load the NLP data set.

load nlpdata

X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. There are more than two classes in the data.

The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to those documentation web pages.

Ystats = Y == 'stats';

Train a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation. Specify to hold out 30% of the observations. Optimize the objective function using SpaRSA.

rng(1); % For reproducibility 
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','Holdout',0.30);
CMdl = CVMdl.Trained{1};

CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a ClassificationLinear model that the software trained using the training set.

Extract the training and test data from the partition definition.

trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);

Estimate the training- and test-sample classification error.

ceTrain = loss(CMdl,X(trainIdx,:),Ystats(trainIdx))
ceTrain = 1.3572e-04
ceTest = loss(CMdl,X(testIdx,:),Ystats(testIdx))
ceTest = 5.2804e-04

Because there is one regularization strength in CMdl, ceTrain and ceTest are numeric scalars.

Specify Custom Classification Loss

Load the NLP data set. Preprocess the data as in Estimate Test-Sample Classification Loss, and transpose the predictor data.

load nlpdata
Ystats = Y == 'stats';
X = X';

Train a binary, linear classification model. Specify to hold out 30% of the observations. Optimize the objective function using SpaRSA. Specify that the predictor observations correspond to columns.

rng(1); % For reproducibility 
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','Holdout',0.30,...
    'ObservationsIn','columns');
CMdl = CVMdl.Trained{1};

CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a ClassificationLinear model that the software trained using the training set.

Extract the training and test data from the partition definition.

trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);

Create an anonymous function that measures linear loss, that is,

$$L = \frac{\sum_j -w_j y_j f_j}{\sum_j w_j},$$

where $w_j$ is the weight for observation $j$, $y_j$ is the response for observation $j$ ($-1$ for the negative class, and 1 otherwise), and $f_j$ is the raw classification score for observation $j$. Custom loss functions must be written in a particular form. For rules on writing a custom loss function, see the LossFun name-value pair argument.

linearloss = @(C,S,W,Cost)sum(-W.*sum(S.*C,2))/sum(W);

Estimate the training- and test-sample classification loss using the linear loss function.

ceTrain = loss(CMdl,X(:,trainIdx),Ystats(trainIdx),'LossFun',linearloss,...
    'ObservationsIn','columns')
ceTrain = -7.8330
ceTest = loss(CMdl,X(:,testIdx),Ystats(testIdx),'LossFun',linearloss,...
    'ObservationsIn','columns')
ceTest = -7.7383

Find Good Lasso Penalty Using Classification Loss

To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare test-sample classification error rates.

Load the NLP data set. Preprocess the data as in Specify Custom Classification Loss.

load nlpdata
Ystats = Y == 'stats';
X = X'; 

rng(10); % For reproducibility
Partition = cvpartition(Ystats,'Holdout',0.30);
testIdx = test(Partition);
XTest = X(:,testIdx);
YTest = Ystats(testIdx);

Create a set of 11 logarithmically spaced regularization strengths from $10^{-6}$ through $10^{-0.5}$.

Lambda = logspace(-6,-0.5,11);

Train binary, linear classification models that use each of the regularization strengths. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function to 1e-8.

CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'CVPartition',Partition,'Learner','logistic','Solver','sparsa',...
    'Regularization','lasso','Lambda',Lambda,'GradientTolerance',1e-8)
CVMdl = 
  ClassificationPartitionedLinear
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 1
              Partition: [1x1 cvpartition]
             ClassNames: [0 1]
         ScoreTransform: 'none'


Extract the trained linear classification model.

Mdl = CVMdl.Trained{1}
Mdl = 
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'logit'
              Beta: [34023x11 double]
              Bias: [-12.1623 -12.1623 -12.1623 -12.1623 -12.1623 -6.2503 -5.0651 -4.2165 -3.3990 -3.2452 -2.9783]
            Lambda: [1.0000e-06 3.5481e-06 1.2589e-05 4.4668e-05 1.5849e-04 5.6234e-04 0.0020 0.0071 0.0251 0.0891 0.3162]
           Learner: 'logistic'


Mdl is a ClassificationLinear model object. Because Lambda is a sequence of regularization strengths, you can think of Mdl as 11 models, one for each regularization strength in Lambda.

Estimate the test-sample classification error.

ce = loss(Mdl,X(:,testIdx),Ystats(testIdx),'ObservationsIn','columns');

Because there are 11 regularization strengths, ce is a 1-by-11 vector of classification error rates.

Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier. For each regularization strength, train a linear classification model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.

Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8);
numNZCoeff = sum(Mdl.Beta~=0);

In the same figure, plot the test-sample error rates and frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.

figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(ce),...
    log10(Lambda),log10(numNZCoeff + 1)); 
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} classification error')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
title('Test-Sample Statistics')
hold off

Choose the index of the regularization strength that balances predictor variable sparsity and low classification error. In this case, a value between $10^{-4}$ and $10^{-1}$ should suffice.

idxFinal = 7;

Select the model from Mdl with the chosen regularization strength.

MdlFinal = selectModels(Mdl,idxFinal);

MdlFinal is a ClassificationLinear model containing one regularization strength. To estimate labels for new observations, pass MdlFinal and the new data to predict.

Algorithms

By default, observation weights are prior class probabilities. If you supply weights using Weights, then the software normalizes them to sum to the prior probabilities in the respective classes. The software uses the renormalized weights to estimate the weighted classification loss.
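
As a hedged illustration of this normalization (the names w, yIdx, prior, and K are hypothetical), the renormalized weights in each class sum to that class's prior probability:

% w: raw weights; yIdx: class index (1..K) per observation;
% prior: 1-by-K vector of prior class probabilities
wNorm = w;
for k = 1:K
    inClass = (yIdx == k);
    wNorm(inClass) = prior(k) * w(inClass) / sum(w(inClass));
end
% sum(wNorm(yIdx == k)) now equals prior(k) for each class k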

Version History

Introduced in R2016a
