# addMetrics

## Description

`rocmetrics` computes the false positive rates (FPR), true positive rates (TPR), and any additional metrics specified by the `AdditionalMetrics` name-value argument. After creating a `rocmetrics` object, you can compute additional classification performance metrics by using the `addMetrics` function.

`UpdatedROCObj = addMetrics(rocObj,metrics)` computes additional classification performance metrics specified in `metrics` using the classification model information stored in the `rocmetrics` object `rocObj`.

`UpdatedROCObj` contains all the information in `rocObj` plus the additional performance metrics computed by `addMetrics`. The function attaches the additional computed metrics (`metrics`) as new variables in the table of the `Metrics` property.

If you compute confidence intervals when you create `rocObj`, the `addMetrics` function also computes confidence intervals for the additional `metrics`. The new variables in the `Metrics` property then contain a three-column matrix in which the first column corresponds to the metric values, and the second and third columns correspond to the lower and upper bounds, respectively. Using confidence intervals requires Statistics and Machine Learning Toolbox™.
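For instance, a minimal sketch of this workflow, assuming you request bootstrap confidence intervals with the `NumBootstraps` name-value argument when creating the object (the variables `trueLabels`, `scores`, and `classNames` are assumed to exist in the workspace):

```
% Sketch: request bootstrap confidence intervals at creation, then add a
% metric. The new PositivePredictiveValue variable is then a three-column
% [value, lower, upper] matrix rather than a single column.
rocObj = rocmetrics(trueLabels,scores,classNames,NumBootstraps=100);
rocObj = addMetrics(rocObj,"PositivePredictiveValue");
ppvWithCI = rocObj.Metrics.PositivePredictiveValue;  % N-by-3 matrix
```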

## Examples

### Compute Additional Metrics

Compute the performance metrics (FPR, TPR, and expected cost) for a multiclass classification problem when you create a `rocmetrics` object. Compute additional metrics, the positive predictive value (PPV) and the negative predictive value (NPV), and add them to the object.

Load a sample of true labels and the prediction scores for a classification problem. For this example, there are five classes: daisy, dandelion, roses, sunflowers, and tulips. The class names are stored in `classNames`. The scores are the softmax prediction scores generated using the `predict` function. `scores` is an N-by-K array, where N is the number of observations and K is the number of classes. The column order of `scores` follows the class order stored in `classNames`.

```
load('flowersDataResponses.mat')
scores = flowersData.scores;
trueLabels = flowersData.trueLabels;
classNames = flowersData.classNames;
```

Create a `rocmetrics` object by using the true labels and the classification scores. Specify the column order of `scores` using `classNames`. By default, `rocmetrics` computes the FPR and TPR. Specify `AdditionalMetrics="ExpectedCost"` to compute the expected cost as well.

```
rocObj = rocmetrics(trueLabels,scores,classNames, ...
    AdditionalMetrics="ExpectedCost");
```

The table in the `Metrics` property of `rocObj` contains performance metric values for each of the classes, vertically concatenated according to the class order. Find and display the top rows for the second class in the table.

```
idx = rocObj.Metrics.ClassName == classNames(2);
head(rocObj.Metrics(idx,:))
```

```
    ClassName    Threshold    FalsePositiveRate    TruePositiveRate    ExpectedCost
    _________    _________    _________________    ________________    ____________

    dandelion        1                0                  0               0.045287
    dandelion        1                0                  0.23889         0.034469
    dandelion        1                0                  0.26111         0.033462
    dandelion        1                0                  0.27222         0.032959
    dandelion        1                0                  0.28889         0.032204
    dandelion        1                0                  0.29444         0.031953
    dandelion        1                0                  0.3             0.031701
    dandelion        1                0                  0.31111         0.031198
```

The table in `Metrics` contains the variables for the class names, threshold, false positive rate, true positive rate, and expected cost (the additional metric).

After creating a `rocmetrics` object, you can compute additional metrics using the classification model information stored in the object. Compute the PPV and NPV by using the `addMetrics` function. To overwrite the input argument `rocObj`, assign the output of `addMetrics` to the input.

```
rocObj = addMetrics(rocObj,["PositivePredictiveValue","NegativePredictiveValue"]);
```

Display the `Metrics` property for the top rows.

```
head(rocObj.Metrics(idx,:))
```

```
    ClassName    Threshold    FalsePositiveRate    TruePositiveRate    ExpectedCost    PositivePredictiveValue    NegativePredictiveValue
    _________    _________    _________________    ________________    ____________    _______________________    _______________________

    dandelion        1                0                  0               0.045287               NaN                       0.7551
    dandelion        1                0                  0.23889         0.034469                 1                       0.80202
    dandelion        1                0                  0.26111         0.033462                 1                       0.80669
    dandelion        1                0                  0.27222         0.032959                 1                       0.80904
    dandelion        1                0                  0.28889         0.032204                 1                       0.81259
    dandelion        1                0                  0.29444         0.031953                 1                       0.81378
    dandelion        1                0                  0.3             0.031701                 1                       0.81498
    dandelion        1                0                  0.31111         0.031198                 1                       0.81738
```

The table in `Metrics` now includes the `PositivePredictiveValue` and `NegativePredictiveValue` variables in the last two columns, in the order you specified. Note that the positive predictive value (`PPV = TP/(TP+FP)`) is `NaN` for the reject-all threshold (largest threshold), and the negative predictive value (`NPV = TN/(TN+FN)`) is `NaN` for the accept-all threshold (lowest threshold). `TP`, `FP`, `TN`, and `FN` represent the number of true positives, false positives, true negatives, and false negatives, respectively.

## Input Arguments

`rocObj` — Object evaluating classification performance
`rocmetrics` object

Object evaluating classification performance, specified as a `rocmetrics` object.

`metrics` — Additional model performance metrics
character vector | string array | function handle | cell array

Additional model performance metrics to compute, specified as a character vector or string scalar of a built-in metric name, a string array of names, a function handle (`@metricName`), or a cell array of names or function handles. A `rocmetrics` object always computes the false positive rates (FPR) and the true positive rates (TPR) to obtain a ROC curve, so you do not have to specify FPR and TPR.

Built-in metrics — Specify one of the following built-in metric names by using a character vector or string scalar. You can specify more than one by using a string array.

- `"TruePositives"` or `"tp"` — Number of true positives (TP)
- `"FalseNegatives"` or `"fn"` — Number of false negatives (FN)
- `"FalsePositives"` or `"fp"` — Number of false positives (FP)
- `"TrueNegatives"` or `"tn"` — Number of true negatives (TN)
- `"SumOfTrueAndFalsePositives"` or `"tp+fp"` — Sum of TP and FP
- `"RateOfPositivePredictions"` or `"rpp"` — Rate of positive predictions (RPP), `(TP+FP)/(TP+FN+FP+TN)`
- `"RateOfNegativePredictions"` or `"rnp"` — Rate of negative predictions (RNP), `(TN+FN)/(TP+FN+FP+TN)`
- `"Accuracy"` or `"accu"` — Accuracy, `(TP+TN)/(TP+FN+FP+TN)`
- `"FalseNegativeRate"`, `"fnr"`, or `"miss"` — False negative rate (FNR), or miss rate, `FN/(TP+FN)`
- `"TrueNegativeRate"`, `"tnr"`, or `"spec"` — True negative rate (TNR), or specificity, `TN/(TN+FP)`
- `"PositivePredictiveValue"`, `"ppv"`, `"prec"`, or `"precision"` — Positive predictive value (PPV), or precision, `TP/(TP+FP)`
- `"NegativePredictiveValue"` or `"npv"` — Negative predictive value (NPV), `TN/(TN+FN)`
- `"ExpectedCost"` or `"ecost"` — Expected cost, `(TP*cost(P|P)+FN*cost(N|P)+FP*cost(P|N)+TN*cost(N|N))/(TP+FN+FP+TN)`, where `cost` is a 2-by-2 misclassification cost matrix containing `[0,cost(N|P);cost(P|N),0]`. `cost(N|P)` is the cost of misclassifying a positive class (`P`) as a negative class (`N`), and `cost(P|N)` is the cost of misclassifying a negative class as a positive class. The software converts the `K`-by-`K` matrix specified by the `Cost` name-value argument of `rocmetrics` to a 2-by-2 matrix for each one-versus-all binary problem. For details, see Misclassification Cost Matrix.
- `"f1score"` — F1 score, `2*TP/(2*TP+FP+FN)`

You can obtain all of the previous metrics by specifying `"all"`. You cannot specify `"all"` in conjunction with any other metric.

The software computes the scale vector using the prior class probabilities (`Prior`) and the number of classes in `Labels`, and then scales the performance metrics according to this scale vector. For details, see Performance Metrics.

Custom metric — Specify a custom metric by using a function handle. A custom function that returns a performance metric must have this form:

```
metric = customMetric(C,scale,cost)
```

The output argument `metric` is a scalar value. A custom metric is a function of the confusion matrix (`C`), scale vector (`scale`), and cost matrix (`cost`). The software finds these input values for each one-versus-all binary problem. For details, see Performance Metrics.

- `C` is a 2-by-2 confusion matrix consisting of `[TP,FN;FP,TN]`.
- `scale` is a 2-by-1 scale vector.
- `cost` is a 2-by-2 misclassification cost matrix.

The software does not support cross-validation for a custom metric. Instead, you can specify to use bootstrap when you create a `rocmetrics` object.
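As an illustration, here is a minimal sketch of a custom metric. The metric (balanced accuracy) and its implementation are hypothetical examples, not part of the built-in set; it uses only the confusion matrix and ignores the `scale` and `cost` inputs, which the signature nevertheless requires:

```
% Hypothetical custom metric: balanced accuracy computed from the
% 2-by-2 confusion matrix C = [TP,FN;FP,TN]. The scale and cost inputs
% are required by the customMetric signature but unused here.
balAcc = @(C,scale,cost) ...
    0.5*(C(1,1)/(C(1,1)+C(1,2)) + C(2,2)/(C(2,2)+C(2,1)));

% Assuming rocObj is an existing rocmetrics object, the metric values
% appear in the Metrics table as a variable named CustomMetric1.
rocObj = addMetrics(rocObj,balAcc);
```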

Note that the positive predictive value (PPV) is `NaN` for the reject-all threshold, for which `TP` = `FP` = `0`, and the negative predictive value (NPV) is `NaN` for the accept-all threshold, for which `TN` = `FN` = `0`. For more details, see Thresholds, Fixed Metric, and Fixed Metric Values.

**Example:** `["Accuracy","PositivePredictiveValue"]`

**Example:** `{"Accuracy",@m1,@m2}` specifies the accuracy metric and the custom metrics `m1` and `m2` as additional metrics. `addMetrics` stores the custom metric values as variables named `CustomMetric1` and `CustomMetric2` in the `Metrics` property.

**Data Types:** `char` | `string` | `cell` | `function_handle`

## Output Arguments

`UpdatedROCObj` — Object evaluating classification performance
`rocmetrics` object

Object evaluating classification performance, returned as a `rocmetrics` object.

To overwrite the input argument `rocObj`, assign the output of `addMetrics` to `rocObj`:

```
rocObj = addMetrics(rocObj,metrics);
```

## Version History

**Introduced in R2022b**

### R2024b: Additional metrics available

`addMetrics` has new metrics:

- `"f1score"`, which computes the F1 score.
- `"precision"`, which is the same as `"ppv"` and `"prec"`.
- `"all"`, which computes all supported metrics. You cannot use `"all"` in combination with any other metric.
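For instance, a sketch using the new options, assuming an existing `rocmetrics` object `rocObj`:

```
% Add the F1 score introduced in R2024b.
rocObj = addMetrics(rocObj,"f1score");

% Or compute every supported metric at once. "all" cannot be combined
% with any other metric name.
rocObj = addMetrics(rocObj,"all");
```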
