
lof

Create local outlier factor model for anomaly detection

Since R2022b

    Description

    Use the lof function to create a local outlier factor model for outlier detection and novelty detection.

    • Outlier detection (detecting anomalies in training data) — Use the output argument tf of lof to identify anomalies in training data.

    • Novelty detection (detecting anomalies in new data with uncontaminated training data) — Create a LocalOutlierFactor object by passing uncontaminated training data (data with no outliers) to lof. Detect anomalies in new data by passing the object and the new data to the object function isanomaly.

    LOFObj = lof(Tbl) returns a LocalOutlierFactor object for predictor data in the table Tbl.


    LOFObj = lof(X) uses predictor data in the matrix X.

    LOFObj = lof(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, ContaminationFraction=0.1 instructs the function to process 10% of the training data as anomalies.

    [LOFObj,tf] = lof(___) also returns the logical array tf, whose elements are true when an anomaly is detected in the corresponding row of Tbl or X.

    [LOFObj,tf,scores] = lof(___) also returns an anomaly score, which is a local outlier factor value, for each observation in Tbl or X. A score value less than or close to 1 indicates a normal observation, and a value greater than 1 can indicate an anomaly.


    Examples


    Detect outliers (anomalies in training data) by using the lof function.

    Load the sample data set NYCHousing2015.

    load NYCHousing2015

    The data set includes 10 variables with information on the sales of properties in New York City in 2015. Display a summary of the data set.

    summary(NYCHousing2015)
    NYCHousing2015: 91446×10 table
    
    Variables:
    
        BOROUGH: double
        NEIGHBORHOOD: cell array of character vectors
        BUILDINGCLASSCATEGORY: cell array of character vectors
        RESIDENTIALUNITS: double
        COMMERCIALUNITS: double
        LANDSQUAREFEET: double
        GROSSSQUAREFEET: double
        YEARBUILT: double
        SALEPRICE: double
        SALEDATE: datetime
    
    Statistics for applicable variables:
    
                                 NumMissing          Min              Median               Max               Mean               Std      
    
        BOROUGH                      0                     1                  3                  5             2.8431            1.3343  
        NEIGHBORHOOD                 0                                                                                                   
        BUILDINGCLASSCATEGORY        0                                                                                                   
        RESIDENTIALUNITS             0                     0                  1               8759             2.1789           32.2738  
        COMMERCIALUNITS              0                     0                  0                612             0.2201            3.2991  
        LANDSQUAREFEET               0                     0               1700           29305534         2.8752e+03        1.0118e+05  
        GROSSSQUAREFEET              0                     0               1056            8942176         4.6598e+03        4.3098e+04  
        YEARBUILT                    0                     0               1939               2016         1.7951e+03          526.9998  
        SALEPRICE                    0                     0             333333         4.1111e+09         1.2364e+06        2.0130e+07  
        SALEDATE                     0           01-Jan-2015        09-Jul-2015        31-Dec-2015        07-Jul-2015        2470:47:17  
    

    Remove nonnumeric variables from NYCHousing2015. The data type of the BOROUGH variable is double, but it is a categorical variable indicating the borough in which the property is located. Remove the BOROUGH variable as well.

    NYCHousing2015 = NYCHousing2015(:,vartype("numeric"));
    NYCHousing2015.BOROUGH = [];

    Train a local outlier factor model for NYCHousing2015. Specify the fraction of anomalies in the training observations as 0.01.

    [Mdl,tf,scores] = lof(NYCHousing2015,ContaminationFraction=0.01);

    Mdl is a LocalOutlierFactor object. lof also returns the anomaly indicators (tf) and anomaly scores (scores) for the training data NYCHousing2015.

    Plot a histogram of the score values. Create a vertical line at the score threshold corresponding to the specified fraction.

    h = histogram(scores,NumBins=50);
    h.Parent.YScale = 'log';
    xline(Mdl.ScoreThreshold,"r-",["Threshold" Mdl.ScoreThreshold]) 

    [Figure: histogram of the anomaly scores on a log y-scale, with a vertical line at the score threshold]

    If you want to identify anomalies with a different contamination fraction (for example, 0.05), you can train a new local outlier factor model.

    [newMdl,newtf,scores] = lof(NYCHousing2015,ContaminationFraction=0.05);

    Note that changing the contamination fraction changes the anomaly indicators only, and does not affect the anomaly scores. Therefore, if you do not want to compute the anomaly scores again by using lof, you can obtain a new anomaly indicator with the existing score values.

    Change the fraction of anomalies in the training data to 0.05.

    newContaminationFraction = 0.05;

    Find a new score threshold by using the quantile function.

    newScoreThreshold = quantile(scores,1-newContaminationFraction)
    newScoreThreshold = 
    6.7493
    

    Obtain a new anomaly indicator.

    newtf = scores > newScoreThreshold;

    Create a LocalOutlierFactor object for uncontaminated training observations by using the lof function. Then detect novelties (anomalies in new data) by passing the object and the new data to the object function isanomaly.

    Load the 1994 census data stored in census1994.mat. The data set consists of demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.

    load census1994

    census1994 contains the training data set adultdata and the test data set adulttest. The predictor data must be either all continuous or all categorical to train a LocalOutlierFactor object. Remove nonnumeric variables from adultdata and adulttest.

    adultdata = adultdata(:,vartype("numeric"));
    adulttest = adulttest(:,vartype("numeric"));

    Train a local outlier factor model for adultdata. Assume that adultdata does not contain outliers.

    [Mdl,tf,s] = lof(adultdata);

    Mdl is a LocalOutlierFactor object. lof also returns the anomaly indicators tf and anomaly scores s for the training data adultdata. If you do not specify the ContaminationFraction name-value argument as a value greater than 0, then lof treats all training observations as normal observations, meaning all the values in tf are logical 0 (false). The function sets the score threshold to the maximum score value. Display the threshold value.

    Mdl.ScoreThreshold
    ans = 
    28.6719
    

    Find anomalies in adulttest by using the trained local outlier factor model.

    [tf_test,s_test] = isanomaly(Mdl,adulttest);

    The isanomaly function returns the anomaly indicators tf_test and scores s_test for adulttest. By default, isanomaly identifies observations with scores above the threshold (Mdl.ScoreThreshold) as anomalies.

    Create histograms for the anomaly scores s and s_test. Create a vertical line at the threshold of the anomaly scores.

    h1 = histogram(s,NumBins=50,Normalization="probability");
    hold on
    h2 = histogram(s_test,h1.BinEdges,Normalization="probability");
    xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))
    h1.Parent.YScale = 'log';
    h2.Parent.YScale = 'log';
    legend("Training Data","Test Data",Location="north")
    hold off

    [Figure: overlaid histograms of training and test anomaly scores on a log y-scale, with a vertical line at the score threshold]

    Display the observation index of the anomalies in the test data.

    find(tf_test)
    ans =
    
      0×1 empty double column vector
    

    The anomaly score distribution of the test data is similar to that of the training data, so isanomaly does not detect any anomalies in the test data with the default threshold value. You can specify a different threshold value by using the ScoreThreshold name-value argument. For an example, see Specify Anomaly Score Threshold.

    Input Arguments


    Predictor data, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    The predictor data must be either all continuous or all categorical. If you specify Tbl, the lof function assumes that a variable is categorical if it is a logical vector, unordered categorical vector, character array, string array, or cell array of character vectors. If Tbl includes both continuous and categorical values, and you want to identify all predictors in Tbl as categorical, you must specify CategoricalPredictors as "all".

    To use a subset of the variables in Tbl, specify the variables by using the PredictorNames name-value argument.

    Data Types: table

    Predictor data, specified as a numeric matrix. Each row of X corresponds to one observation, and each column corresponds to one predictor variable.

    The predictor data must be either all continuous or all categorical. If you specify X, the lof function assumes that all predictors are continuous. To identify all predictors in X as categorical, specify CategoricalPredictors as "all".

    You can use the PredictorNames name-value argument to assign names to the predictor variables in X.

    Data Types: single | double

    Name-Value Arguments


    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: SearchMethod="exhaustive",Distance="minkowski" uses the exhaustive search algorithm with the Minkowski distance.

    Maximum number of data points in the leaf node of the Kd-tree, specified as a positive integer value. This argument is valid only when SearchMethod is "kdtree".

    Example: BucketSize=40

    Data Types: single | double

    Size of the Gram matrix in megabytes, specified as a positive scalar or "maximal". For the definition of the Gram matrix, see Algorithms. The lof function can use a Gram matrix when the Distance name-value argument is "fasteuclidean".

    When CacheSize is "maximal", lof attempts to allocate enough memory for an entire intermediate matrix whose size is MX-by-MX, where MX is the number of rows of the input data, X or Tbl. CacheSize does not have to be large enough for an entire intermediate matrix, but must be at least large enough to hold an MX-by-1 vector. Otherwise, lof uses the "euclidean" distance.

    If Distance is "fasteuclidean" and CacheSize is too large or "maximal", lof might attempt to allocate a Gram matrix that exceeds the available memory. In this case, MATLAB® issues an error.

    Example: CacheSize="maximal"

    Data Types: double | char | string
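
    As a minimal sketch (the data here is random and purely illustrative), you might enable the fast Euclidean algorithm with a bounded cache like this:

    ```matlab
    rng("default")                        % reproducible illustrative data
    X = randn(1000,200);                  % 1000 observations, 200 predictors

    % "fasteuclidean" is valid only with the exhaustive search method.
    % CacheSize caps the memory (in MB) available for the intermediate
    % Gram matrix; if the cache is too small, lof falls back to "euclidean".
    [Mdl,tf,scores] = lof(X,Distance="fasteuclidean", ...
        SearchMethod="exhaustive",CacheSize=64);
    ```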

    Categorical predictor flag, specified as one of the following:

    • "all" — All predictors are categorical. By default, lof uses the Hamming distance ("hamming") for the Distance name-value argument.

    • [] — No predictors are categorical, that is, all predictors are continuous (numeric). In this case, the default Distance value is "euclidean".

    The predictor data for lof must be either all continuous or all categorical.

    • If the predictor data is in a table (Tbl), lof assumes that a variable is categorical if it is a logical vector, unordered categorical vector, character array, string array, or cell array of character vectors. If Tbl includes both continuous and categorical values, and you want to identify all predictors in Tbl as categorical, you must specify CategoricalPredictors as "all".

    • If the predictor data is a matrix (X), lof assumes that all predictors are continuous. To identify all predictors in X as categorical, specify CategoricalPredictors as "all".

    lof encodes categorical variables as numeric variables by assigning a positive integer value to each category. When you use categorical predictors, ensure that you use an appropriate distance metric (Distance).

    Example: CategoricalPredictors="all"
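
    For instance, here is a small sketch with an all-categorical table (the variables are invented for illustration); lof then defaults to the Hamming distance:

    ```matlab
    % All-categorical predictor table. lof already treats categorical
    % vectors as categorical; CategoricalPredictors="all" makes the
    % intent explicit.
    Color = categorical(["red";"blue";"red";"green";"blue";"red"]);
    Shape = categorical(["square";"square";"circle";"circle";"square";"square"]);
    Tbl = table(Color,Shape);

    [Mdl,tf,scores] = lof(Tbl,CategoricalPredictors="all");
    Mdl.Distance                          % "hamming" by default in this case
    ```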

    Fraction of anomalies in the training data, specified as a numeric scalar in the range [0,1].

    • If the ContaminationFraction value is 0 (default), then lof treats all training observations as normal observations, and sets the score threshold (ScoreThreshold property value of LOFObj) to the maximum value of scores.

    • If the ContaminationFraction value is in the range (0,1], then lof determines the threshold value so that the function detects the specified fraction of training observations as anomalies.

    Example: ContaminationFraction=0.1

    Data Types: single | double
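
    For example, this sketch (with planted outliers in synthetic data) shows how the threshold tracks the requested fraction:

    ```matlab
    rng(1)                                % reproducible illustrative data
    X = [randn(190,2); 8 + randn(10,2)];  % 10 planted outliers among 200 rows

    f = 0.05;
    [Mdl,tf,scores] = lof(X,ContaminationFraction=f);

    % lof chooses ScoreThreshold so that the fraction f of training rows
    % is flagged; the flagged rows are those whose score exceeds it.
    nnz(tf)/numel(tf)                     % approximately f
    ```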

    Covariance matrix for the Mahalanobis distance, specified as a positive definite matrix of scalar values. This argument is valid only when Distance is "mahalanobis".

    The default value is the covariance matrix computed from the predictor data (Tbl or X) after the function excludes rows with duplicated values and missing values.

    Data Types: single | double

    Distance metric, specified as a character vector or string scalar.

    • If all the predictor variables are continuous (numeric) variables, then you can specify one of these distance metrics.

      • "euclidean" — Euclidean distance

      • "fasteuclidean" — Euclidean distance using an algorithm that usually saves time when the number of elements in a data point exceeds 10. See Algorithms. "fasteuclidean" applies only to the "exhaustive" SearchMethod.

      • "mahalanobis" — Mahalanobis distance. You can specify the covariance matrix by using the Cov name-value argument.

      • "minkowski" — Minkowski distance. You can specify the exponent by using the Exponent name-value argument.

      • "chebychev" — Chebychev distance (maximum coordinate difference)

      • "cityblock" — City block distance

      • "correlation" — One minus the sample correlation between observations (treated as sequences of values)

      • "cosine" — One minus the cosine of the included angle between observations (treated as vectors)

      • "spearman" — One minus the sample Spearman's rank correlation between observations (treated as sequences of values)

      Note

      If you specify one of these distance metrics for categorical predictors, then the software treats each categorical predictor as a numeric variable for the distance computation, with each category represented by a positive integer. The Distance value does not affect the CategoricalPredictors property of the trained model.

    • If all the predictor variables are categorical variables, then you can specify one of these distance metrics.

      • "hamming" — Hamming distance, which is the percentage of coordinates that differ

      • "jaccard" — One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ

      Note

      If you specify one of these distance metrics for continuous (numeric) predictors, then the software treats each continuous predictor as a categorical variable for the distance computation. This option does not change the CategoricalPredictors value.

    The default value is "euclidean" if all the predictor variables are continuous, and "hamming" if all the predictor variables are categorical.

    If you want to use the Kd-tree algorithm (SearchMethod="kdtree"), then Distance must be "euclidean", "cityblock", "minkowski", or "chebychev".

    For more information on the various distance metrics, see Distance Metrics.

    Example: Distance="jaccard"

    Data Types: char | string

    Minkowski distance exponent, specified as a positive scalar value. This argument is valid only when Distance is "minkowski".

    Example: Exponent=3

    Data Types: single | double

    Tie inclusion flag, specified as logical 0 (false) or 1 (true), indicating whether the software includes all the neighbors whose distance values are equal to the kth smallest distance. If IncludeTies is true, the software includes all of these neighbors. Otherwise, the software includes exactly k neighbors.

    Example: IncludeTies=true

    Data Types: logical

    Number of nearest neighbors in the predictor data (Tbl or X) to find for computing the local outlier factor values, specified as a positive integer value.

    The default value is min(20,n-1), where n is the number of unique rows in the predictor data.

    Example: NumNeighbors=3

    Data Types: single | double
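
    A brief sketch of overriding the default neighborhood size (the data is random, for illustration only):

    ```matlab
    rng(2)                                % reproducible illustrative data
    X = randn(100,3);                     % 100 observations, 3 predictors

    % The default here would be min(20,n-1) = 20 neighbors; request a
    % tighter neighborhood instead.
    Mdl = lof(X,NumNeighbors=5);
    Mdl.NumNeighbors
    ```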


    Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of PredictorNames depends on how you supply the predictor data.

    • If you supply Tbl, then you can use PredictorNames to specify which predictor variables to use. That is, lof uses only the predictor variables in PredictorNames.

      • PredictorNames must be a subset of Tbl.Properties.VariableNames.

      • By default, PredictorNames contains the names of all predictor variables in Tbl.

    • If you supply X, then you can use PredictorNames to assign names to the predictor variables in X.

      • The order of the names in PredictorNames must correspond to the column order of X. That is, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.

      • By default, PredictorNames is {'x1','x2',...}.

    Data Types: string | cell
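
    For matrix input, a minimal sketch (the names are invented for illustration):

    ```matlab
    rng(3)                                % reproducible illustrative data
    X = randn(50,3);                      % 50 observations, 3 predictors

    % Assign one name per column of X; the number of names must equal
    % size(X,2).
    Mdl = lof(X,PredictorNames=["Pressure" "Temperature" "FlowRate"]);
    Mdl.PredictorNames
    ```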

    Nearest neighbor search method, specified as "kdtree" or "exhaustive".

    • "kdtree" — This method uses the Kd-tree algorithm to find nearest neighbors. This option is valid when the distance metric (Distance) is one of the following:

      • "euclidean" — Euclidean distance

      • "cityblock" — City block distance

      • "minkowski" — Minkowski distance

      • "chebychev" — Chebychev distance

    • "exhaustive" — This method uses the exhaustive search algorithm to find nearest neighbors.

      • When you compute local outlier factor values for the predictor data (Tbl or X), the lof function finds nearest neighbors by computing the distance values from all points in the predictor data to each point in the predictor data.

      • When you compute local outlier factor values for new data Xnew using the isanomaly function, the function finds nearest neighbors by computing the distance values from all points in the predictor data (Tbl or X) to each point in Xnew.

    The default value is "kdtree" if the predictor data has 10 or fewer columns, the data is not sparse, and the distance metric (Distance) is valid for the Kd-tree algorithm. Otherwise, the default value is "exhaustive".
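
    As an illustrative sketch, you can force the exhaustive algorithm in a case where the Kd-tree would otherwise be the default (few columns, a Kd-tree-compatible metric):

    ```matlab
    rng(4)                                % reproducible illustrative data
    X = randn(500,4);                     % 4 columns: "kdtree" would be the default

    % Request the exhaustive search explicitly, here with the city block metric.
    Mdl = lof(X,SearchMethod="exhaustive",Distance="cityblock");
    ```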

    Output Arguments


    Trained local outlier factor model, returned as a LocalOutlierFactor object.

    You can use the object function isanomaly with LOFObj to find anomalies in new data.

    Anomaly indicators, returned as a logical column vector. An element of tf is logical 1 (true) when the observation in the corresponding row of Tbl or X is an anomaly, and logical 0 (false) otherwise. tf has the same length as Tbl or X.

    lof identifies observations with scores above the threshold (ScoreThreshold property value of LOFObj) as anomalies. The function determines the threshold value to detect the specified fraction (ContaminationFraction name-value argument) of training observations as anomalies.

    Anomaly scores (local outlier factor values), returned as a numeric column vector whose values are nonnegative. scores has the same length as Tbl or X, and each element of scores contains an anomaly score for the observation in the corresponding row of Tbl or X. A score value less than or close to 1 indicates a normal observation, and a value greater than 1 can indicate an anomaly.
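
    The relationship between the three outputs can be sketched as follows (synthetic data for illustration):

    ```matlab
    rng(5)                                % reproducible illustrative data
    X = [randn(98,2); 6 6; -6 -6];        % two planted outliers

    [Mdl,tf,scores] = lof(X,ContaminationFraction=0.02);

    % tf flags exactly the rows whose score exceeds the model threshold.
    isequal(tf,scores > Mdl.ScoreThreshold)
    ```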

    More About


    Algorithms



    Version History

    Introduced in R2022b
