
predict

Compute deep learning network output for inference

Since R2019b

Description

Some deep learning layers behave differently during training and inference (prediction). For example, during training, dropout layers randomly set input elements to zero to help prevent overfitting, but during inference, dropout layers do not change the input.

To compute network outputs for inference, use the predict function. To compute network outputs for training, use the forward function.

Tip

For prediction with SeriesNetwork and DAGNetwork objects, see predict.


Y = predict(net,X) returns the network output Y during inference given the input data X and the network net with a single input and a single output.

Y = predict(net,X1,...,XM) returns the network output Y during inference given the M inputs X1, ...,XM and the network net that has M inputs and a single output.

[Y1,...,YN] = predict(___) returns the N outputs Y1, …, YN during inference for networks that have N outputs using any of the previous syntaxes.

[Y1,...,YK] = predict(___,Outputs=layerNames) returns the outputs Y1, …, YK during inference for the specified layers using any of the previous syntaxes.

[___] = predict(___,Name=Value) specifies additional options using one or more name-value arguments.

[___,state] = predict(___) also returns the updated network state.
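
For illustration, here is a minimal sketch of the single-input syntax and a call that requests outputs from named layers. It assumes net is a trained dlnetwork with an image input layer; the layer names shown are placeholders.

X = rand(28,28,1,16,"single");   % batch of 16 random 28-by-28 grayscale images
Y = predict(net,X);              % single-input, single-output syntax (numeric input since R2023b)

% Request outputs from specific layers (layer names are hypothetical).
[Y1,Y2] = predict(net,X,Outputs=["softmax","fc"]);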

Examples


This example shows how to make predictions using a dlnetwork object by splitting data into mini-batches.

For large data sets, or when predicting on hardware with limited memory, make predictions by splitting the data into mini-batches. When making predictions with SeriesNetwork or DAGNetwork objects, the predict function automatically splits the input data into mini-batches. For dlnetwork objects, you must split the data into mini-batches manually.

Load dlnetwork Object

Load a trained dlnetwork object and the corresponding classes.

s = load("digitsCustom.mat");
dlnet = s.dlnet;
classes = s.classes;

Load Data for Prediction

Load the digits data for prediction.

digitDatasetPath = fullfile(matlabroot,'toolbox','nnet','nndemos', ...
    'nndatasets','DigitDataset');
imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true);

Make Predictions

Loop over the mini-batches of the test data and make predictions using a custom prediction loop.

Use minibatchqueue to process and manage the mini-batches of images. Specify a mini-batch size of 128. Set the ReadSize property of the image datastore to the mini-batch size.

For each mini-batch:

  • Use the custom mini-batch preprocessing function preprocessMiniBatch (defined at the end of this example) to concatenate the data into a batch and normalize the images.

  • Format the images with the dimensions 'SSCB' (spatial, spatial, channel, batch). By default, the minibatchqueue object converts the data to dlarray objects with underlying type single.

  • Make predictions on a GPU if one is available. By default, the minibatchqueue object converts the output to a gpuArray if a GPU is available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).

miniBatchSize = 128;
imds.ReadSize = miniBatchSize;

mbq = minibatchqueue(imds,...
    "MiniBatchSize",miniBatchSize,...
    "MiniBatchFcn", @preprocessMiniBatch,...
    "MiniBatchFormat","SSCB");

Loop over the mini-batches of data and make predictions using the predict function. Use the onehotdecode function to determine the class labels. Store the predicted class labels.

numObservations = numel(imds.Files);

predictions = [];

% Loop over mini-batches.
while hasdata(mbq)
    
    % Read mini-batch of data.
    dlX = next(mbq);
       
    % Make predictions using the predict function.
    dlYPred = predict(dlnet,dlX);
   
    % Determine corresponding classes.
    predBatch = onehotdecode(dlYPred,classes,1);
    predictions = [predictions predBatch];
  
end

Visualize some of the predictions.

idx = randperm(numObservations,9);

figure
for i = 1:9
    subplot(3,3,i)
    I = imread(imds.Files{idx(i)});    
    label = predictions(idx(i));
    imshow(I)
    title("Label: " + string(label))
  
end

Mini-Batch Preprocessing Function

The preprocessMiniBatch function preprocesses the data using the following steps:

  1. Extract the data from the incoming cell array and concatenate into a numeric array. Concatenating over the fourth dimension adds a third dimension to each image, to be used as a singleton channel dimension.

  2. Normalize the pixel values between 0 and 1.

function X = preprocessMiniBatch(data)    
    % Extract image data from cell and concatenate
    X = cat(4,data{:});
    
    % Normalize the images.
    X = X/255;
end

Input Arguments


Network, specified as one of these values:

  • dlnetwork object

  • TaylorPrunableNetwork object

To prune a deep neural network, you require the Deep Learning Toolbox™ Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.

Input data, specified as one of these values:

  • Numeric array (since R2023b)

  • Unformatted dlarray object (since R2023b)

  • Formatted dlarray object

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect a sequence to be represented as a t-by-c numeric array, where t and c are the number of time steps and channels of the sequence, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate this by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.

For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.
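
For example, here is a minimal sketch of passing the input as a formatted dlarray so that the layout is explicit; the sizes and format are placeholders for your own data.

X = rand(28,28,1,16,"single");   % spatial, spatial, channel, batch
X = dlarray(X,"SSCB");           % label the dimensions explicitly
Y = predict(net,X);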

Layers to extract outputs from, specified as a string array or a cell array of character vectors containing the layer names.

  • If layerNames(i) corresponds to a layer with a single output, then layerNames(i) is the name of the layer.

  • If layerNames(i) corresponds to a layer with multiple outputs, then layerNames(i) is the layer name followed by the / character and the name of the layer output: "layerName/outputName".
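
For instance, using hypothetical layer names, you could request the output of a single-output layer and one named output of a multiple-output layer like this:

% "relu_3" has a single output; "lstm" has multiple outputs in this sketch.
[actRelu,actHidden] = predict(net,X,Outputs=["relu_3","lstm/hidden"]);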

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: Y = predict(net,X,InputDataFormats="CBT") makes predictions with sequence data that has format "CBT" (channel, batch, time).

Since R2023b

Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

A data format is a string of characters, where each character describes the type of the corresponding dimension of the data.

The characters are:

  • "S" — Spatial

  • "C" — Channel

  • "B" — Batch

  • "T" — Time

  • "U" — Unspecified

For example, for an array containing a batch of sequences where the first, second, and third dimension correspond to channels, observations, and time steps, respectively, you can specify that it has the format "CBT".

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions located after the second dimension.

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell
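
As a sketch, assuming net is a sequence network that expects data in "CBT" format, you could pass a raw numeric array like this (the sizes are placeholders):

numChannels = 12;
numObservations = 8;
numTimeSteps = 100;
X = rand(numChannels,numObservations,numTimeSteps,"single");
Y = predict(net,X,InputDataFormats="CBT");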

Since R2023b

Description of the output data dimensions, specified as one of these values:

  • "auto" — If the output data has the same number of dimensions as the input data, then the predict function uses the format specified by InputDataFormats. If the output data has a different number of dimensions to the input data, then the predict function automatically permutes the dimensions of the output data so that they are consistent with the network input layers, the InputDataFormats option, or targets expected by the trainnet function.

  • Data formats, specified as a string array, character vector, or cell array of character vectors — The predict function uses the specified data formats.

A data format is a string of characters, where each character describes the type of the corresponding dimension of the data.

The characters are:

  • "S" — Spatial

  • "C" — Channel

  • "B" — Batch

  • "T" — Time

  • "U" — Unspecified

For example, for an array containing a batch of sequences where the first, second, and third dimension correspond to channels, observations, and time steps, respectively, you can specify that it has the format "CBT".

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions located after the second dimension.

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell
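
For example, here is a sketch that supplies "CBT" input and requests channel-by-batch output from a sequence-to-label network; the formats depend on your network.

scores = predict(net,X,InputDataFormats="CBT",OutputDataFormats="CB");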

Performance optimization, specified as one of these values:

  • "auto" — Automatically apply a number of optimizations suitable for the input network and hardware resources.

  • "mex" — Compile and execute a MEX function. This option is available when using a GPU only. The input data or the network learnable parameters must be stored as gpuArray objects. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

  • "none" — Disable all acceleration.

When Acceleration is "auto", the software does not generate a MEX function.

When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using new input data.

The "mex" option generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.

The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.

The "mex" option has these limitations:

  • The state output argument is not supported.

  • Only single precision is supported. The input data or the network learnable parameters must have underlying type single.

  • Networks with inputs that are not connected to an input layer are not supported.

  • Traced dlarray objects are not supported. This means that the "mex" option is not supported inside a call to dlfeval.

  • Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).

  • You cannot use MATLAB Compiler™ to deploy your network when using the "mex" option.

For quantized networks, the "mex" option requires a CUDA®-enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
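
The following is a sketch of a MEX-accelerated call, assuming a supported GPU and the required support packages are available and that X is single precision:

% Move the input data to the GPU before calling predict.
X = dlarray(gpuArray(single(X)),"SSCB");
Y = predict(net,X,Acceleration="mex");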

Output Arguments


Output data, returned as one of these values:

  • Numeric array (since R2023b)

  • Unformatted dlarray object (since R2023b)

  • Formatted dlarray object

The data type matches the data type of the input data.

Updated network state, returned as a table.

The network state is a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – State parameter name, specified as a string scalar.

  • Value – Value of state parameter, specified as a dlarray object.

Layer states contain information calculated during the layer operation and retained for use in subsequent forward passes of the layer, such as the cell state and hidden state of LSTM layers or the running statistics of batch normalization layers.

For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of that layer.

Update the state of a dlnetwork using the State property.
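
For example, a minimal sketch that carries the updated state forward between calls:

[Y,state] = predict(net,X);
net.State = state;   % store the updated state in the network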

Algorithms


Reproducibility

To provide the best performance, deep learning using a GPU in MATLAB is not guaranteed to be deterministic. Depending on your network architecture, under some conditions you might get different results when using a GPU to train two identical networks or make two predictions using the same network and data.

Extended Capabilities

Version History

Introduced in R2019b
